Schwann cell endosome CGRP signals elicit periorbital mechanical allodynia in mice

Efficacy of monoclonal antibodies against calcitonin gene-related peptide (CGRP) or its receptor (calcitonin receptor-like receptor/receptor activity-modifying protein-1, CLR/RAMP1) implicates peripherally released CGRP in migraine pain. However, the site and mechanism of CGRP-evoked peripheral pain remain unclear. By cell-selective RAMP1 gene deletion, we reveal that CGRP released from mouse cutaneous trigeminal fibers targets CLR/RAMP1 on surrounding Schwann cells to evoke periorbital mechanical allodynia. CLR/RAMP1 activation in human and mouse Schwann cells generates long-lasting signals from endosomes that evoke cAMP-dependent formation of NO. NO, by gating Schwann cell transient receptor potential ankyrin 1 (TRPA1), releases ROS, which in a feed-forward manner sustain allodynia via nociceptor TRPA1. When encapsulated into nanoparticles that release cargo in acidified endosomes, a CLR/RAMP1 antagonist provides superior inhibition of CGRP signaling and allodynia in mice. Our data suggest that the CGRP-mediated neuronal/Schwann cell pathway mediates allodynia associated with neurogenic inflammation, contributing to the algesic action of CGRP in mice.

For almost a century it has been known that cutaneous tissue injury elicits a local vascular response, referred to as neurogenic inflammation, that is associated with a wider area of increased sensitivity to mechanical stimuli 1. A subset of C-fiber primary afferents, which mediate neurogenic inflammation, is the main source of the neuropeptides substance P (SP) and calcitonin gene-related peptide (CGRP) 2,3. In rodents, noxious stimuli such as capsaicin, a pungent agonist of the transient receptor potential vanilloid 1 (TRPV1) channel 4, evoke the peripheral release of CGRP, which induces arteriolar vasodilatation 2, and of SP, which elicits plasma protein extravasation 5, and produce sensory responses that encompass acute nociception and prolonged mechanical allodynia 6. Capsaicin administration to the human skin elicits a similar pattern of responses, consisting of local cutaneous vasodilatation and focal, transient burning pain (minutes) associated with widespread, sustained mechanical hypersensitivity (hours) 7. While CGRP has been identified as the mediator of neurogenic vasodilatation in rodents 2 and humans 8, the cellular and molecular mechanisms underlying mechanical allodynia associated with neurogenic inflammation are unknown. Mechanistic studies in animal models and humans have highlighted the role of CGRP in migraine pain 9. Thus, small-molecule antagonists of the CGRP receptor and monoclonal antibodies against CGRP or its receptor can relieve migraine pain 10. The poor blood-brain barrier penetration of some small-molecule antagonists 11,12 and of monoclonal antibodies 13,14 suggests a peripheral contribution to CGRP-mediated migraine pain. However, little is known about the proalgesic actions of CGRP in the periphery. In mice, intraplantar injection of CGRP evokes mechanical allodynia 15 and systemic CGRP causes facial grimace 16. Periorbital CGRP injection, while failing to evoke spontaneous nociceptive behavior, produces sustained (~4 h) periorbital mechanical allodynia (PMA) 17. CGRP released from trigeminal peripheral terminals mediates PMA in mice 18 evoked by systemic (intraperitoneal) administration of the pro-headache agent glyceryl trinitrate (GTN) 19. Facial cutaneous allodynia is one component of the migraine attack 20,21.
Although the process that initiates migraine pain may originate in the central nervous system (CNS) 22,23, the cell type and signaling pathway by which CGRP acts in the periphery to cause pain are unknown. The CGRP receptor is a heterodimer of calcitonin receptor-like receptor (CLR), a G protein-coupled receptor (GPCR), and receptor activity-modifying protein 1 (RAMP1), a single transmembrane domain CLR chaperone 24. These two components coexist in cells that mediate the actions of CGRP, for example, vascular myocytes 2. Satellite glial cells and Schwann cells express CLR/RAMP1 and are closely associated with peptidergic sensory neurons 25. While the extracellular space between the soma of trigeminal neurons and satellite glial cells is not a recognized locus for neurotransmission, the varicosities of C-fibers and the ensheathing Schwann cells are sites where neuropeptides, including CGRP 26, are normally released. Schwann cells from rat sciatic nerve respond to CGRP by increasing intracellular cAMP levels 27, and CLR/RAMP1 is expressed by Schwann cells that wrap CGRP+ve terminals of rat nociceptors 25,28,29. Schwann cells mediate mechanical allodynia in mouse models of neuropathic and cancer pain 30,31. Cutaneous Schwann cells can also directly activate sensory nerves to promote mechanical nociception 32. Although GPCRs are usually considered to signal principally from the plasma membrane, GPCR kinases and β-arrestins (βARRs) rapidly terminate this signaling. Persistent endosomal signaling of GPCRs, including CLR/RAMP1, underlies sustained neuronal activation and nociception in the CNS 33-35. Herein, we hypothesized that mechanical allodynia associated with neurogenic inflammation is mediated by CGRP, which targets CLR/RAMP1 in Schwann cells ensheathing peripheral endings of nociceptors. By selective RAMP1 gene deletion in Schwann cells, we reveal that CGRP released from trigeminal terminals causes PMA by paracrine signaling to the surrounding Schwann cells. We also hypothesized that persistent CGRP/CLR/RAMP1 signaling from endosomes in Schwann cells underlies sustained PMA. By using inhibitors of clathrin- and dynamin-mediated endocytosis and stimulus-responsive nanoparticles designed to release CLR/RAMP1 antagonists in acidified endosomes, we found that CLR/RAMP1 endosomal signaling results in cAMP-dependent release of nitric oxide (NO), which activates transient receptor potential ankyrin 1 (TRPA1), a proalgesic channel and sensor of oxidative stress 36.

Results
CGRP evokes PMA by activating Schwann cell CLR/RAMP1. We detected CLR and RAMP1 mRNA and immunoreactivity in primary cultures of human Schwann cells (HSCs) and mouse Schwann cells (MSCs) taken from the sciatic or trigeminal nerve (Fig. 1a, Supplementary Fig. 1a). The S100+ve mouse Schwann cell line (IMS32) recapitulated features of primary MSCs, including expression of CLR and RAMP1 mRNA and immunoreactivity (Supplementary Fig. 1a, b) and a TRPA1-dependent Ca2+ response to allyl isothiocyanate (Supplementary Fig. 1c). Immunoreactive CLR and RAMP1 were also detected in S100+ve Schwann cells in nerve bundles in biopsies of human abdominal and mouse periorbital skin (Fig. 1b). Intravenous CGRP provokes delayed headache attacks in patients 37. Intraperitoneal CGRP caused PMA and paw allodynia in male and female C57BL/6J mice without sex differences (Fig. 1e, Supplementary Fig. 1g).
In Plp-CreERT+;Ramp1fl/fl mice treated with periorbital 4-OHT, PMA, but not paw allodynia, was similarly reduced in males and females in response to intraperitoneal CGRP (Fig. 1f, g and Supplementary Fig. 1h, i). Systemic (intraperitoneal) 4-OHT reduced both PMA and paw allodynia evoked by intraperitoneal CGRP (Supplementary Fig. 1j, k). These results reveal an essential role for CLR/RAMP1 of Schwann cells surrounding periorbital trigeminal endings in PMA elicited by local and systemic CGRP. To investigate the contribution of endocytosis to the activation of Gα proteins and βARRs in endosomes, we preincubated cells with hypertonic sucrose. CGRP increased EbBRET between hCLR-Rluc8 and tdRGFP-Rab5a in HEK-hCLR/RAMP1 cells, consistent with CLR endocytosis (Fig. 5i). Hypertonic sucrose inhibited these changes, which indicates an inhibition of endocytosis (Fig. 5i). Hypertonic sucrose caused a delayed yet more sustained activation of Rluc8-mGαs, Rluc8-mGαsq, Rluc8-mGαsi, and Rluc2-βARR2 at the plasma membrane (Supplementary Fig. 4a-f), and an almost complete inhibition of activation of Rluc8-mGαs, Rluc8-mGαsq, Rluc8-mGαsi and Rluc2-βARR2 in endosomes (Fig. 5j, k). Sucrose similarly delayed CGRP-induced recruitment of Rluc8-mGαs, Rluc8-mGαsq and Rluc8-mGαsi. To examine the contribution of endosomal CLR/RAMP1 signaling to CGRP-induced cAMP formation, we preincubated HSCs expressing the cADDis cAMP reporter with sucrose or vehicle. In vehicle-treated cells, CGRP stimulated a rapid (1 min) increase in cAMP formation that was sustained for 30 min (Fig. 5n, o). Sucrose reduced but did not abolish the initial response, yet strongly inhibited the sustained phase of CGRP-stimulated cAMP formation (Fig. 5n, o). Thus, CGRP initially activates Gα and βARR at the plasma membrane, which is followed by sustained activation of Gα and βARR in early endosomes. Endocytosis is necessary for the recruitment of Gα and βARR to endosomes. Gαs continues to signal in endosomes, leading to sustained cAMP formation.

CLR/RAMP1 activation in Schwann cells releases NO, which initiates but does not sustain PMA. We investigated the mechanisms that sustain PMA following CLR/RAMP1 activation and endocytosis in Schwann cells. Pre- but not post-treatment (60 min after CGRP or capsaicin) with the CLR/RAMP1 antagonists olcegepant or CGRP8-37 attenuated PMA evoked by capsaicin and, in accordance with previous studies 17,18, PMA evoked by CGRP (Supplementary Fig. 5a-f). Similarly, inhibitors of clathrin- and dynamin-mediated endocytosis had no effect when administered after CGRP or capsaicin. Although NO can release CGRP with proalgesic functions, the contribution of NO to CGRP-evoked allodynia is uncertain. Pretreatment with an NO synthase (NOS) inhibitor (L-NAME) or an NO scavenger (cPTIO) (Fig. 6a) abrogated CGRP-evoked PMA (Fig. 6b, c). L-NAME and cPTIO pretreatment also attenuated capsaicin-evoked PMA (Fig. 6d, e). However, L-NAME and cPTIO did not affect PMA when administered 60 min after CGRP (Supplementary Fig. 5o, p) or capsaicin (Supplementary Fig. 5q, r). Thus, PKA-dependent NO release 39 is necessary to initiate, but is not sufficient to sustain, CGRP-evoked allodynia. In vitro findings recapitulated in vivo results. HSCs, MSCs, and IMS32 cells predominantly expressed NOS3 (eNOS) mRNA, with little or no expression of NOS1 and NOS2 (nNOS and iNOS, respectively) mRNA (Fig. 6f, g; Supplementary Fig. 5s).
In both HSCs and IMS32 cells, CGRP elicited a transient increase in NOS3 phosphorylation (i.e., activation), consistent with NO generation, which peaked at 5-10 min and declined within 30-60 min (Fig. 6h), and a cAMP increase that was prevented by olcegepant, CGRP8-37 and an adenylyl cyclase inhibitor (SQ22536), but not by L-NAME (Fig. 6i). The increase in cAMP evoked by CGRP, but not that elicited by forskolin, was reduced in cultured MSCs obtained from Plp-CreERT+;Ramp1fl/fl mice as compared to control mice treated with intraperitoneal 4-OHT (Fig. 6j). In contrast, the CGRP-evoked increase in NO was attenuated by all these interventions, including NOS inhibition (Fig. 6k). The cAMP increase evoked by forskolin was unaffected by CLR/RAMP1 antagonism and NOS inhibition, and olcegepant failed to inhibit NO release by the NO donor NONOate (Supplementary Fig. 5t, u), indicating selectivity. NO release evoked by CGRP, but not that evoked by NONOate, was inhibited by PS2 and Dy4, but not by their inactive analogs (Supplementary Fig. 5v), further supporting selectivity. These results suggest that clathrin- and dynamin-dependent endocytosis and endosomal CLR/RAMP1 signaling evoke NOS activation and NO generation in Schwann cells.

Schwann cell TRPA1 mediates CGRP-evoked PMA. NO belongs to a family of reactive species, including reactive oxygen species (ROS), that target TRPA1 40. TRPA1 is coexpressed with TRPV1 and CGRP in a subpopulation of primary sensory neurons 41. TRPA1 is expressed in Schwann cells of nerve bundles of human skin and mouse sciatic nerve, where it mediates mechanical allodynia in rodent models of pain 30,42. Immunoreactive TRPA1 was coexpressed with RAMP1 in S100+ve Schwann cells in human abdominal and mouse periorbital cutaneous nerve bundles (Fig. 7a). Thus, CLR/RAMP1 might engage signaling pathways that activate TRPA1 in trigeminal Schwann cells to initiate allodynia (Fig. 7b). This hypothesis was supported by the observation that both CGRP- and capsaicin-evoked PMA were reduced in Trpa1−/− mice and in mice with sensory neuron-specific deletion of TRPA1 (Adv-Cre+;Trpa1fl/fl) (Fig. 7c, d and Supplementary Fig. 6a, b). We next investigated the signaling pathway by which CLR/RAMP1 activates TRPA1. In HSCs and IMS32 cells, CGRP stimulated a slowly developing yet sustained increase in Ca2+ response (Fig. 7e, f) and increased H2O2 levels (Supplementary Fig. 6c). Olcegepant, CGRP8-37, SQ22536, H89, L-NAME, Ca2+-free medium, a ROS scavenger (PBN) or a NOX1 inhibitor (ML171) attenuated Ca2+ responses (Fig. 7e, f) and H2O2 levels (Supplementary Fig. 6c). A TRPA1 antagonist (A967079) inhibited CGRP-stimulated Ca2+ (Fig. 7e, f) and H2O2 responses (Supplementary Fig. 6c) but did not affect CGRP-stimulated NO formation (Fig. 7g). CGRP-evoked Ca2+ responses were reduced in Schwann cells from Trpa1−/− mice (Supplementary Fig. 6d). These results support the hypothesis that CGRP liberates NO, which activates Schwann cell TRPA1; activated TRPA1 promotes Ca2+-dependent H2O2 generation that sustains a feed-forward mechanism comprising TRPA1 channel engagement and ROS release. In vivo results corroborated this hypothesis. Whereas CLR/RAMP1 antagonists or NO inhibitors attenuated PMA only if given before CGRP or capsaicin, both pre- and post-treatment with a TRPA1 antagonist, a ROS scavenger and a NOX1 inhibitor reduced PMA (Fig. 7h-j; Supplementary Fig. 6e-g).
Although pretreatment with TRPA1 or ROS inhibitors did not affect acute nociception, they inhibited capsaicin-evoked PMA (Supplementary Fig. 6h-j). Post-treatment also attenuated capsaicin-evoked PMA (Supplementary Fig. 6h-j). These findings highlight the mechanistic differences between acute nociception and delayed PMA. After an initial and transient NO-dependent phase, PMA is sustained by persistent ROS liberation, which targets TRPA1 in Schwann cells. This hypothesis is robustly supported by the observation that PMA evoked by CGRP or capsaicin was markedly attenuated in mice with selective deletion of TRPA1 in Schwann cells (Plp-CreERT+;Trpa1fl/fl) (Fig. 7k; Supplementary Fig. 6k).

Targeting endosomal CGRP signaling provides superior relief of CGRP- and capsaicin-evoked PMA. The finding that persistent GPCR signaling from endosomes mediates pain transmission suggests that GPCRs in endosomes, rather than at the plasma membrane, are a valid and perhaps superior target for the treatment of pain 33-35. Nanoparticles have been used to deliver chemotherapeutics to tumors, where endocytosis and endosomal escape are necessary for drug delivery to cytosolic and nuclear targets 43. The realization that GPCRs within endosomes are a therapeutic target raises the possibility of exploiting the acidic microenvironment of endosomes as a stimulus for nanoparticle disassembly and release of antagonist cargo 34. To determine whether DIPMA-MK-3207 can antagonize CLR in endosomes, we measured CGRP-stimulated cAMP formation using the CAMYEL cAMP BRET sensor, which detects total cellular cAMP. HEK293T cells expressing rat CLR/RAMP1 (HEK-rCLR/RAMP1) were preincubated with graded concentrations of DIPMA-MK-3207, free MK-3207, DIPMA-Ø or vehicle (control) for 30 min. Beginning at 0 min, baseline BRET was measured for 5 min, and cells were then challenged with CGRP. At 10 min, cells were washed to remove extracellular CGRP, and BRET was measured up to 35 min. In vehicle-treated cells, CGRP stimulated a prompt increase in cAMP formation (1st phase, 6-10 min) that gradually declined after agonist removal from the extracellular fluid (2nd phase, 11-35 min) (Fig. 8e). DIPMA-Ø did not affect this response. Free MK-3207 and DIPMA-MK-3207 (100, 316 nM) both inhibited CGRP-evoked cAMP in the 1st phase to a similar extent (Fig. 8f). During the 2nd phase, free MK-3207 was inactive at all concentrations, whereas DIPMA-MK-3207 (31.6, 100, 316 nM) strongly inhibited responses (Fig. 8g). These results suggest that DIPMA-MK-3207 antagonizes CLR/RAMP1 signaling from endosomes more effectively than the free antagonist. To assess the antagonism of the pain signaling pathway in HSCs, we measured CGRP-evoked changes in Ca2+ response, which depend on endosomal CGRP signaling and activation of TRPA1. HSCs were preincubated with graded concentrations of DIPMA-MK-3207 or MK-3207 for 20 min to allow accumulation in endosomes, and were then washed to remove extracellular compounds. At 10 min after washing, cells were challenged with CGRP and the Ca2+ response was measured as an index of TRPA1 activity. DIPMA-MK-3207 inhibited the CGRP-evoked increase in Ca2+ (Fig. 8i). DIPMA-Ø had no effect. Thus, endosomal targeting enhances the efficacy of a CLR/RAMP1 antagonist in a preclinical model of migraine pain.

Discussion
The major findings of the present study are that CGRP causes PMA by activating CLR/RAMP1 of Schwann cells, that CLR/RAMP1 signals from endosomes of Schwann cells to activate pain pathways, and that endosomal CLR/RAMP1 can be targeted using nanoparticles and endocytosis inhibitors to relieve CGRP-evoked PMA.
CLR/RAMP1 stimulation and trafficking to endosomes result in persistent cAMP-dependent NOS activation and generation of NO, a mediator of migraine pain 19. The role of NO in PMA is crucial, yet transient, as it is temporally limited to the engagement of TRPA1/NOX1, which releases ROS with a dual function. On one hand, ROS target TRPA1/NOX1 of Schwann cells to maintain ROS generation by a feed-forward mechanism. On the other hand, as suggested by experiments with selective TRPA1 deletion in primary sensory neurons, ROS target TRPA1 on nociceptors to signal allodynia to the CNS. Periorbital capsaicin injection elicited acute nociception mediated by TRPV1 excitation and ensuing afferent discharge, which signals pain to the CNS. In a larger cutaneous area, capsaicin evoked delayed and prolonged PMA. While the acute pain response is most likely dependent on ion influx associated with TRPV1 activation, the mechanism underlying mechanical hypersensitivity 7,44 has remained elusive. Our findings support the existence of a paracrine mechanism that underlies PMA associated with neurogenic inflammation. We suggest that capsaicin locally activates TRPV1+ve nerve fibers to generate action potentials that propagate antidromically into collateral fibers, which release CGRP in a broader area, thus eliciting widespread PMA. PMA depends on the interaction between peptidergic nerve fibers, surrounding Schwann cells and nociceptive neurons that convey allodynic signals to the CNS. CGRP liberated from the varicosities of trigeminal TRPV1+ve nerve fibers binds to CLR/RAMP1 of adjacent Schwann cells. CNS perturbations may target the trigeminovascular system and initiate the migraine attack 22,23. These central mechanisms may underlie the delayed facial allodynia associated with migraine 20,21. However, the beneficial effect of anti-CGRP medicines that do not cross the blood-brain barrier suggests that CGRP acts in the periphery to elicit pain. The peripheral site of the algesic action of CGRP released from peptidergic C-fibers has been proposed to be CLR/RAMP1 on adjacent non-peptidergic Aδ-fibers 45 and, more precisely, at the level of the node of Ranvier 28. The present results in Adv-Cre+;Ramp1fl/fl mice suggest that CGRP does not act on trigeminal nociceptors to cause PMA in mice. This is consistent with the failure of CGRP administration to elicit any itch, pain or axon reflex responses in humans 46. Instead, our results support the hypothesis that CGRP released from trigeminal nociceptors targets CLR/RAMP1 on Schwann cells that wrap their terminals to evoke PMA. A limitation of our study is that we only assessed PMA, which mimics one component of migraine pain. We cannot exclude the possibility that central mechanisms contribute to other pain symptoms of migraine, and that some of the locally administered antagonists used in the present study penetrated the CNS, where they could also influence pain transmission. Although human Schwann cells express CLR/RAMP1 and show functional responses to CGRP that can account for allodynia in mice, further work is needed to understand whether similar mechanisms account for migraine pain in humans. Another limitation of the present study is that we cannot pinpoint which type of neuron conveys the signals that underlie mechanical allodynia in the trigeminal region.
Specifically, we were unable to distinguish between TRPV1-expressing nerve fibers that release CGRP and TRPA1-expressing nerve fibers that are targeted by Schwann cell ROS and convey allodynic signals centrally, since TRPV1 and TRPA1 may coexist in the same population of CGRP-expressing Aδ- or C-fiber primary sensory neurons 2,41. Most Schwann cells in Remak bundles contain multiple unmyelinated axons from C-fiber nociceptors, including CGRP+ve fibers, which release the bulk of CGRP 2, and non-peptidergic isolectin B4+ve fibers 47. Thus, CGRP-evoked release of ROS from Schwann cells could induce allodynia by targeting TRPA1 on three neuronal subtypes: the same Aδ- or C-fiber that releases CGRP, a different C-fiber of the same Remak bundle, or a different adjacent Aδ-fiber. The observation that both C-fiber and Aδ-fiber nociceptors contribute to capsaicin-evoked hypersensitivity in humans 48 supports the hypothesis that both types of neurons 28,45 are implicated in CGRP-mediated allodynia, thus highlighting the complex neural transmission of mechanical allodynia associated with neurogenic inflammation. CLR/RAMP1 signals from endosomes by G-protein-mediated mechanisms that activate a subset of compartmentalized signals, including cytosolic protein kinase C and nuclear extracellular signal-regulated kinase; these kinases regulate excitation of spinal neurons and pain transmission 35. Our results show that CLR/RAMP1 activates Gαs, Gαq and Gαi and recruits βARRs in endosomes of Schwann cells, as determined by EbBRET. Inhibitors of clathrin- and dynamin-mediated endocytosis blocked the recruitment of CLR/RAMP1, Gα and βARR to endosomes, which presumably requires CLR/RAMP1 endocytosis. GPCR/Gα signaling complexes have also been detected in endosomes by using conformationally selective nanobodies 49. The observation that endocytosis inhibitors attenuated CGRP-stimulated cAMP formation and activation of NOS and TRPA1 reveals a central role for CLR/RAMP1 signaling in endosomes of Schwann cells in CGRP-evoked periorbital pain. Endocytosis of other Gs-coupled GPCRs is also necessary for the full repertoire of cAMP-mediated signaling outcomes, which entails endosomal recruitment of adenylyl cyclase 9 50 and assembly of metastable accumulations of PKA 51. We found that a nanoparticle-encapsulated CLR/RAMP1 antagonist, which targeted CLR/RAMP1 in endosomes and released cargo in the acidified endosomal microenvironment 34, also attenuated CGRP-stimulated cAMP formation and blunted TRPA1 activation. The observation that periorbital injection of inhibitors of clathrin and dynamin and of DIPMA-MK-3207 prevented CGRP- and capsaicin-evoked PMA provides evidence for a prominent role of endosomal CGRP signaling of pain from a peripheral site. The finding that nanoparticle encapsulation enhanced the potency of a CGRP antagonist for inhibition of endosomal signaling and resultant nociception supports the hypothesis that CLR/RAMP1 in endosomes mediates facial allodynia, which contributes to migraine pain. Nanoparticle encapsulation similarly boosts the efficacy of an NK1 receptor antagonist in preclinical models of inflammatory and neuropathic pain 34. An antagonist of CLR/RAMP1 conjugated to the membrane lipid cholestanol also accumulates in endosomes and provides superior relief from pain 35, which reinforces the importance of CLR/RAMP1 endosomal signaling for pain transmission.
Limitations of the present study include uncertainty about the nature of the CLR/RAMP1 signaling complex in endosomes of Schwann cells, which warrants further investigation by proteomics approaches. Although some of the pharmacological inhibitors used to dissect the signaling pathway can have nonspecific actions, we bolstered confidence in selectivity by using several inhibitors of the same pathway and by genetic deletion of GPCRs and TRP channels. Our findings reveal a prominent role for CLR/RAMP1 in Schwann cells in CGRP-evoked periorbital pain. Future studies will investigate the role of this pathway in preclinical models of migraine pain. Monoclonal antibodies to CGRP, although beneficial, are not effective in all patients 10. While non-CGRP-dependent mechanisms might explain this failure 52, monoclonal antibodies likely do not inhibit CGRP signaling in endosomes. The small-molecule CLR/RAMP1 antagonist rimegepant was found to resolve migraine attacks in patients treated with the anti-CLR/RAMP1 monoclonal antibody erenumab 53. This unexpected result was attributed to the inherent membrane permeability of the lipophilic antagonist rimegepant 54, which might favor inhibition of CGRP signaling in endosomes 53, whereas neither receptor-targeted nor ligand-targeted monoclonal antibodies internalize with CLR/RAMP1 activated by CGRP 55. Our results showing superior inhibition of CGRP signaling in Schwann cells and of PMA by DIPMA-MK-3207, which selectively targets receptor activity in endosomes, reveal a better approach to control allodynia. In 1936, Sir Thomas Lewis postulated 1 that in human skin action potentials are carried antidromically from the injured nerve terminal to collateral branches, from where a chemical substance is released that produces the flare and increases the sensitivity of other fibers responsible for pain. CGRP has previously been identified as the mediator of neurogenic vasodilatation in rodents 2 and in humans 8. Herein, we propose that CGRP is the 'chemical substance' that, via the essential role of endosomal CLR/RAMP1, TRPA1/NOX1 and oxidative stress of surrounding Schwann cells, sustains the enhanced sensitivity of primary sensory neurons associated with neurogenic inflammation (Fig. 9). The present results suggest that peripherally acting anti-CGRP medicines reduce migraine pain in part by targeting the facial allodynia that originates from CGRP-mediated endosomal signaling in Schwann cells.

Methods
The group size of n = 8 animals for behavioral experiments was determined by sample size estimation using G*Power (v3.1) 60 to detect an effect size in a post-hoc test with type 1 and type 2 error rates of 5% and 20%, respectively. Mice were allocated to vehicle or treatment groups using a randomization procedure (http://www.randomizer.org/). Four independent and blinded investigators performed the treatments, behavioral experiments, genotyping and data analysis, respectively. No animals were excluded from experiments. The behavioral studies followed the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines 61. Mice were housed in a temperature- (20 ± 2 °C) and humidity- (50 ± 10%) controlled vivarium (12 h dark/light cycle, free access to food and water, five animals per cage). At least 1 h before behavioral experiments, mice were acclimatized to the experimental room, and behavior was evaluated between 9:00 am and 5:00 pm.
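As an illustration of the sample-size estimation described above, the following sketch reproduces the same kind of calculation in Python with statsmodels rather than G*Power; the two-group t-test framing and the example effect size are illustrative assumptions, since the original analysis used ANOVA with post-hoc comparisons.

```python
# Illustrative sketch only: solves the same power relationship reported above
# (alpha = 0.05, power = 0.80, n = 8 per group) using statsmodels instead of
# G*Power. The two-sample t-test framing is an assumption for simplicity.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect size (Cohen's d) detectable with n = 8 per group
detectable_d = analysis.solve_power(effect_size=None, nobs1=8, alpha=0.05, power=0.80)
print(f"Detectable effect size with n = 8/group: d ~ {detectable_d:.2f}")

# Conversely, the group size needed to detect a given (assumed) effect size
n_per_group = analysis.solve_power(effect_size=1.5, nobs1=None, alpha=0.05, power=0.80)
print(f"Mice per group for d = 1.5: n ~ {n_per_group:.1f}")
```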
All procedures were conducted following current guidelines for laboratory animal care and the ethical guidelines for investigations of experimental pain in conscious animals set by the International Association for the Study of Pain 62. Animals were anesthetized with a mixture of ketamine and xylazine (90 mg/kg and 3 mg/kg, respectively, i.p.) and euthanized with inhaled CO2 plus 10-50% O2.

Behavioral experiments
Treatment protocol. Subcutaneous injections were made in the periorbital area 2-3 mm from the external eyelid corner 17. Briefly, the mouse was lifted by the base of the tail, placed on a solid surface with one hand, and the tail was pulled back. It was then quickly and firmly picked up by the scruff of the neck with the thumb and index finger of the other hand. The injection was made rapidly by a single operator with minimal animal restraint. Mice received unilateral (right side) injections (10 μl/site) of CGRP (1.5 nmol in 0.9% NaCl), SP (3.5 nmol in 0.9% NaCl), capsaicin (10, 50, 100 pmol in 0.1% dimethyl sulfoxide, DMSO), or vehicles (control). Mice received bilateral injections (10 µl/site, right side at the same site as the stimulus, left side symmetrical to the right side) of antagonists and inhibitors. CGRP (1.5 nmol in 0.9% NaCl) or vehicle was also administered by intraplantar (i.pl., 20 μl/site) or systemic (0.1 mg/kg, i.p.) injection. GTN (10 mg/kg) was administered by i.p. injection.

Acute nociception. Immediately after the periorbital (p.orb.) injection, mice were placed inside a plexiglass chamber and spontaneous nociception was assessed for 10 min by measuring the time (s) that the animal spent rubbing the injected area of the face with its paws 17,65.

Periorbital mechanical allodynia. PMA was assessed using the up-down paradigm 66,67 in the same mice in which acute nociceptive responses were monitored. Briefly, mice were placed in a restraint apparatus designed for the evaluation of periorbital mechanical thresholds 17. One day before the first behavioral observation, mice were habituated to the apparatus. PMA was evaluated in the periorbital region over the rostral portion of the eye (i.e., the area of the periorbital region facing the sphenoidal rostrum) 68 before (basal threshold) and after (0.5, 1, 2, 4, 6, 8 h) treatments. On the day of the experiment, after 20 min of adaptation inside the chamber, a series of 7 von Frey filaments in logarithmic increments of force (0.02, 0.04, 0.07, 0.16, 0.4, 0.6, and 1.0 g) were applied to the periorbital area perpendicular to the skin, with sufficient force to cause slight buckling, and held for approximately 5 s to elicit a positive response. Mechanical stimuli were applied homolaterally, outside the injection site, at a distance of 6-8 mm from the site where stimuli were injected. The response was considered positive by the following criteria: the mouse vigorously stroked its face with the forepaw, withdrew its head from the stimulus, or shook its head. Mechanical stimulation started with the 0.16 g filament. The absence of a response after 5 s led to the use of a filament with increased force, whereas a positive response led to the use of a weaker (i.e., lighter) filament. Six measurements were collected for each mouse, or until four consecutive positive or negative responses occurred. The 50% mechanical withdrawal threshold (expressed in g) was then calculated from these scores by using a previously determined δ value of 0.205.
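For readers unfamiliar with the up-down method, a minimal sketch of the 50% threshold calculation is given below. It follows the Dixon/Chaplan formula, threshold = 10^(Xf + kδ), where Xf is the log10 value of the final filament force and k is a tabulated value for the observed response pattern; only δ = 0.205 is taken from the text above, and the example Xf and k values are purely illustrative.

```python
# Minimal sketch of the Dixon/Chaplan up-down 50% threshold calculation.
# Only delta = 0.205 comes from the text above; x_final and k below are
# illustrative placeholders (k is normally read from Dixon's published table
# according to the observed sequence of positive/negative responses).
import math

def fifty_percent_threshold(x_final: float, k: float, delta: float = 0.205) -> float:
    """Return the 50% withdrawal threshold in grams.

    x_final : log10 of the force (g) of the last von Frey filament applied
    k       : tabulated correction factor for the response pattern
    delta   : mean difference (log units) between successive filaments
    """
    return 10 ** (x_final + k * delta)

# Example: last filament 0.16 g, hypothetical k of -0.5 for the response pattern
x_final = math.log10(0.16)
print(f"50% threshold ~ {fifty_percent_threshold(x_final, k=-0.5):.3f} g")
```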
Paw mechanical allodynia was evaluated by measuring the paw withdrawal threshold using the up-down paradigm 66,67. Mice were acclimatized (1 h) in individual clear plexiglass boxes on an elevated wire mesh platform to allow access to the plantar surfaces of the hind paws. von Frey filaments of increasing stiffness (0.07, 0.16, 0.4, 0.6, 1.0, 1.4 and 2 g) were applied to the hind paw plantar surfaces with enough pressure to bend the filament. The absence of paw lifting after 5 s led to the use of the next filament with increased force, whereas a lifted paw indicated a positive response, leading to the use of a subsequently weaker filament. Six measurements were collected for each mouse, or until four consecutive positive or negative responses occurred. The 50% mechanical withdrawal threshold (expressed in g) was then calculated.

Primary culture of mouse Schwann cells. Mouse Schwann cells (MSCs) were isolated from sciatic or trigeminal nerves of C57BL/6J mice, and from sciatic nerves of Trpa1+/+ and Trpa1−/−, Plp1-CreERT+;Ramp1fl/fl and Plp1-CreERT−;Ramp1fl/fl mice 30,69. The epineurium was removed, and nerve explants were divided into 1 mm segments and dissociated enzymatically using collagenase (0.05%) and hyaluronidase (0.1%) in Hank's Balanced Salt Solution (HBSS, 2 h, 37 °C). Cells were collected by centrifugation (150 × g, 10 min, room temperature), and the pellet was resuspended and cultured in DMEM containing fetal calf serum (10%), L-glutamine (2 mM), penicillin (100 U/ml), streptomycin (100 mg/ml), neuregulin (10 nM) and forskolin (2 μM). Three days later, cytosine arabinoside (Ara-C, 10 mM) was added to remove fibroblasts. Cells were cultured at 37 °C in 5% CO2 and 95% O2. The culture medium was replaced every 3 days and cells were used after 15 days of culture.

Fig. 9 Schematic representation of the pathway that signals the prolonged cutaneous allodynia elicited by released CGRP and associated with neurogenic inflammation. The pro-migraine neuropeptide CGRP, released from trigeminal cutaneous afferents, activates CLR/RAMP1 on Schwann cells. CLR/RAMP1 traffics to endosomes, where sustained G protein signaling increases cAMP and stimulates PKA, resulting in nitric oxide synthase activation. The ensuing release of nitric oxide targets the oxidant-sensitive channel TRPA1 in Schwann cells, which elicits persistent ROS generation. ROS trigger TRPA1 on adjacent C- (1) or Aδ-fiber (2) afferents, resulting in periorbital allodynia, a hallmark of migraine pain. The inset shows several unmyelinated axons invaginated into a Schwann cell, forming a Remak bundle.

qRT-PCR. Total RNA was extracted from HSCs, IMS32 cells and sciatic or trigeminal MSCs using the RNeasy Mini kit (Qiagen SpA), according to the manufacturer's protocol. RNA concentration and purity were assessed spectrophotometrically by measuring the absorbance at 260 nm and 280 nm. RNA was reverse transcribed with the Qiagen QuantiTect Reverse Transcription Kit (Qiagen SpA) following the manufacturer's protocol. For relative quantification of mRNA, real-time PCR was performed on a Rotor-Gene Q (Qiagen SpA, Rotor-Gene Q Software version 2.3.1.49). The relative abundance of mRNA transcripts was calculated using the delta Ct method and normalized to GAPDH levels. The primer sets for human and mouse cells are listed in Supplementary Table 3.
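The delta Ct normalization mentioned above reduces to a one-line calculation; the sketch below shows it with made-up Ct values. The fold-change-versus-control (ΔΔCt) step is included for completeness and is an assumption, since the text only states normalization to GAPDH.

```python
# Sketch of delta-Ct relative quantification as described above.
# Ct values below are invented for illustration only.
def relative_abundance(ct_target: float, ct_gapdh: float) -> float:
    """Expression of the target relative to GAPDH: 2^-(Ct_target - Ct_GAPDH)."""
    return 2 ** -(ct_target - ct_gapdh)

# Example: a hypothetical target transcript vs GAPDH in treated and control samples
treated = relative_abundance(ct_target=26.4, ct_gapdh=18.1)
control = relative_abundance(ct_target=27.9, ct_gapdh=18.0)

print(f"Relative abundance (treated): {treated:.4f}")
print(f"Relative abundance (control): {control:.4f}")
print(f"Fold change vs control (delta-delta Ct): {treated / control:.2f}")
```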
In-cell ELISA assay. HSCs or IMS32 cells were plated in 96-well black-wall, clear-bottom plates (Corning Life Sciences) (5 × 10^5 cells/well) and maintained at 37 °C in 5% CO2 and 95% O2 for 24 h. HSCs and IMS32 cells were exposed to CGRP (1 and 10 μM, respectively) or its vehicle (phosphate-buffered saline, PBS) for 5, 10, 15, 30 and 60 min at 37 °C, then washed with DMEM pH 2.5 and fixed in 4% paraformaldehyde for 30 min. Cells were then washed with TBST (0.05%), blocked with donkey serum (5%) for 4 h at room temperature and incubated overnight at 4 °C with an anti-eNOS pS1177 antibody (#ab184154, rabbit polyclonal, 1:100, Abcam, Lot: GR3257047-9). Cells were then washed and incubated with donkey anti-rabbit IgG conjugated with horseradish peroxidase (HRPO, 1:2000, Bethyl Laboratories Inc.) for 2 h at room temperature. Cells were then washed and stained using SIGMAFAST OPD for 30 min, protected from light. After the incubation period, the absorbance was measured at 450 nm. The change in NOS3 phosphorylation was calculated as a percentage of the signal in vehicle-treated cells.

cAMP ELISA assay. cAMP levels were determined with the CatchPoint cyclic-AMP fluorescent assay kit (#R8088, Molecular Devices) according to the manufacturer's protocol. Briefly, HSCs or IMS32 cells were plated in 96-well black-wall, clear-bottom plates (Corning Life Sciences) (5 × 10^5 cells/well) and maintained in 5% CO2 and 95% O2 (24 h, 37 °C). The culture medium was replaced with HBSS containing olcegepant (100 nM), CGRP8-37 (100 nM), SQ22536 (100 μM), L-NAME (10 μM) or vehicle (0.1% DMSO in HBSS) for 20 min at room temperature. HSCs or IMS32 cells were then stimulated with CGRP (1 and 10 μM, respectively), forskolin (1 μM, positive control) or their vehicles (HBSS) and maintained for 40 min at room temperature, protected from light. The signal was detected 60 min after exposure to the stimuli. cAMP levels were calculated using cAMP standards and expressed as nmol/l.

Ultra-performance liquid chromatography-mass spectrometry (LC-MS). MK-3207 loading into the core of NPs was assessed by LC-MS using a Waters Micromass Quattro Premier triple quadrupole mass spectrometer coupled to a Waters Acquity UPLC (USA). Freeze-dried DIPMA-MK-3207 (1 ml, 1 mg/ml) was dissolved in a mixture of DMSO and 0.1% formic acid (5:2). Samples were prepared for analysis by mixing an aliquot of each preparation with internal standard solution (diazepam, 5 µg/ml) in a 5:2 proportion and made up to 500 µl with the dilution solvent (50% acetonitrile:0.1% formic acid, 1:1). Samples were fractionated on a Supelco Ascentis Express RP Amide column (50 mm by 2.1 mm, 2.7 µm particle size) equipped with a Phenomenex SecurityGuard precolumn fitted with a Synergi Polar cartridge, maintained at 40 °C. MK-3207 loading was quantified against MK-3207 standards (0.016-20 µM). Compounds were eluted under gradient conditions with a mobile phase of formic acid (0.05%) and acetonitrile. Mass spectrometry was conducted in positive electrospray ionization mode and the elution of compounds was monitored with multiple-reaction monitoring.

Transmission electron microscopy (TEM). The morphology of NPs was determined by TEM imaging using a Tecnai F20 transmission electron microscope at an accelerating voltage of 120 kV at room temperature. Carbon-coated grids were prepared by plasma discharge (35 s). DIPMA-MK-3207 samples (5 µl, 1 mg/ml) were placed on the grid for 20 s. Samples were negatively stained with uranyl acetate (5 µl, 0.5 wt%, 25 s).
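As an aside on the LC-MS quantification described above, drug loading against a calibration series with an internal standard typically reduces to fitting response ratios; the sketch below illustrates the arithmetic with invented peak-area ratios and a linear calibration model, both of which are assumptions not stated in the text.

```python
# Illustrative standard-curve quantification with an internal standard (IS).
# All peak-area ratios below are invented; the linear calibration model is an
# assumption (the text only states that loading was quantified against
# MK-3207 standards spanning 0.016-20 uM).
import numpy as np

# Calibration standards: known concentrations (uM) and analyte/IS peak-area ratios
std_conc = np.array([0.016, 0.05, 0.16, 0.5, 1.6, 5.0, 20.0])
std_ratio = np.array([0.004, 0.013, 0.041, 0.13, 0.42, 1.28, 5.1])

# Fit a straight line: ratio = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_ratio, 1)

def quantify(sample_ratio: float) -> float:
    """Back-calculate a sample concentration (uM) from its analyte/IS ratio."""
    return (sample_ratio - intercept) / slope

print(f"Sample at ratio 0.80 ~ {quantify(0.80):.2f} uM MK-3207")
```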
Statistical analysis. Results are expressed as mean ± standard error of the mean (SEM). For multiple comparisons, one-way analysis of variance (ANOVA) followed by post-hoc Bonferroni's or Dunnett's test was used. Two groups were compared using Student's t-test. For behavioral experiments with repeated measures, two-way mixed-model ANOVA followed by post-hoc Bonferroni's test was used. Statistical analyses were performed on raw data using GraphPad Prism 8 (GraphPad Software Inc.). IC50 values and confidence intervals were determined from non-linear regression models using GraphPad Prism 8. P values less than 0.05 (P < 0.05) were considered significant. The statistical tests used and the sample size for each analysis are listed in the figure legends.
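The IC50 estimation mentioned above was performed in GraphPad Prism; a rough Python equivalent using a four-parameter logistic fit is sketched below, with invented concentration-response data standing in for the real measurements.

```python
# Sketch of IC50 estimation by non-linear regression (four-parameter logistic),
# approximating what GraphPad Prism does. Concentrations and responses below
# are invented placeholders, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of antagonist concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1e-9, 3.16e-9, 1e-8, 3.16e-8, 1e-7, 3.16e-7, 1e-6])  # M
resp = np.array([98.0, 95.0, 82.0, 55.0, 28.0, 12.0, 6.0])            # % of control

popt, _ = curve_fit(four_pl, conc, resp, p0=[5.0, 100.0, 3e-8, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"IC50 ~ {ic50 * 1e9:.1f} nM (Hill slope {hill:.2f})")
```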
A very rare case of HPV-53-related cervical cancer, in a 79-year-old woman with a previous history of negative Pap cytology

The introduction of organized cervical cancer (CC) screening programs has drastically reduced the prevalence of CC. However, the incidence is still too high, especially among elderly women. All guidelines strongly recommend regular Papanicolaou (Pap) testing for young and middle-aged patients. On the other hand, many international professional societies no longer advise screening in women who have undergone hysterectomy, and in women aged 65 years and above who have a previous history of regular Pap smears. Here we report a case of poorly differentiated CC, involving the pelvic lymph nodes and urinary bladder, occurring in a 79-year-old woman who regularly underwent Pap tests, with no reported cytological abnormalities. In this very rare case, the CC cells, as well as cells from metastatic lymph nodes and cells from urinary specimens, molecularly showed human papilloma virus (HPV)-53. With the limitations of a single case, this report brings important information for preventing CC in elderly patients: the utility of molecular tests to increase the sensitivity of Pap smears in postmenopausal women; the importance of HPV-53 as one of the four "emergent" genotypes having a possible role in oncogenesis; and the presence of HPV-53 in lymph node metastases from cervical carcinoma, which would support the role of this virus in the maintenance of malignant status.

Introduction
Cervical cancer (CC) is the second most common malignancy and the fourth leading cause of cancer mortality among women worldwide. 1,2 Research has established the incidence peak of CC in the fourth decade of life, with a median age at diagnosis of 48 years. Approximately 60% of CC occurs in women over 45 and 20% in women above 65 years of age. 3 Certainly, the introduction of organized Papanicolaou (Pap) smear screening programs has resulted in a decreased prevalence of CC by around 70%, but the mortality rate for this neoplasia still remains too high. 4,5 In particular, the number of elderly patients with CC is increasing in Europe. 6 Worldwide, within the older population, the crude incidence of CC is around 17 new cases for every 100,000 females. In the younger population, the corresponding rate ranges from 6 to 7 new cases for every 100,000. 6 Among women over age 65 who were diagnosed with invasive cancer, about 25% have never been screened by Pap testing, 50% had no Pap smear in the previous 3 years, and 25% had Pap screening in the preceding 3 years. 7 All guidelines strongly recommend regular Pap smears for young and middle-aged women, but no unanimity exists for elderly women. Many international professional societies (such as the American Cancer Society) no longer advise screening for patients who have undergone hysterectomy, or for women above 65 years of age with normal exams and a proper screening history. 8,9 In this regard, a "proper" screening history is defined as having a human papilloma virus (HPV) deoxyribonucleic acid (DNA) test and Pap smear (cotesting) every five years, or cytology alone every three years. 9 The lack of unanimity about CC screening in the elderly reflects the uncertainty regarding the cost-effectiveness ratio of Pap cytology within the postmenopausal (PMP) population. 7
The efficacy of cytological screening is known to be lower in higher age groups when compared with women aged 30-35 years, and screening is only effective in 20% of women aged 50 years or older. 9 A nationwide audit of organized cytological screening in Sweden showed that 25% of CC involved women with a previous history of normal Pap smears. 10 In Sweden, during 2006, over 60% of cervical squamous carcinomas occurred in PMP women. 10 When the lower efficacy of Pap cytology in the PMP population was first noted, no molecular biomarkers were available to improve screening efficacy. 7 The involvement of oncogenic HPV (high-risk HPV) in the development of CC is unequivocal. High-risk HPV infection, with its ability to transform and immortalize infected cells, is a prerequisite of oncogenesis, although cofactors are needed for malignant transformation. 11 Awareness of this led to the development of molecular tests with higher sensitivity compared to cytology. The introduction of an HPV DNA test within CC screening of PMP women could reduce the incidence of this neoplasia by about 25% or more. 10,12,13 Here we report the case of a 79-year-old woman with HPV-53-related CC and a previous history of regular Pap smear screening showing no cytological abnormalities.

The case
In July 2013, a 79-year-old Caucasian PMP woman presented to the Emergency Department with vaginal bleeding and malodorous vaginal discharge. Laboratory analysis showed: white blood cell count of 7.6 × 10^9/L, with a slightly high percentage of neutrophils (69.9%); red blood cell count of 4.43 × 10^12/L; platelet count of 232 × 10^9/L; hemoglobin value of 125 g/L; hematocrit of 39.1%; and a serum albumin value of 57 g/L. Ferritin and transferrin were both within normal limits. Hepatitis B surface antigen (HBsAg), hepatitis C antibody (HCVAb), and human immunodeficiency virus antibody (HIVAb) were all negative. All tumor markers were within normal ranges. The patient experienced menarche at age 15, was pregnant three times, and had three children. She reported regular menstrual cycles until age 54, and no previous history of PMP bleeding. The patient also had a series of normal Pap smears, the last performed the year before. Her medical history showed hypertension, for which she was taking β-adrenergic blocking agents, and breast cancer, for which she took tamoxifen. No nicotine or alcohol consumption was reported. The patient's father had died of gastric cancer; her sister had undergone total hysterectomy because of ovarian cancer. On gynecological examination, a large mucosal lesion involving the cervix and the lower third of the vagina was identified. The left parametrial tissue, up to the pelvic side wall, was firm and indurated. Chest X-ray documented no pathological findings. Magnetic resonance (MR) imaging, before and after intravenous injection of gadopentetate dimeglumine (T1- and T2-weighted sequences), revealed a large, hyperintense mass of 4.5 cm maximum diameter, located in the left side of the cervix. After intravenous injection of gadopentetate dimeglumine, the cervical mass, left parametrium, left vaginal fornix, and the bladder mucosa demonstrated rapid enhancement. Axial T2-weighted MR images showed hyperintense obturator and internal iliac lymph nodes. Based on MR evidence, the disease was staged as MR International Federation of Gynecology and Obstetrics (FIGO) stage IV CC, 14 with bladder involvement. Cystoscopy confirmed infiltration of the bladder mucosa.
The patient underwent a radical Wertheim hysterectomy with bilateral pelvic lymph node dissection. Histopathological analysis of the uterine cervix showed a poorly differentiated, keratinizing squamous cell carcinoma (Figure 1A), with parametrial and pelvic lymph node involvement (Figure 1B). Microscopic examination of voided urinary samples showed atypical keratinizing squamous cells (Figure 2).

Materials and methods
This study was approved by the Ethical Committee of G d'Annunzio University, in accordance with the principles outlined in the Declaration of Helsinki. Written informed consent was obtained from the patient. Archived cervical cytological samples had been stored at room temperature in accordance with the protocol of the Regional Cervical Cytology Biobank, which is located in the Laboratory of Advanced Diagnostic Techniques in Cellular Pathology of the Abruzzo Region. A total of 4 mL aliquots from each of the cervical cytological samples and from the voided urinary samples were removed to perform a Hybrid Capture 2 test (HC2 HPV DNA Test; Qiagen, Venlo, the Netherlands), in accordance with the manufacturer's protocol. The HC2 test simultaneously detects 13 oncogenic HPV types (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, and 68). The HC2 reactions, as a chemiluminescent signal, were read by an offline luminometer system (Digene DML 2000 Microplate Luminometer; Qiagen). The luminometer provided a viral load value for each individual sample by comparing the value obtained from each specimen to the mean of a series of positive controls containing 1 pg/mL of HPV DNA. The amount of 1 pg/mL of HPV DNA corresponds to ~100,000 HPV-16 genomes/mL, or 5,000 HPV copies per reaction. A cutoff of 1 RLU was used to classify a specimen as positive or negative. All the cervical cytological specimens were high-risk HPV-positive. Formalin-fixed, paraffin-embedded (FFPE) specimens were also evaluated for the presence of HPV. For our purpose, we selected one FFPE tissue sample from the CC, one from contiguous tissue showing cervical intraepithelial neoplasia grade 2 (CIN2) or worse, one from non-neoplastic cervix, two from metastatic lymph nodes, and two from non-metastatic lymph nodes. Three sections of 10 μm were then cut from each tissue block and processed for DNA extraction. Before sectioning, two outer sections were discarded. In order to prevent possible sample-to-sample contamination, both the microtome blade and the working surface were cleaned of tissue and/or paraffin and decontaminated using DNA Away solution (DNA Away Surface Decontaminant; Thermo Fisher Scientific Inc, Waltham, MA, USA) after each use. Tissue sections were incubated with 20 μL of proteinase K overnight on a rocking platform at 56 °C and 400 rpm. A 5 mL volume from each deparaffinized specimen and 5 mL of each liquid-based cytology (LBC) sample (cervical and urinary) were then transferred into a fresh 10 mL tube. Nucleic acids were extracted using silica extraction technology (NucliSENS easyMAG automated technology; bioMérieux, Craponne, France). Finally, nucleic acids were eluted from the solid phase in 55 μL of elution buffer. HPV genotyping was performed using a semiquantitative, highly multiplexed real-time polymerase chain reaction (PCR) kit (Anyplex II HPV28 Detection, Seegene Inc., Seoul, Korea), according to the manufacturer's protocol. The Anyplex II HPV28 detects 19 high-risk or probable high-risk and 9 low-risk HPV genotypes. Anyplex II HPV28 has been validated for use on nucleic acids previously extracted from frozen and FFPE tissue, as well as from LBC specimens, from all anatomical sites.
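Returning briefly to the HC2 readout described above, the positive/negative call is a simple ratio of the specimen signal to the mean of the 1 pg/mL positive controls; the sketch below illustrates that arithmetic with invented relative light unit (RLU) values.

```python
# Sketch of the HC2 relative light unit (RLU) classification described above.
# A specimen is scored positive when its signal is at least equal to the mean
# of the 1 pg/mL positive-control calibrators (cutoff ratio of 1 RLU).
# All numeric values below are invented for illustration.
from statistics import mean

positive_controls = [412.0, 398.0, 405.0]   # raw luminometer readings of calibrators
specimens = {"cervical LBC": 2950.0, "urine": 860.0, "low-signal sample": 120.0}

cutoff_ratio = 1.0
calibrator_mean = mean(positive_controls)

for name, raw_signal in specimens.items():
    rlu_ratio = raw_signal / calibrator_mean      # viral load relative to 1 pg/mL HPV DNA
    call = "positive" if rlu_ratio >= cutoff_ratio else "negative"
    print(f"{name}: RLU ratio = {rlu_ratio:.2f} -> {call}")
```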
The Anyplex II HPV28 technology is highly effective on nucleic acids extracted by easyMAG technology. 15

Discussion
In a rapidly growing elderly population, little information is available regarding the occurrence of lower genital tract disease, since most studies on CC have focused on premenopausal women. However, it is certain that PMP women remain susceptible to oncogenic HPV infection. 16 In this context, the underutilization of screening methods and the low sensitivity of the Pap smear test seem to be the main reasons for the high prevalence of CC in women aged 50 years or older. Recently, a wide range of international studies demonstrated the high sensitivity of HPV DNA testing in the early detection of CC and its precursors. 17 In 1999, the US Food and Drug Administration (FDA) approved the use of HPV DNA testing (HC2 technology) within CC screening programs. It was validated in primary screening of women aged 30 years and older, and as a reflex test to triage equivocal cytology for women of all ages. 18-21 CC screening in Italy comprises women 25-64 years of age. The screening interval is 3 years. Until 2009, cytology was offered as the primary screening test, with HPV DNA tests being used to triage women with cytological abnormalities. 22,23 Starting from 2010, Italy changed its strategy, and the HC2 test was introduced as a primary screening test in women aged 30-64 years. 24 As shown by Ronco et al, this new screening strategy will prospectively reduce CC morbidity and mortality. 22 In addition, women aged 65 and older with negative HPV DNA test results for at least 10 years and with no history of CIN2 will no longer be screened. Although the present report suffers from the inherent problem of a single case, we believe that our results can bring important information and practical approaches for CC prevention in elderly patients:
1. When HPV acquisition is followed by viral persistence, there is a high probability of progression of precancerous lesions to invasive cancer. In our patient, we retrospectively confirmed the high efficacy of the molecular test in detecting oncogenic infection, despite Pap-negative results. The use of this test could have successfully prevented invasive CC in this elderly woman.
2. In the lower genital tract, simultaneous infection with multiple HPV genotypes is often observed. Recent literature has shown that HPV type distribution and oncogenicity can be strongly associated with age-related changes within the cervical epithelium. 17 HPV-16 and -18 represent about 70% of all oncogenic HPV types worldwide. Low-prevalence HPV types, such as HPV-52, -53, -81, and -83, are more likely to occur in conjunction with high-prevalence HPV types, such as HPV-16 and -18. 25 HPV-53, currently defined as a "probable high-risk type", 26 is now recognized as one of the four "emergent" genotypes, with a possible role in oncogenesis. HPV-53 infection has been reported in 1.2%-16.2% of women with high-grade cytology, but never in patients with CC. 27 It has been hypothesized that the true prevalence of HPV-53 is probably underestimated, since this genotype has not been included in tests that are frequently in use. A recent report suggests that this genotype be added to HC2 probes. 28 In the present case, we molecularly analyzed cervical tissue showing invasive CC, as well as tissue from adjacent areas showing a CIN2+ lesion.
The CIN2+ tissue revealed ten different HPV types, while the invasive cancer, lymph node metastases, and atypical urinary cells demonstrated only one genotype: HPV-53. In our patient, HPV-16 probably had a cooperative interaction with HPV-53 in initiating neoplastic transformation. It is likely that HPV-53 maintained the malignant phenotype induced by HPV-16 and subsequently induced the switch of high-grade intraepithelial lesions into invasive cancer. 29
3. The detection of HPV DNA in the metastatic lymph nodes of patients with CC was first reported in 1986, by Lancaster et al. 30 More recently, other authors have demonstrated that distant metastases from HPV-related tumors can also contain the virus. 31 On the other hand, a variable proportion of non-metastatic lymph nodes harboring oncogenic HPV has also been described. Several hypotheses could explain these discrepancies. Firstly, the sensitivity of the technologies used to detect metastatic cells (in situ hybridization versus immunohistochemistry). Secondly, the presence of HPV DNA in lymph nodal tissue specimens could be the result of damaged non-neoplastic cervical cells, conveyed by lymphatic flow or by phagocytic cells during viral infection, and progressing to lymph nodes even after clearance of the lesion. 32 In the present case, using highly sensitive molecular techniques, such as real-time PCR, we did not find HPV within the CC-negative lymph nodes; vice versa, we demonstrated the virus in metastatic nodes, which also showed the same HPV type detected within the primary CC (HPV-53). In our opinion, the detection of HPV-53 in metastatic cells far from the primary tumor would support the role of this virus in the maintenance of malignant status. These findings also dispel the suspicion that the CC might have been induced by tamoxifen. The literature shows that the carcinogenic effect of tamoxifen seems to be limited to the endocervical glandular epithelium. In the cervical smears of women treated with tamoxifen, only a higher incidence of benign reactive atypia or atypical squamous cells of undetermined significance has been found, without an increased risk of dysplasia or CC. 33,34 In conclusion, with the limitation of a single case, our report draws the attention of clinicians to the limitations of cervical cytology in PMP women. We strongly believe that the inclusion of HPV DNA molecular testing, possibly with genotyping, in CC prevention strategies would both increase the sensitivity of cancer detection and reduce overtreatment of clinically irrelevant lesions. 9,33 We certainly know that other factors, such as economic costs, will also affect the decision to screen PMP women. However, it would also be important to develop age-specific guidelines for CC prevention, emphasizing the importance of testing the older population with molecular techniques before discontinuing screening.

Disclosure
The authors report no conflicts of interest in this work.
Environmental tastes, opinions and behaviors: social sciences in the service of cultural ecosystem service assessment

Cultural ecosystem services are the nonmaterial ways in which humans derive benefits from ecosystems. They are distinct from other types of ecosystem services in that they are not only intangible, but they require an entirely different set of research tools to identify, characterize, and value them. We offer a novel way to assess how individuals perceive and use their local ecosystem, thereby advancing the state-of-the-art of cultural ecosystem service assessment. We identify distinct environmental "tastes" that represent general dispositions, preferences, or orientations regarding particular characteristics of the environment. We then use these environmental tastes to explain environmental behaviors (e.g., engagement in outdoor activities and resource conservation efforts) and opinions (e.g., perceived economic dependence on various environmental resources and opinions regarding environmentally focused development issues). We identify three distinct environmental tastes: "Landscape" is associated with the visual and sensory landscape; "Biota" is associated with living elements of the environment; and "Desert" is associated with the extreme climatic characteristics of the environment. We report that the "Biota" environmental taste has wide-ranging impact on subsequent measures of pro-environmental behaviors and opinions. We maintain that this taste dimension is important for the ability of researchers, land use managers, and policy-makers to understand and evaluate cultural ecosystem services and to characterize how humans perceive them and benefit from them.

Bringing the social sciences into ecosystem service assessment
The concept of ecosystem services (ES) continues to proliferate into the research and policy-making communities as an organizing conceptual framework with which to characterize and emphasize the dependence of humans on natural ecosystems (e.g., de Groot et al. 2010, Costanza et al. 2014). Since its popularization by the 2005 Millennium Ecosystem Assessment (MA) (Reid et al. 2005), the terminology has been widely adopted by environmental and resource management communities. As the framework moved from the conceptual to the applied stage, it underwent refinement to enable empirical assessment (identification, characterization, and valuation) of ES. Terminology was refined as well, but precise and consistent definitions remain elusive. The definition of ES we adopted, provided by the UK National Ecosystem Assessment, is "the outputs of ecosystems from which people derive benefits" (UK-NEA 2011:12). Ecosystem services, by definition, thus provide measurable benefits for humans, valued in terms of human health, economic well-being, and/or socio-cultural meaning. This research focuses on the third type of benefit: those outputs of ecosystems from which humans derive socio-cultural meaning. We introduce a novel approach for assessing how individuals perceive and use their local ecosystem, an approach that allows us to identify different groups according to their environmental "tastes." In order to demonstrate the relevance and implications of this new measure, we link these taste clusters to environmental opinions and behaviors. We suggest that this knowledge adds nuance to the important, but often poorly understood, concept of cultural ecosystem services (Daniel et al. 2012).
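The analytic workflow implied here (derive a small number of taste dimensions from survey responses, then use them to explain behaviors and opinions) can be illustrated with a generic sketch. The specific techniques below (factor analysis of Likert-type items followed by ordinary least squares) are illustrative assumptions, since this excerpt does not specify the authors' actual procedure, and the data are simulated.

```python
# Purely illustrative sketch of one way "taste" dimensions could be extracted
# from survey items and then used to explain a behavior score. The choice of
# factor analysis + OLS and the simulated data are assumptions, not the
# authors' documented method.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Simulated survey: 200 respondents x 9 Likert-type items about the environment
items = rng.integers(1, 6, size=(200, 9)).astype(float)

# Extract three latent dimensions (e.g., "Landscape", "Biota", "Desert")
fa = FactorAnalysis(n_components=3, random_state=0)
taste_scores = fa.fit_transform(items)          # respondent-level factor scores

# Simulated outcome: a pro-environmental behavior index
behavior = rng.normal(size=200)

# Regress the behavior index on the three taste dimensions
model = LinearRegression().fit(taste_scores, behavior)
print("Coefficients (Landscape, Biota, Desert):", model.coef_.round(3))
print("R^2:", round(model.score(taste_scores, behavior), 3))
```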
The ES concept, though originating within the ecological sciences (Ehrlich and Mooney 1983), is a distinctly anthropocentric concept-putting humans at the center of the ecological universe and measuring ecological health in terms of the system's ability to provide crucial benefits for human existence and well-being in the short and long term.Each of the four ES typologies defined by the MA (Reid et al. 2005) is anthropocentric: provisioning services that provide us food, shelter, water, and commercial goods; regulating services, which assure relatively stable biogeological cycles and climate in which humans have evolved and survive; cultural services, or those ecosystem outputs that provide humans with intangible benefits, including aesthetics, recreational opportunities, spiritual growth, community development, and education; and supporting services, which are the ecological processes that assure provision of all other services, thereby benefiting humans indirectly. Assessing each type of service demands a particular disciplinary expertise or set of expertise.In the past, ecology and satellite disciplines contributed the most to ES assessment.The assessment of regulating and supporting services demands the particular expertise of natural scientists, and they are often the only people who are aware of the myriad ways in which human biological well-being is dependent on these services.At the same time, the natural sciences are less equipped than social science disciplines with theories and methodologies for assessing the socio-cultural value of ecosystems (Fagerholm et al. 2012).Economists have provided monetary valuation of ES, and they approached the topic equipped with experience and a diversity of valuation tools drawn from environmental economics (de Groot et al. 2002, Fisher et al. 2009).While acknowledging the http://www.ecologyandsociety.org/vol20/iss3/art28/importance of socio-cultural benefits derived from ecosystems, it has also been acknowledged that economic tools have proved insufficient for assessing the value of such benefits (Balmford et al. 2011, Church et al. 2011, Daniel et al. 2012). Thus, the valuation of cultural ES has proven to be a particular challenge because they elude monetization.The UK-NEA noted that "the MA's approach to cultural services struggled to find a consistent theoretical and methodological framework to match that underpinning other areas of the assessment" (Church et al. 2011:639).Since the predominant methods for valuating ES have been limited primarily to monetary measures, meanings and perceptions have been rendered a marginal aspect of ES assessment, and this has led to significant criticism of the entire ES assessment process (Kosoy and Corbera 2010, Spangenberg and Settele 2010, Dempsey andRobertson 2012, Luck et al. 2012).This criticism paralleled explicit calls to increase integration of social scientists, with their particular disciplinary skills, into interand transdisciplinary ES assessment (Duraiappah and Rogers 2011, Chan et al. 2012, Daniel et al. 2012, Raymond et al. 2013, Spangenberg et al. 2014).In their review of publications on cultural ecosystem services, Milcu et al. (2013) found few noneconomist social scientists engaged in the existing research on cultural ecosystem services. In response to this lacuna, the budding literature focuses on how individuals use, perceive, and benefit from cultural ES (Bryan et al. 2010, Chan et al. 2012, Spangenberg et al. 
2014).One of the first contributions of this literature was the understanding that cultural ES are not uniformly perceived by all people, but rather, the perceived benefits vary with changing circumstances, cultural and social shifts, policy regimes, population groups, and other social characteristics (Spangenberg et al. 2014).Accordingly, researchers understood that the focus on socio-cultural meaning of ES demanded the research community to define the cultural ES through the lens of the beneficiaries themselves (Jax 2010, Menzel and Teng 2010, Chan et al. 2012, Spangenberg et al. 2014). How the social sciences have been and could be further integrated into ecosystem service assessment The science of ES assessment is young (approximately 10 years old), and assessment of cultural ES even younger, but social science-based assessment can draw lessons from a broad range of disciplines, including sociology, anthropology, environmental psychology, environmental history, and landscape architecture.Researchers in these fields have been investigating humanenvironment interactions long before the development and proliferation of the ES conceptual framework, and continue to do so today outside the ES framework (Milcu et al. 2013, Russell et al. 2013, Singh et al. 2013).These disciplines are equipped with the methodological tools and theoretical foundations to assess the nature of human-environment interactions and the role of cultural ES. Recent social research on ES has produced two central contributions to the field: (1) theoretical work suggesting conceptual and structural changes in the ES framework that would better assess ES (particularly cultural ES) from multiple perspectives (e.g., Chan et al. 2012, Luck et al. 2012, Raymond et al. 2013), and (2) research applying social science methodologies toward defining what ES are important to stakeholders and how that knowledge has been, or might be, integrated into the policymaking process (e.g., Gee and Burkhard 2010, Martín-López et al. 2014, Spangenberg et al. 2014).In contrast to quantitative ecological or economic values, social value of cultural ES is generally inferred from qualitative research that focuses on the interactions between human society and the natural environment or from quantitative survey data. A first step in identifying ES is by querying the recipients of those services and developing a synthesis of expert and local knowledge (Maynard et al. 2010, Raymond et al. 2010).Expert knowledge relies on those with the scientific wherewithal to be able to identify the myriad ways human well-being is dependent on ecosystem processes.Local knowledge complements this knowledge with the identification of what about ecosystems is perceived to matter most to people.Local knowledge is the information source for assessing cultural services in the broad sense, and for understanding socio-cultural meaning of cultural ES to specific individuals and groups in particular. Methodologies drawn from various social sciences and humanities have been applied to ES assessment, including public opinion surveys (Gee and Burkhard 2010, Sodhi et al. 2010, Martín-López et al. 2014), in-depth interviews (Sagie et al. 2013, Spangenberg et al. 2014), group deliberation (Palacios-Agundez et al. 2014), participatory GIS mapping (Brown et al. 2011, Fagerholm et al. 2012), or some combination of these (Cowling et al. 2008, Maynard et al. 2010, Raymond et al. 
2013).Some of these methods were designed to measure perceptions regarding ES in various landscapes, and others were designed to complete broader ES assessments complementing expert knowledge. In summary, two of the major goals and contributions of social science-based assessments in general have been (1) to provide a mechanism(s) for nonmonetary valuation of ES, and (2) to highlight the diverse ways that humans attribute benefits from ecosystems within a broader understanding of complex socialecological systems. Environmental "tastes" and the development of a framework for social valuation of cultural ecosystem services We integrate social theory and social research methods into ES assessment in order to propose a novel way to assess how individuals perceive and use their local ecosystem.We measure individual perceptions of environmental characteristics as a proxy for ES in order to define distinct environmental "tastes."These tastes represent general dispositions, preferences, or orientations regarding cultural aspects of the environment.In the next step, and in order to introduce the broader potential relevancy of measuring ES as environmental tastes, we use these environmental tastes to explain environmental behaviors (e.g., engagement in outdoor activities and resource conservation efforts) and opinions (e.g., perceived economic dependence from the environment and opinions regarding environmentally focused development issues).Thus, we are able to articulate the link between cultural ES measured as particular sets of environmental tastes that are seen as inclinations or dispositions toward the environment, and an array of environmental practices and opinions. Our contribution is twofold.First, we identify specific sociocultural dimensions of attachment to the environment or socio-http://www.ecologyandsociety.org/vol20/iss3/art28/cultural meaning of ES, which we term environmental tastes, and show how they have different consequences for environmental behavior and opinion.Second, and on a more general level, we reveal how applying social science-based research to the study of ES assessment can guide us toward a more integrated and complex understanding of human-environment relations.Thus, we provide information for the identification, characterization, and valuation of cultural ES and possible building blocks for future social assessments of ES. To develop a new formulation for valuation of cultural ES, we draw from three bodies of sociological knowledge: environmental sociology, environmental psychology, and sociology of consumption.As noted, environmental tastes represent dispositions or inclinations that capture the cultural aspect of ES and the interactions between people and place.For example, landscape research in the social sciences focuses on the way individuals use, perceive, transform, debate, and define the landscape, and as a site where memories and identifications are formed (Tengberg et al. 2012).The environment does not only bear physical aspects such as landforms or land surface, but also psychological, historical, and social connotations.In that sense, the environment is the result of interactions between humans and nature, and we aim to capture this interaction through measuring environmental tastes.We hypothesize that if environmental tastes indeed capture a significant aspect of the human-nature interaction, this construct would be significantly associated with how individuals think about the environment and how they act upon it. 
To develop the measure of environmental tastes, we turn to the sociology of consumption literature and build on Bourdieu's (1984) theory of taste and its application within environmental research (Bourdieu and Wacquant 1992, Horton 2003). Bourdieu defined tastes as acquired dispositions that individuals use to evaluate and differentiate things in the social world. In the context of environmental research, these tastes reflect dispositions toward such things as nature, sustainability, preservation, landscape, and daily consumption practices, and form a set of dispositions that generate perceptions and practices (Crossley 2003, Haluza-DeLay 2008, Sela-Sheffy 2011). These practices, or in our terms, environmental behaviors, are embedded in individuals' lifestyles and are therefore conditioned by particular social contexts. For example, Carfagna et al. (2014) report a class of ethical consumers characterized by high cultural capital who exhibit an eco-habitus that encourages environmental awareness and sustainability principles. To test whether our proposed measure of cultural ES has any consequences, we draw from the sociological and psychological literature on pro-environmental behavior, which focuses on both socio-demographic variables and social-psychological constructs as correlates of behavior (Dietz et al. 1998). A number of studies showed consistent effects of education and age on environmental attitudes and behaviors (Jones and Dunlap 1992) but overall weak explanatory power attributed to socio-demographics (Diamantopoulos et al. 2003). Socio-psychological factors associated with environmental behavior, such as values and beliefs, have been more successful than socio-demographic dimensions in predicting pro-environmental behaviors (Boldero 1995, Guagnano et al. 1995, de Groot and Steg 2008). These works are based on the premise that individuals' behavior toward the environment should have something to do with what they feel and think with respect to the environment and with respect to environmental action. For example, the value-belief-norm theory (Stern 2000) has shown how environmental behaviors stem from the acceptance of particular personal values and from beliefs that things important to those values are under threat. Following this literature, we propose that environmental tastes would matter for various environmental opinions and activities, and we explore several different measures of such opinions and activities to make our illustration more rigorous. These include two measures of environmental behavior (outdoor activities and private sphere behavior), opinion on perceived economic dependence on the ecosystem, opinion on development issues, and environmental concern. We ask two research questions. First, we ask whether there are distinct dimensions of environmental tastes that represent affinities for specific characteristics of the environment. We maintain that these taste clusters, if they exist, can be theorized as representations of the cultural benefits people derive from ecosystems. Second, we ask whether these dimensions provide potential explanatory power regarding types of environmental behaviors and opinions. In other words, we ask whether environmental tastes, interpreted as measures of cultural ES, have real consequences for environmental behaviors and opinions. Further, we explore the relative contribution of environmental tastes and demographic variables to explaining differences in behaviors and opinions.
[1] Research site Our research area is the southern Arava Valley in Israel (Fig. 1).The Arava Valley is a hyperarid desert with an annual average rainfall of less than 30 mm.The valley is bounded by the Negev Mountains to the west, the Dead Sea to the north, the Gulf of Aqaba/Eilat to the south, and the Jordanian Araba Valley to the east.Our research focused on communities in the Hevel Eilot Regional Council that were located on the valley floor in the southern half of the valley, which included six kibbutzim [2] and one exurban community.We also included the southern coastal city of Eilat.The Arava Valley research site has proven particularly relevant for studying human-environment interactions, as its population has been shown to be particularly aware of its unique geographical, climatological, and ecological setting (Sagie et al. 2013, Orenstein andGroner 2014). The population of the Israeli southern Arava Valley (Eilot Regional Council) in 2008 was 3000, and the population of Eilat was 47,300 (Central Bureau of Statistics of Israel 2010). [3]Both Eilat and the Eilot Regional Council are in the middle range of Israel's socioeconomic rankings (with a ranking of 5 on a scale of 1 to 10) (Central Bureau of Statistics of Israel 2010).Economic income in Hevel Eilot is based primarily on agriculture, services, tourism, and light industry.In Eilat, income is through tourism, trade, real estate and other businesses.http://www.ecologyandsociety.org/vol20/iss3/art28/ Survey We distributed questionnaires in Israel's southern Arava Desert, including in seven rural communities and the coastal city of Eilat.The method of survey distribution varied in each community according to local constraints and concerns.In each rural community, the research team made contact with a local resident and/or with the administrators of the community to inquire about the best way to distribute questionnaires in that community.In some sites, questionnaires were distributed door-to-door, and completed surveys were collected several hours later.In others, questionnaires were distributed outside the communal dining hall during meal times, where they were filled in and returned.In Eilat, researchers chose a diversity of public areas, including outdoor and indoor shopping malls, restaurants, a university campus, a retirement home, and tourist sites to distribute and collect questionnaires.We received 257 completed surveys, of which 78 were from the city of Eilat and 179 were from the rural communities of the southern Arava.We purposely over-sampled the rural communities because each has its own unique character (thus we sampled in each one and did not treat them as a single unit) and because they have small populations and we desired to reduce variance in our samples. The design of the questionnaires was crafted to reveal whether local residents were aware of the services they receive from their ecosystem.Prior to designing the survey, we conducted a series of interviews with 10 community leaders (including political and business leaders, educators, activists, and scientists) to obtain information regarding relevant environmental issues, perceptions, and economic activities in the region.The results from these interviews and others are reported in Sagie et al. 
(2013).We learned from these interviews that the term "ecosystem services" was neither recognized nor intuitively understood by most respondents (a result also noted by the authors of UK-NEA [2011]).Thus, in the questionnaire, we did not use the term ecosystem services explicitly, but rather crafted questions whose answers could provide proxy measures for awareness regarding ecosystem services.As such, batteries of questions dealt with respondents' appreciation of various ecological and geological characteristics of their local environment (cultural ES), their recreational activities in their environment (cultural ES), and their perceived economic dependence on these characteristics (provisioning, cultural, or regulating ES).To measure behaviors and opinions, we used sets of questions that frequently feature in research on these issues (e.g., Guagnano et al. 1995, Stern 2000, de Groot and Steg 2008). Environmental characteristics We used a series of questions about environmental characteristics that serve as proxy measures for cultural ES.These questions assist in determining which physical and biological components of the ecosystem are valued by respondents.Such characteristics, when highly valued by the respondent and directly linked to biodiversity or geodiversity, are considered to be cultural ES (UK-NEA 2011).Respondents were asked to rank a list of environmental characteristics of their environment with regard to how much they appreciate them on a scale from 1 (strongly dislike) to 5 (strongly like).The characteristics included heat, aridity, openness, brightness/glare, sand dunes, quiet, dust/sand storms, mountains, landscape, animals, insects, shrubs, acacia trees, corals, and distance (from the rest of the country).We interpret such characteristics as indicating certain inclinations or dispositions that pertain to aesthetic, spiritual, emotional, climatic, landscape, and other qualities, considered together as "environmental tastes," as elaborated in the Results section. Environmental behaviors To measure behaviors, we used a set of questions on frequency of engagement in outdoor recreational activities, which indicate a form of human interaction with the ecosystem (Paracchini et al. 
2014), and a set of questions on private sphere environmental behavior.To measure outdoor activities, respondents were asked to indicate the frequency of engaging in a list of activities, ranging from 1 (never) to 5 (almost every day).The activities included walking outside, walking outside in agricultural areas, hiking in the desert/mountains, riding bikes in the desert/mountains, riding on animals (horse/camel), driving motorcycles or off-road vehicles in the desert/mountains, swimming in the Gulf of Eilat, birding, snorkeling/scuba diving in the Gulf of Eilat, camping in the desert/mountains, spending time relaxing/building bonfires in the desert/mountains, and collecting animals/plants/minerals from their surroundings.These questions are another indicator of cultural ES, specifically when the outdoor activity is focused on biological or geological components of the landscape.To measure private sphere environmental behavior, we asked respondents to rank how often they engage in particular environmental behaviors, including turning off appliances and lights when not in use, recycling, walking, or riding a bike in lieu of using a motor vehicle (for environmental reasons), saving water, using energy-efficient light bulbs, and re-using bags or using cloth bags for shopping.Ranking was from 1 (never) to 4 (always).http://www.ecologyandsociety.org/vol20/iss3/art28/Opinions: perceived economic dependence We used a set of questions on perceived economic dependence from the environment as an indicator of provisioning, regulating, and/or supporting services.Respondents were asked to indicate the extent to which a list of natural resources provides economic benefits to them or their communities on a scale from 1 (never) to 4 (my economic well-being is dependent on this resource).The list of resources included water, soil, sun/heat, insects, birds, corals, animals (other than those previously mentioned), minerals (e.g., sand, copper), aridity, and open land. [4]These questions provide insight into whether the respondent perceives an economic reliance on ecosystem services, regardless of whether or not it is true in economic terms.Through these questions, we generated an indicator of how aware respondents were regarding their dependence on ecosystems and the services they provide.Treating this question as a perception of their economic dependence, we expected a high degree of awareness within the study population due to the importance of agriculture and tourism to the local economy. Opinions: development issues As an additional measure of opinions, we queried respondents about general environmental issues and specific, local, development issues.Respondents were asked to indicate the extent to which they agreed or disagreed with a series of statements regarding local and regional development issues, on a scale from 1 (strongly disagree with the statement) to 5 (strongly agree with the statement).We chose topics based on our a priori knowledge of local and regional issues, supplemented with issues that were raised in the semistructured interviews conducted prior to writing the questionnaire. 
Opinions: environmental concern We asked respondents about their level of concern regarding regional and global environmental issues.They were asked to rank their level of "worry" regarding a series of local to global-scale environmental challenges, including climate change, water quality and quantity, river pollution, toxic waste storage and disposal, species conservation, open space conservation, public access to beaches, and local level of recycling.Respondents ranked their opinions from 1 (not worried at all) to 5 (very worried). Socioeconomic and demographic characteristics This section in the questionnaire included questions about gender (male, female), place of residence (see details in Table 6), age, household status (single, married, cohabiting), number of children, years lived in the region, and level of formal education (elementary school, high school, undergraduate degree, graduate degree and higher). Environmental tastes Mean values for preference for each environmental characteristic are shown in Fig. 2. They reflect a general affinity with most of the environmental characteristics of the region.Respondents had, on average, a positive opinion regarding 11 of 15 environmental characteristics, and a negative opinion of only two of them (insects and dust/sand storms).Landscape, mountains, quiet, and open space were consistently chosen as the most appreciated characteristics of the environment.We conducted a factor analysis for opinions regarding environmental characteristics to extract environmental tastes, which we consider to indicate perceptions regarding cultural ES. Behavior: level of engagement in outdoor recreational activities Engagement in outdoor activities is considered a measure of cultural ES (Paracchini et al. 2014).Table 1 displays the distribution of frequency of engagement in these activities.Walking was by far the most prevalent outdoor activity from among the choices offered, with 87% of the respondents reporting that they walk at least once or twice a month (34% reporting that they walk almost every day).At the opposite end of the spectrum, most respondents reported that they never bike (53%), go animal riding (77%), use off-road vehicles (70%), go snorkeling (33%), go birding (62%), or go collecting (67%).Opinion: perceived level of economic dependency received from environmental resources Table 3 displays the distribution of the perceived level of economic dependency on natural resources/environmental characteristics questions.Two clear trends emerge.On the one hand, a large number of respondents noted total economic dependency on water, soil/land, and sun/heat.On the other, for every other resource or environmental characteristic, including insects, birds, other animals, minerals, aridity, and open space, the highest proportion of respondents noted that they are not at all dependent on them.What is also notable about this latter group of resources/ environmental characteristics is that between one-fifth and onequarter of the respondents did not know if their economic wellbeing depends on the resources or not. 
Opinion: opinions regarding development issues of contemporary concern in the Arava Valley The list of issues is presented in Table 4, along with results. We offer several preliminary observations regarding the results to these opinion questions, which are analyzed further in this section and in the Discussion. First, the overall tendencies of the respondents were toward environmental protection, with high percentages of respondents strongly agreeing with general statements regarding the importance of protecting habitats and biodiversity. Further, respondents largely rejected the statement that suggested that economic development should take place at the expense of environmental protection. Accordingly, respondents expressed support for balancing economic and environmental needs and reflected a belief that these can occur together. On specific development issues, opinion was most divided with regard to the construction of a new international airport, with one-fifth expressing strong opposition and one-fifth expressing strong support. Half the sample supported the statement that there were not enough people living in the Arava, and half supported the statement that tourism infrastructure development is important. Expanding agricultural activity could have been considered a controversial issue due to its demands on water and open space resources, on the one hand, and due to its significant contribution to the local economy, on the other, but most of the respondents either supported or strongly supported (44%) or were indifferent (30%) about expanding date orchards. Opinion: level of concern regarding regional and global environmental issues Table 5 shows results for questions that queried levels of concern regarding environmental issues. Overall, there was a high level of concern for environmental challenges across all categories. Toxic waste treatment, river pollution, and water quality and quantity ranked highest, while the level of recycling in their region and climate change ranked lowest from among the choices. Socioeconomic and demographic characteristics Descriptive statistics of these variables are presented in Table 6. The age distribution of our sample was fairly even across age categories, with a slight bias toward middle and older age categories (30-69 years) as compared to the actual population (Central Bureau of Statistics of Israel 2010). Men were slightly oversampled (58% of the total sample). Household status (single, married, cohabiting) accurately reflected the actual population distribution. The formal educational achievement of our sample, and fertility (number of children per mother), were also similar to those of the general population, with a slight bias in the sample toward higher educational attainment. Analytic approach The analysis was carried out in three stages. First, we conducted a factor analysis for opinions regarding "environmental characteristics" to extract environmental tastes, which we consider to indicate perspectives regarding cultural ES. Second, we had to decide whether to treat the measures of behaviors and opinions as separate indicators, as indices, or as weighted indices (factors). We factor analyzed all the relevant batteries to explore whether they had an underlying structure and decided to treat them as follows: 1. We used factor analysis for the battery of questions on engagement in "outdoor activities" and the battery of perceived "economic dependency on environmental resources," which revealed theoretically coherent and empirically separate dimensions. 2.
We treated the questions regarding "level of concern" and the questions on "private sphere environmental behavior" as summed scales because both of them produced only one dimension in factor analysis and in a reliability test.We used Cronbach's alpha as a measure of internal consistency between items forming a single scale.The items forming the private sphere behavior scaled at 0.703, and the items forming level of concern scaled at 0.869. 3. The items measuring opinions regarding development issues pertained to several very different issues.These items did not form a scale, nor did we expect them to represent distinct underlying dimensions; therefore, we treated them as separate questions. Finally, we conducted multivariate analyses to estimate the effect of environmental tastes (opinions regarding environmental characteristics) and socio-demographic variables (gender, residential tenure, marital status, education, urban/rural residency, age, number of children) on measures of behavior (engagement in outdoor activities, private sphere environmental behavior) and measures of opinion (perceived economic dependency, level of concern, development attitudes). Factor analysis We applied factor analysis on the battery of questions measuring level of appreciation of "environmental characteristics," in order to identify clusters of environmental tastes.Factor analysis is a method of data reduction, which seeks underlying latent variables that are reflected in the observed variables.For all analyses, we applied principal component factor analysis with varimax rotation.Rotated factor loadings on the three factors that emerged are shown in Table 7. Each factor clustered a group of related variables that revealed particular affinities, or "tastes," for particular components of the desert environment.The first dimension, which we termed "landscape," included characteristics associated with the visual and sensory landscape, including sand dunes, corals, quiet, landscapes, open space, and brightness.The next dimension, which we labeled "biota," included all of the living elements of the environment, including shrubs, insects, wild animals, and acacias.Corals were not included in the "biota" taste dimension, and we speculate that many individuals may relate to corals as characteristics of the view, and not as living creatures.The third dimension, which we termed "desert," featured those climatic characteristics that define the extreme environment-heat, aridity, dust, and brightness.Notably, each of these components was ranked with the lowest degree of preference, on average (Fig. 2). 
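As an illustration of the two computational steps described above, principal component factoring with a varimax rotation and Cronbach's alpha as a reliability check for summed scales, the following is a minimal sketch in Python with numpy. The responses are simulated stand-ins for the Likert items, the item grouping used for the alpha check is hypothetical, and this is not the authors' actual analysis code; a statistical package (e.g., SPSS or R) would produce equivalent output.

import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 257, 15
# Hypothetical 1-5 Likert responses to the 15 "environmental characteristics" items.
X = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

def principal_loadings(X, n_factors):
    # Unrotated loadings from a principal-component factoring of the correlation matrix.
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(loadings, max_iter=100, tol=1e-6):
    # Orthogonal varimax rotation (Kaiser's criterion) of a loading matrix.
    p, k = loadings.shape
    rotation = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (L @ np.diag(np.sum(L**2, axis=0))) / p)
        )
        rotation = u @ vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return loadings @ rotation

def cronbach_alpha(items):
    # Cronbach's alpha for a set of items forming one summed scale.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rotated = varimax(principal_loadings(X, n_factors=3))
print("Rotated loadings (items x 3 taste factors):")
print(np.round(rotated, 2))
print("Cronbach's alpha (first 6 items as a toy scale):",
      round(float(cronbach_alpha(X[:, :6])), 3))

With real survey data, each item would load most strongly on one of the three rotated factors, which is how the "landscape," "biota," and "desert" taste dimensions reported below were identified; factor scores from such a solution can then enter the regressions as explanatory variables.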
The factor analysis for the battery of questions that queried frequency of engagement in various outdoor activities yielded http://www.ecologyandsociety.org/vol20/iss3/art28/two dimensions of activity (Table 8).The first concentrated all the activities associated with greater speed or action (off-road vehicles, swimming in the gulf, riding, biking) in addition to camping and campfires.The second dimension concentrated all of the slower activities (walking, collecting, birding, hiking).We note that these two dimensions could be compared by the pace and concentration at which a person observes the landscape and its biological components.Therefore, we labeled these dimensions "active" and "pensive," accordingly.For the battery of questions that queried perceived level of economic dependency, factor analysis distinguished between two dimensions, which we termed "physical" and "ecological" (Table 9).The first factor revealed perceived dependency on heat/sun, water, and soil/land.The second factor concentrated biotic components of the landscape, but also open space and minerals. We note that all of the elements in the second factor received low rankings with regard to perceived economic dependence. Multivariate analysis of environmental tastes, behaviors, and opinions We tested the relative effect of our environmental taste dimensions (landscape, biota, desert), controlling for the social demographic variables (Table 6), on outdoor activities (active, pensive), private sphere behavior, perceived economic dependency (physical, ecological), and environmental concern.Displayed in Table 10 are the statistically significant standardized effects, which show three main findings.First, very few effects of the sociodemographic variables were statistically significant, indicating that, in general, environmental behaviors and opinions in our sample were not conditioned by characteristics such as gender, age, marital status, etc.Second, five out of the six behaviors and opinions were significantly associated with at least one of the environmental tastes, indicating that this construct played a consistent role in shaping environmental behaviors and opinions.Third, although very few associations were significant, the explained variance in the different models was not negligible in the context of measuring behavior and opinions, ranging from 0.042 to 0.340.This means that although each model featured only one or two significant associations, these were quite strong. Looking specifically at the various measures, we see that respondents who appreciate the landscape environmental taste (e. g., mountains, sand dunes) tended to engage in "active" activities (e.g., swimming, animal riding) and to express more concern about the environment; respondents with a taste for the biota (e. 
g., shrubs, animals) tended to engage in "pensive" activities (e.g., walking, birding); those with a "desert" taste (e.g., aridity, heat) were more likely to report higher economic dependency on "ecological" components of the environment as well as higher scores on the private sphere environmental behavior.There were no consistent effects of the socio-demographic variables on the behavior and opinion measures, and the effects that were significant were in the direction reported in previous research: men are less inclined than women to adopt private sphere proenvironmental behavior; urban dwellers report lower levels of economic dependency on environmental resources than rural residents; and older respondents are less engaged in "active" outdoor activities compared to younger respondents (Orenstein and Groner 2014). Table 11 displays results of the regression of nine environmental opinion questions that had to do with development issues on environmental tastes and socio-demographic variables.Seven out of nine of the questions had statistically significant associations with the biota taste dimension, suggesting that this dimension is a reflection of stronger environmental opinions.We included the question regarding "not enough population," assuming that more environmental respondents would not support this statement.However, in the Israeli context, and in particular, in the context of the Israeli geographic periphery, support for population growth for socioeconomic reasons tends to overshadow concern for its environmental impact (Orenstein et al. 2011).Somewhat ironically, the statement "I am environmental" was positively associated with the desert and landscape dimensions, which, with one exception, were not positively associated with any other environmental opinion.Of all of the socio-demographic variables, only two-younger and rural-provided explanatory power for the environmental opinion results. Environmental tastes, ecosystem services, and landscape services In this research, we queried a sample population regarding their perceptions of various features of their natural environment and translated them into socio-cultural meaning they derive from their local ecosystem.We revealed three unique environmental tastes that reflect a split in public preferences for environmental characteristics."Landscape" is associated with the visual and sensory landscape, "Biota" is associated with living elements of the environment, and "Desert" is associated with the extreme climatic characteristics of the environment. Defining these tastes in terms of the benefits people receive from ecosystem services presents us with a conundrum that is representative of the larger challenge of defining cultural ES.In the most literal sense, ES are based on a biological dimension of the environment (Reid et al. 2005).In some recent ES research, however, both biotic and abiotic components of the ecosystem are considered to provide services (Gray 2011, UK-NEA 2011, Orenstein and Groner 2014). 
Those individuals who express a strong preference for animals and plants clearly place meaning in the biological life, and thus can be said to receive benefits from cultural ES.On the other hand, can those who express a taste for landscape-particularly in this arid landscape characterized by minimal vegetative cover -be said to receive benefits from cultural services?Is this primarily a semantic issue, or is it crucial to delineate a sharp differentiation between ecosystem services, natural resources, and landscape aesthetics?Can we adopt a more pluralistic approach to cultural ecosystem services that looks at the natural environment as a holistic entity comprised of biodiversity, geodiversity, climate, and other characteristics? We believe that demanding a rigid dichotomy between the biological and other elements of the ecosystem would ultimately diminish the research, management, and pedagogical value of the ES conceptual framework.While scientists and some other stakeholders may trace benefits directly to individual biological components of the ecosystem, other stakeholders express appreciation of the broader landscape.In fact, in four different studies we have conducted in five different regions in three different countries, respondents of surveys consistently gave highest preference ratings to landscape.We thus advocate the pluralistic view that multiple components of the ecosystem, including biological, geological, and climate components, interact and combine to provide diverse forms of culture values for different stakeholders.We adopt the suggestion of Termorshuizen and Opdam (2009), who suggest that all of these elements combine within in the landscape, and thus advocate for the unifying concept of "landscape services," where landscapes include "elements that the locals perceive, valuate, and manage," and whose benefits are not attributed solely to biodiversity (Termorshuizen and Opdam 2009).In short, what stakeholders are valuing may be better described as landscape services rather than cultural ES, and those services include the biophysical environment in its entirety (Brown et al. 2011, Fagerholm et al. 2012). Environmental tastes: a window into how people use and perceive their environment? We investigated whether environmental taste dimensions could serve as potential explanatory variables for environmental behaviors and opinions.Our results indicate that environmental tastes indeed have consistent and strong associations with environmental opinions and behaviors.Those individuals who reflect taste for biota engage in activities that are based on biological dimensions of the ecosystem (specifically, birding and collecting), and reflect more pro-environmental behaviors and opinions.Those who reflect a taste for landscape engage in activities that do not necessarily reflect an appreciation of biodiversity but do reflect an affinity for the combined biotic/ abiotic environment (i.e., landscape).While the desert taste also correlates with other variables, there is no clear pattern or obvious explanation for the relationships. The most interesting phenomenon to surface is the biota environmental taste, which seemingly has wide-ranging impact on subsequent measures of environmental behaviors and opinions.Our data suggest that if a respondent has a taste for biota, they are more inclined to have pro-environmental opinions and behaviors.An affinity for biota is positively associated with "pensive" activities-walking, hiking, collecting, and birding. 
Collecting and birding are derived explicitly from cultural ES as they are directly related to biological elements in the landscape. Biota taste was also positively correlated with a series of environmental opinions (Table 11). There are two methodological issues that challenge the research findings. First, ours was not a random sample of residents in the southern Arava Valley but rather a convenience sample. While we attempted to get as representative a sample as possible, there may be missing population sectors, particularly in the city of Eilat. Second, there are many geographical, cultural, climatic, and economic reasons why this particular population may be unique both in the Israeli context and the global context. We have conducted similar surveys in other regions of Israel and in other countries; preliminary results reflect a fairly consistent rank ordering of environmental tastes. We will use these results to better define the relationship between environmental tastes and behaviors and to test the effects of geographic setting in addition to the independent variables used here. Social valuation of ecosystem services: more than a number Social valuation of ES, as conducted here, can catalyze a discussion of ethical values, including a respect for biodiversity that transcends its economic role in human life and well-being (Rozzi et al. 2012). Stakeholders express particularly high affinity for environmental characteristics and claim strong environmental opinions. Preferences for environmental characteristics were extremely high, even as economic dependency on those same characteristics was often ranked low. Philosophers, deep ecologists, and others argue that concern for biodiversity should be intuitive and not connected to whether one can generate proof of its utility or economic value (Zimmerman 1994, Luck et al. 2012). We suggest that social valuation of ES allows for inclusion of such ethical perspectives and encourages their acceptance as a legitimate part of civil discourse around the issue of ES. Further, social valuation allows for an understanding of human-environment interactions beyond the purely utilitarian (Raymond et al. 2013). Considered along with economic, biological, and health valuation, social valuation completes the necessarily broad spectrum of perspectives regarding the value of ecosystem services to humans (Martín-López et al. 2014). Obtaining each valuation depends on a particular disciplinary skill set drawn from the natural and/or social sciences, and hence the repeated call for both interdisciplinary research (diverse forms of expert knowledge) and transdisciplinary work (integration of local/stakeholder knowledge [Haberl et al. 2006]). Each valuation approach reflects a particular perspective on the human experience, and each carries its own set of advantages and disadvantages with regard to accuracy and breadth. Together they provide a comprehensive picture of the complex relationship between humans and natural ecosystems, as is required for assessing the impact of human development on ES provision. All of these conclusions support the claim that the integration of interdisciplinary scholars with competence in the social sciences into ES assessment is crucial (Chan et al. 2012, Martín-López et al. 2014). Although there has been a consistent rise in the amount of social research focusing on ES, as cited throughout this paper and additional work (Barthel et al. 2005, Andersson et al. 2007, Ernstson et al. 2010, Andersson et al.
2014), we suggest that social theory (as used, for example, in Barthel et al. 2010 with regard to social memory) can and should play a much stronger role in the future in understanding human values, motivations, and activities vis-à-vis ecosystems and their services. Within the sociological and psychological literature, for example, there is a long and rich history of theoretical developments regarding how humans interact and value their natural environment, and what motivates them to utilize the environment or choose to actively work to protect it from degradation due to human activities, some of which are noted here.We encourage digging deeper into this literature in the service of ecosystem service assessment.By doing so, we can level the ES assessment playing field by raising the profile of socio-cultural valuation in policy-making and planning through the use of alternative measures of value that stakeholders attached to ES.Again, these measures complement traditional monetary valuation, which has been criticized on ethical and practical grounds (Kosoy and Corbera 2010, Spangenberg and Settele 2010, Rogers andSchmidt 2011, Turnhout et al. 2013).http://www.ecologyandsociety.org/vol20/iss3/art28/Our study has direct implications for the researchers and managers who are applying the concept of ES.Defining distinct dimensions of environmental tastes adds crucial nuance to our understanding of how different people value cultural ES differently.Differences in these taste dimensions may have a cascading impact on the way individuals perceive and benefit from other cultural ES (for instance, recreational activities) or how people perceive the economic importance of provisioning services.We are reminded that the general public does not fully understand the concept of ES or, more generally, human dependence on ecosystem integrity for providing ES (In Israel: Sagie et al. 2013, Orenstein andGroner 2014; elsewhere: UK-NEA 2011).Because the term ES is not a common part of everyday language, its translation into measurable indicators is not clear cut.We propose that environmental tastes could provide such indicators. As a "mission-oriented discipline" (Cowling et al. 2008), two of the goals of the ES conceptual framework are to educate the general public regarding the existential importance of biodiversity conservation to assure the long-term provision of ES, and to initiate policies that will meet this goal.Environmental tastes assist in learning about how groups of people perceive the presence and importance of ES and their contribution to their well-being.By strengthening our understanding of how people perceive and use their ecosystem via definition of environmental tastes, social analysis of ES can advance the normative goals of nature conservation policy and ecological education (Cowling et al. 2008, Menzel andTeng 2010). In conclusion, our research suggests that a foundation for proenvironmental opinions and behaviors might be established by nurturing a taste for biota.As such, environmental educationparticularly that which focuses on forming a strong identification with biota-may play a key role in promoting understanding about human dependence on ecosystem integrity and generating pro-environmental opinions and behaviors.On the other hand, as landscape is consistently the most highly valued environmental characteristic, a holistic approach to ecosystem management, that which includes biotic and abiotic components of the ecosystem, should be considered. 
__________
[1] We note that the application of social theory to organize and explain our results occurred ex post facto to the formulation and distribution of the survey. That is, the survey was constructed and distributed with the intent of exploring attitudes and behaviors of the local population vis-à-vis ecosystem services, and not for the specific intent of testing the hypotheses noted here.
[2] Kibbutzim (kibbutz in singular) are Israeli cooperatives, pioneering communities in which property and income are shared among their members. When they were founded, agriculture was the primary economic activity of the kibbutz. However, in the past few decades, the economy of many kibbutzim has begun shifting, such that industry, services, and individual professional incomes have become prominent. Kibbutzim played an important, central role in Zionist settlement prior to the establishment of Israel in 1948, and continued to be central to peripheral, border settlement following the establishment of the state. Today, each kibbutz practices a different degree of cooperative living, some sharing property and income, while others are undergoing varying degrees of privatization.
[3] The most recent data at the municipality level are available only for 2008. In 2012, the Central Bureau of Statistics population estimate for the Eilot regional council had risen by 17%, to 3500, while the population of Eilat had risen only slightly (less than 1%) to 47,700 (Central Bureau of Statistics of Israel 2014).
[4] We note that we do not, at this point, differentiate between the biological components of the ecosystem and the geological components (e.g., minerals), nor do we differentiate between ecosystem services and natural resources. We address this issue in the Discussion.
Responses to this article can be read online at: http://www.ecologyandsociety.org/issues/responses.php/7545
Fig. 2. Mean values of preferences in descending order.
Table 1. Frequency of engagement in outdoor activities (activity by frequency of activity, % of valid responses).
Behavior: private sphere environmental behavior. Table 2 shows that respondents reported a high frequency of activity in all of the categories of private sphere environmental behavior, with the exception of walking/bike riding in lieu of using motor vehicles. Not including this question, more than 80% of the respondents reported that they sometimes or always engage in environmental behaviors in every category. We note an important caveat regarding the question of walking/bike riding: because they work in close proximity to their homes, most of the residents of the rural sector included in the survey used walking (37%) or bike riding (21%) as their primary means to commute to work (Central Bureau of Statistics of Israel 2010). So, although numbers of reported walkers/riders were low, they may be low due to the question's stipulation "for environmental reasons." According to the 2008 census, 16% of residents in the city of Eilat also use walking or bike riding as their means of commuting to work.
Table 2. Private sphere environmental behaviors.
Table 3. Perceived dependency on natural resources and environmental characteristics.
Table 4. Opinions regarding selected issues pertaining to regional development, environmental protection, or use/importance of the environment.
Table 5. Degree of concern regarding selected local to global scale environmental challenges.
Table 6. Demographic characteristics of survey sample.
Table 7. Rotated factor loadings for environmental tastes.
Table 8. Rotated factor loadings for outdoor activities.
Table 9. Rotated factor loadings for perceived economic dependency on selected natural resources or environmental characteristics.
Table 10. Standardized coefficients from ordinary least squares regressions of outdoor activity factors, perceived dependency factors, concern, and private sphere behavior on environmental tastes and socio-demographics (* p < 0.05, ** p < 0.01; only statistically significant results are reported).
Table 11. Standardized coefficients from ordinary least squares regressions of opinion on development on environmental tastes and socio-demographics (* p < 0.05, ** p < 0.01; only statistically significant results are reported).
2018-06-22T18:52:06.250Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "503f8396bf6ebd40878afdf94a3e6074d71609e6", "oa_license": "CCBY", "oa_url": "https://www.ecologyandsociety.org/vol20/iss3/art28/ES-2015-7545.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "503f8396bf6ebd40878afdf94a3e6074d71609e6", "s2fieldsofstudy": [ "Environmental Science", "Sociology" ], "extfieldsofstudy": [ "Business" ] }
238263762
pes2o/s2orc
v3-fos-license
Analysis of Geomagnetic and Geoelectric Data to Identify the Potential of Gold Deposits (Case Study: Randu Kuning, Wonogiri, Central Java) The gold deposit of the low-sulfidation epithermal system at the Randu Kuning prospect, Wonogiri, Central Java is the result of magmatism during the Oligocene associated with a microdiorite intrusion. The magmatism drove a mineralization process that fills the fractures in the rock. The ore minerals formed in the study area are pyrite, chalcopyrite, sphalerite, galena, electrum and native gold, magnetite, and hematite. The geophysical methods appropriate to this case study are the geomagnetic method and the geoelectric Induced Polarization method. The geomagnetic method is applied to delineate mineralized zones and the geological structures that act as channel ways for hydrothermal fluids. The results of the geomagnetic method take the form of a Total Magnetic Intensity map processed with filters such as Reduce to Pole (RTP) - High Pass (HP) and Horizontal Gradient (HG). The west side of the RTP - HP anomaly shows a low response of -4.9 to -0.8 nT due to intensely mineralized rock and the presence of fractures. The comparison between the RTP - HP anomaly and the HG anomaly shows good agreement, with intense mineralization reducing the fault anomaly. A high HG value area of 0.001-0.0017 nT/m is interpreted as a mineralized fault. This can be seen from the alteration map, which shows the continuity of veins from measurements in the field. The geoelectric Induced Polarization method is applied to identify minerals associated with gold vertically in the subsurface. The geoelectric Induced Polarization data show chalcopyrite at a depth of 20-30 m with a chargeability of 4-9 msec, located within an igneous rock intrusion with a resistivity >200 Ωm. Based on the geomagnetic and geoelectric data, potential gold deposits can be identified in the Randu Kuning area, Wonogiri, Central Java. Introduction Indonesia is a country flanked by three major plates, namely the Eurasian plate, the Indo-Australian plate, and the Pacific plate. This condition formed a ring of fire along the subduction zone, resulting in frequent tectonic activity [1]. Tectonic activity, as a trigger for the mineralization process, changes the composition of bedrock minerals into sulfide minerals, one of which is gold. Selogiri Sub-District, Wonogiri Regency, is one of the areas with prospects for gold deposits resulting from this mineralization process. The area includes Jendi village, Kepatihan village, and Keloran village, which lie within the Randu Kuning hill area. The main geological structure controlling mineralization is part of the Southern Mountains system, a normal fault with a northwest-southeast orientation [2]. Geological structures play an essential role in the mineralization process, as mineralization develops along them [3]. Residual magmatic or hydrothermal fluid breaks through fractures in the rocks and is deposited there as physical and chemical changes form sulfide ore deposits. In fact, at the research site there are artisanal (people's) mines that operate by following the pattern of structures and veins filled with sulfide minerals to obtain gold.
The existence of potential gold deposits in the subsurface requires detailed research. In this study, therefore, secondary data analysis and literature studies were conducted using geophysical methods, namely the geomagnetic method and the geoelectric induced polarization method. The geomagnetic method is applied to delineate, in map view, the alteration zones and local structures of the research area. The structural estimate makes use of magnetic filters and geological references for the research area. To assess the potential of gold deposits vertically, the geoelectric Induced Polarization (IP) method is used, with the approach of identifying minerals associated with gold such as pyrite, chalcopyrite, and other sulfide minerals. With the integration of these methods, together with the reference study of the research area, the distribution of potential gold deposits in Randu Kuning, Selogiri, Wonogiri Regency can be determined, and the results can be used for further, more detailed research and advanced exploration activities. Method This research was conducted in Selogiri sub-district, specifically in Jendi village, Kepatihan village, and Keloran village. The research methods used are the geomagnetic method, in the form of secondary data, and the induced polarization method, in the form of reference studies. The geomagnetic method uses the magnetic properties of rocks to determine geological conditions based on magnetic values. The geomagnetic data were obtained from the official BMKG website, on the Earth Magnetic Calculator - BMKG page [4]. The main data obtained are the intensity values of the Earth's magnetic field. The data were corrected by removing the Earth's main magnetic field value of 44,000 nT and then filtered with reduce to pole (RTP) to shift the anomalies to the pole and facilitate quantitative interpretation. The RTP map was then filtered with a High Pass filter so that the local anomalies of the research area can be read. Finally, a horizontal gradient (HG) filter was used to determine the geological structure in the research area as a mineralization control. Meanwhile, the induced polarization (IP) method uses the decay of the electrical response of the ground after current injection to map subsurface conditions based on resistivity and chargeability values. The IP data were processed using Res2DInv software to produce cross-sections of subsurface resistivity and chargeability, from which mineralization can be interpreted from both parameters. The results of both methods are integrated to illustrate the potential for mineralization at the research site. Magnetic Method The geomagnetic method is a geophysical method that uses the magnetic properties of the earth. The output obtained is a map of the subsurface rock susceptibility distribution in the horizontal direction. Geomagnetic survey targets are variations in the magnetic field measured at the surface that arise due to the susceptibility contrast between rocks and their surroundings. The measurement of the earth's magnetic field is influenced by the two magnetic poles, so the field direction differs from place to place according to the angles of inclination and declination. This variation in field orientation causes the anomaly to be displaced from the location of the rock body. The reduce to pole filter transforms the two magnetic poles (dipole) into a monopole by eliminating the effects of inclination and declination. Figure 1.
Fault structures and anomaly boundaries are interpreted with a horizontal gradient filter. Horizontal gradient analysis combines the squared horizontal derivatives of the anomaly so that anomaly boundaries can be seen clearly; the steepest horizontal gradient is interpreted as an anomaly boundary marking an abrupt horizontal change in magnetization [6].

Induced Polarization Method

The resistivity method is a geophysical method used to identify the electrical properties of the subsurface. It is based on injecting current into the ground through electrodes so that the potential difference across each rock layer can be measured. In the geoelectric induced polarization method, current is injected into the earth through two current electrodes and the resulting potential difference is measured across two potential electrodes. The time-domain method measures the decay of the voltage stored in the rock after the injection current is switched off, expressed as chargeability (msec), which characterizes rock mineralization because it is affected by electrode polarization due to mineral deposits.

Geology, Alteration Data, and Geomagnetic Method

Before the geomagnetic data are analyzed for alteration zoning and geological structure, a geological reference study of the research area is needed. The map below is an alteration map covering the research sites, including Randu Kuning and its surroundings.

Figure 3. Selogiri Alteration Map [8].

Figure 3 shows that the main structure of the study area, which forms the minor faults, belongs to the Southern Mountains fault system with a northwest-southeast orientation. The local structures controlling the channel ways of hydrothermal solutions are north-south and northwest-southeast dextral faults, northeast-southwest sinistral strike-slip faults, and northwest-southeast reverse faults. The extension zone in the research area trends roughly northwest-southeast with a relatively steep dip [2]. This plane is a brecciation plane formed with roughly the same orientation as the local faults that control mineralization. The structure serves as the medium for ore deposition in the form of a vein system, with altered minerals filling rock fractures. Mineralization is the formation and deposition of ore minerals originating from metasomatism, pneumatolytic processes, and ascending hydrothermal magmatic fluids. The mineralization formed here is a vein system that can be observed megascopically in the rocks. The observed filling minerals include pyrite, chalcopyrite, and galena; besides filling the veins, they also occur disseminated in the wall rock. The vein system is found around the fault zone but is discontinuous.

Alteration Zone Distribution Analysis Based on the RTP Filter

To interpret the magnetic anomaly map, a reduce-to-pole filter is applied first. This filter assumes that the Earth's magnetic field has a constant direction and magnitude over the survey area, so that the magnetic anomaly lies directly above the anomalous body [9].
Figure 4 shows the Reduce to Pole map after High Pass filtering, so that the local anomalies are visible. The anomaly responses on the RTP-HP map are grouped into four plots based on the magnitude of the anomaly. From west to east, the alteration types are advanced argillic, argillic, phyllic, and propylitic. The magnetic anomaly weakens progressively towards the west of the map, indicating that the main alteration system lies to the west. According to [10], the alteration sequence can be classified by alteration intensity. The highly altered group comprises the advanced argillic (-4.9 to -1.9 nT) and argillic (-1.8 to 1.8 nT) alterations, composed of clay minerals (illite, smectite, and kaolin); this alteration is exposed as strongly altered rock in plot A. The intermediate (altered) group is the phyllic alteration (-0.8 to 0.4 nT), composed of clay minerals (smectite and opaque minerals). The weak (intermediate altered) group is the potassic alteration (-4.7 to 1.5 nT), characterized by biotite with clay minerals as accessories, so the rocks in plot C are still massive. Plot D, referring to the geological and alteration maps of the research location, corresponds to unaltered diorite, so its magnetic anomaly is comparatively large (2.1 to 2.53 nT).

Structural Analysis Based on the Horizontal Gradient Filter

Horizontal gradient (HG) analysis was applied to determine the boundaries of the anomalous bodies in the magnetic response. The filter is interpreted through boundary anomalies that mark abrupt horizontal changes in magnetization [6]. The comparison of the HG anomaly with the alteration map (Figure 5) was divided into three plots: A, B, and C. Plots A and B show very low HG anomalies of -0.002 to -0.003 nT/m. This broad low-response pattern, bounded by the southwest-northeast geological structure, indicates more intense alteration; the low magnetic response reflects the argillic, advanced argillic, and phyllic alterations, which belong to the highly altered group [10]. Advanced argillic alteration is found in strongly altered rock outcrops in plot B. The high HG anomaly of 0.001-0.0017 nT/m corresponds to geological structures; the anomaly is high because a near-surface body changes the magnetic value drastically. Three high-response HG anomalies, located in plot C and in the southwest of the HG anomaly map, are indicated as faults (dashed blue lines). Comparing the HG anomaly in plot C with the alteration map, this fault can be regarded as a local fault that developed from the main fault and acted as a channel way. Hydrothermal solutions from the alteration process fill fractures along the local fault and follow the fault pattern, so the developing veins follow the structural pattern with a dominant north-south direction. In addition, the distribution of argillic alteration broadly follows the structural pattern and is accompanied by veins. This indicates that mineralization developed dominantly within the argillic alteration, with veins striking roughly north-south. The vein-filling minerals are pyrite, chalcopyrite, galena, and gold, as shown in Figure 6 in the appendix.
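For readers who want to reproduce the boundary analysis, a minimal horizontal-gradient computation is sketched below; the gridded RTP-HP anomaly and its spacing are assumed inputs, and ridge picking of the HG map is left as a manual interpretation step, as in the study itself.

```python
# Illustrative horizontal-gradient (HG) magnitude used to trace anomaly
# boundaries: steep maxima of HG mark abrupt lateral changes in magnetization
# such as faults or alteration contacts. Grid spacing in metres is assumed.
import numpy as np

def horizontal_gradient(anom: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """HG = sqrt((dT/dx)^2 + (dT/dy)^2), in nT per metre."""
    dT_dy, dT_dx = np.gradient(anom, dy, dx)   # row (y) spacing first, then column (x)
    return np.hypot(dT_dx, dT_dy)

# Ridges (local maxima) of the HG map are then digitized as inferred
# fault/contact lineaments and compared with the mapped vein orientations.
```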
Geoelectric Induced Polarization Method

The geoelectric results provide two types of information: the type of subsurface rock (from the resistivity data) and the presence of subsurface sulfide minerals (from the chargeability data). Sulfide minerals are characteristic indicators of the presence of Au (gold). When correlated with the resistivity data, the occurrence of these sulfides can be tied to particular rock types. Judging from the layering pattern of the resistivity data [11] (Figure 8), resistivity increases with depth, meaning that the deeper rocks are more compact. Rocks with a resistivity <30 Ωm (blue) are interpreted as claystone, 30-80 Ωm (green-orange) as sandstone, and >80 Ωm (red) as igneous rock.

Figure 8. Resistivity cross-section [11].

One of the sulfide minerals that characterizes gold mineralization is chalcopyrite (CuFeS2). The induced polarization results in Jendi Village, Selogiri, show chalcopyrite with a chargeability of 4-9 msec at a depth of 20-30 m [12] (Figure 9). This range is consistent with the chargeability value of 9.4 msec listed for chalcopyrite in the rock chargeability table [7] (Table 1). Correlated with the resistivity data, the chalcopyrite at 20-30 m depth lies in igneous rock with a resistivity >200 Ωm; the research area belongs to the Mandalika Formation, which consists of dacite lava and andesite [13]. Volcanologically, the Mandalika Formation shows the characteristics of the growth phase of a composite volcano, with repeated deposition of effusive and explosive eruption products [14].

Geological and Geophysical Data Integration

To delineate the gold potential clearly, the geological data (alteration map) and the geophysical data were integrated (Figure 10), outlining the zone of potential gold at the research site.

Figure 10. Geological and geophysical data integration.

Towards the highly mineralized zone (low magnetic values), gold-associated minerals are rarely found, in contrast to the intermediate mineralized zone (medium magnetic values), where minerals fill and follow the fracture pattern. Based on the geoelectric Induced Polarization response, chalcopyrite occurs at a depth of 20-30 m with a chargeability of 4-9 msec within an igneous intrusion of resistivity >200 Ωm, and is identified as a gold-associated mineral.

Conclusion

The RTP anomaly response of the geomagnetic method zones the alteration from strong (highly altered) to weak (altered), grading from east to west. Three faults are read from the horizontal gradient, with values of 0.001-0.0017 nT/m; the northern fault and plot C are filled with sulfide minerals, based on the samples found. Towards the highly mineralized zone (low magnetism), gold-associated minerals are rarely found, in contrast to the intermediate mineralized zone (moderate magnetism), where the minerals fill and follow the fracture pattern.
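The qualitative cut-offs quoted above (resistivity <30, 30-80, >80/>200 Ωm; chargeability 4-9 msec) can be expressed as a simple labelling rule. The snippet below is purely illustrative and is not a substitute for interpreting the full inverted Res2DInv sections.

```python
# Rule-of-thumb lithology/mineralization labelling that mirrors the cut-offs
# quoted above. Thresholds are taken from this study and are illustrative only.
def classify_cell(resistivity_ohm_m: float, chargeability_msec: float) -> str:
    if resistivity_ohm_m < 30:
        rock = "claystone"
    elif resistivity_ohm_m <= 80:
        rock = "sandstone"
    else:
        rock = "igneous rock"
    sulfide = 4 <= chargeability_msec <= 9          # chalcopyrite-range IP response
    if sulfide and resistivity_ohm_m > 200:
        return f"{rock}: possible chalcopyrite-bearing intrusion"
    return f"{rock}: sulfide response" if sulfide else f"{rock}: low sulfide response"

print(classify_cell(250, 6))   # -> "igneous rock: possible chalcopyrite-bearing intrusion"
```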
The geoelectric Induced Polarization response indicates chalcopyrite hosted in an igneous intrusion at a depth of 20-30 m, with a chargeability of 4-9 msec and a resistivity of >200 Ωm.
The association between hyperuricemia and cardiovascular disease history: A cross-sectional study using KoGES HEXA data This cross-sectional study examines the association between hyperuricemia and cardiovascular diseases (CVDs). Data from the Korean Genome and Epidemiology Study from 2004 to 2016 were analyzed. Among the 173,209 participants, we selected 11,453 patients with hyperuricemia and 152,255 controls (non-hyperuricemia). We obtained the history of CVDs (stroke and ischemic heart disease [IHD]) from all participants. Crude and adjusted odds ratios (aORs) (age, income group, body mass index, smoking, alcohol consumption, anthropometry data, and nutritional intake) for CVDs were analyzed using a logistic regression model. Participants with hyperuricemia reported a significantly higher prevalence of stroke (2.4% vs 1.3%) and IHD (5.6% vs 2.8%) than controls did (P < .001). Participants with hyperuricemia had a significantly higher aOR for CVD than the controls. The aOR of hyperuricemia for stroke was 1.22 (95% confidence interval = 1.07–1.39, P = .004). When analyzed by subgroup according to age and sex, this result was only persistent in women. The aOR of hyperuricemia for IHD was 1.45 (95% confidence interval = 1.33–1.59, P < .001). In the subgroup analyses, the results were similar, except in young men. Hyperuricemia was significantly associated with CVD in the Korean population. Introduction Hyperuricemia is caused by elevated uric acid in the blood, [1] and diagnoses have increased in the US over the past 20 years. [2] In Korea, the prevalence of gout has multiplied 4.4-fold within the last 15 years. [3] Asymptomatic hyperuricemia is related to multiple diseases, including coronary artery disease, chronic kidney disease, hypertension, and diabetes. [4] Reports show that elevated uric acid increases all-cause mortality (risk ratio [RR] 1.24, confidence interval [CI] 1.09-1.42) and cardiovascular mortality (RR 1.37, CI 1.19-1.57). [5] European guidelines on arterial hypertension state that uric acid can influence an individual's cardiovascular risk. [6] Cardiovascular disease (CVD) comprises coronary heart disease, heart failure, stroke, and hypertension, [7] and caused 17.9 million deaths globally in 2015. [8] In Korea, the ischemic heart disease (IHD) mortality rate and hospitalization rate have gradually risen in the last decade. [9,10] The percentage of people with >2 risk factors increases from 14.7% in 20 to 29-year-olds to 58.4% in those >70 years of age. [9] According to national data, cardiovascular risk factors, such as obesity, hypertension, diabetes mellitus, and dyslipidemia, have also increased. [11] Hyperuricemia leads to CVD and chronic kidney disease by pathological induction of vascular smooth muscle cell proliferation and endothelial dysfunction, inducing inflammation. [12] There are conflicting results regarding the association between serum uric acid levels and CVD. In the Framingham Heart Study, uric acid was predictive of coronary heart disease in women, but it lost its significance after adjustment. [13] In contrast, the RR for coronary heart disease incidence in hyperuricemia was 1.21 (CI 1.07-1.36, P = .003) in 1 meta-analysis. [14] The pooled RR of stroke for the high-versus-low uric acid categories was 1.22 (CI 1.15-1.30) in another meta-analysis. 
[15] However, the heterogeneity (Q = 33.6, I 2 = 67.3%) of previous meta-analyses, [14] the use of relatively old data, and the analysis of uric acid levels by grouping rather than by the hyperuricemia criteria have confounded meaningful interpretation. Our hypothesis is that elevated uric acid levels are associated with CVD. Comorbid conditions and variations in anthropometry data could influence the association between hyperuricemia and CVD. This study investigated the association between hyperuricemia and CVD using national data through a cross-sectional study design. We matched hyperuricemia patients with control participants for age, sex, income, obesity, smoking, alcohol consumption, anthropometry data, and nutritional intake. Additionally, we performed a subgroup analysis based on age and sex. Study population and data collection The use of these data was approved by the ethics committee of Hallym University (2019-02-020). The requirement for written informed consent was waived by the Institutional Review Board. This prospective cohort study used data from the Korean Genome and Epidemiology Study (KoGES) from 2004 to 2016. A comprehensive description of these data was provided in a previous study. [16] Among the KoGES Consortium, we included the KoGES health examinee (HEXA) data of urban residence participants aged ≥40 years. It consisted of baseline data from 2004 to 2013 and follow-up data from to 2012 to 2016. Survey Trained interviewers asked participants about their prior histories of cerebral stroke (ischemic or hemorrhagic) and IHD (myocardial infarction or angina). We defined hyperuricemia as >7.0 mg/dL in men [2] and >6.0 mg/dL in women, [17] as outlined in previous studies. Systolic blood pressure (mm Hg), diastolic blood pressure (mm Hg), fasting blood sugar (mg/dL), total cholesterol (mg/dL), triglycerides (mg/dL), and high-density lipoprotein (HDL) cholesterol (mg/dL) were obtained from the health checkup data. Using health checkup data of weight and height, body mass index (BMI) was calculated in kg/m 2 . Smoking history was categorized as nonsmoker (<100 cigarettes throughout life), past smokers (quit for >1 year), and current smokers. Alcohol consumption was categorized as nondrinkers, past drinkers, and current drinkers. Participant nutritional intake (total calories [kcal/day], protein [g/day], fat [g/day], and carbohydrate [g/day]) was surveyed using a food-frequency questionnaire, which was validated in a previous study. [18] Income grouping was categorized into non-respondent, low-income (<$2000 per month), middle income ($2000-$3999 per month), and high income (≥$4000 per month) categories by household income. Statistical analyses Chi-square tests were used to compare the rates of sex, income group, smoking, alcohol consumption, and stroke and IHD history. To compare age, systolic blood pressure, diastolic blood pressure, fasting blood sugar, total cholesterol, triglycerides, HDL cholesterol, nutritional intake, and BMI, independent t tests were used. A logistic regression model was used to analyze the odds ratio (OR) of hyperuricemia for stroke/IHD. Crude and adjusted models (age, income group, BMI, smoking, alcohol consumption, anthropometric data [systolic blood pressure, diastolic blood pressure, fasting blood sugar, total cholesterol, triglycerides, and HDL cholesterol], and nutritional intake [total calories, protein, fat, and carbohydrate]) were used. 
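As a minimal sketch of the crude and adjusted odds-ratio estimation described above (the published analysis was run in SPSS, and the file and column names below are hypothetical), the model could be reproduced along these lines:

```python
# Sketch of the crude and adjusted odds-ratio estimation described above.
# Column names and the input file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("koges_hexa.csv")          # hypothetical analysis file

# Sex-specific hyperuricemia definition: >7.0 mg/dL (men), >6.0 mg/dL (women)
df["hyperuricemia"] = np.where(df["sex"] == "M",
                               df["uric_acid"] > 7.0,
                               df["uric_acid"] > 6.0).astype(int)

def odds_ratio(outcome: str, covariates: list[str]) -> pd.DataFrame:
    """Logistic regression OR (with 95% CI) for hyperuricemia vs the outcome."""
    X = sm.add_constant(df[["hyperuricemia"] + covariates])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    or_ci.columns = ["OR", "2.5%", "97.5%"]
    return or_ci.loc[["hyperuricemia"]]

print(odds_ratio("stroke", []))                                  # crude OR
print(odds_ratio("ihd", ["age", "bmi", "sbp", "dbp", "fbs",      # adjusted OR
                         "tc", "tg", "hdl", "kcal"]))
```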
In the subgroup analyses according to age, the dividing point was the median age (≤52 years and ≥53 years). Two-tailed analyses were performed, and P values <.05 were considered significant. The results were analyzed using SPSS (version 24.0; IBM, Armonk, NY).

Discussion

In this study of the Korean population, the association with CVD was stronger in the hyperuricemia group than in the matched control group. When grouped according to sex, the association between hyperuricemia and CVD was not evident in men after adjusting for other possible confounders. Hyperuricemia was strongly associated with stroke and IHD in women of all ages, and with IHD only in older men. This study analyzed the largest number of subjects of any study published in the last decade and of any study conducted in Korea. The anthropometric data used in this study included various laboratory results that may affect or be affected by CVD. Using the terms "hyperuricemia," "cardiovascular diseases," "myocardial ischemia," "ischemic heart disease," and "stroke," we searched PubMed and Embase, restricting the search to English articles published before December 2021. There were 2 studies that investigated both cerebrovascular and coronary vascular disease in hyperuricemia. A Taiwanese study reported an increased risk of stroke (RR 2.00 for men and 2.75 for women) and IHD (RR 2.45 for men and 3.96 for women) in patients with hyperuricemia. [19] However, that study calculated only RRs, did not match the control and hyperuricemia groups, and was conducted in a rural population with relatively old data, so trends were difficult to demonstrate. A recent Italian study reported that the association between uric acid levels and CVD risk was observed only in men. [20] The highest quartile of uric acid (>6.5 mg/dL) in men carried an increased risk of CVD (hazard ratio [HR] 2.55 [1.41-4.62]) after adjustment. Because uric acid was analyzed in quartiles, only the men's top quartile approximated the criteria for hyperuricemia; furthermore, that study included a limited number of participants with moderate to high CVD risk. The advantage of our study is that it calculated ORs for 2 different diseases using sex-specific criteria for hyperuricemia. Patients with hyperuricemia had an increased aOR (1.22 [1.07-1.39]) of stroke, consistent with 2 meta-analyses (RR 1.22 and RR 1.41, respectively). [21,22] In our subgroup analysis, the results held only in the female participant groups (ORs of 1.38-2.00), consistent with previous studies (HR 1.32 [1.00-1.73] [19]; OR 1.888 [1.244-2.864] [23]). The risk of hemorrhagic stroke with increased uric acid was statistically significant only in women in 1 meta-analysis (HR 1.19 [1.04-1.35]). [24] The authors suggested that women's longer lifespan, greater vulnerability to depression and anxiety, and higher stress levels may account for the difference. Additionally, key risk factors for stroke are more frequent in women, and the effects of diabetes mellitus (RR 2.28) and atrial fibrillation (RR 1.99) on stroke are stronger in women than in men. [25]

Table 2. Crude and adjusted odds ratios (95% confidence interval) for stroke in the hyperuricemia and control groups.

Table 3. Crude and adjusted odds ratios (95% confidence interval) for ischemic heart disease in the hyperuricemia and control groups. *Logistic regression model, significance at P < .05. †Models adjusted for age, income group, body mass index (BMI), smoking, alcohol consumption, anthropometry data (systolic blood pressure, diastolic blood pressure, fasting blood sugar, total cholesterol, triglyceride, and high-density lipoprotein (HDL) cholesterol), and nutritional intake (total calories, protein, fat, and carbohydrate).

In 1 meta-analysis, uric acid levels showed a J-shaped trend in men and a linear trend in women for the risk of stroke, [24] while stroke risk increased significantly from a uric acid level of 6 mg/dL, which is close to the normal range. [26] In men, a compensatory mechanism for a certain amount of uric acid can therefore be assumed. Hyperuricemia was associated with IHD (aOR 1.45 [1.33-1.59]) in this study, consistent with the risk of cardiovascular events (RR 1.35 [1.12-1.62]) [27] and of coronary heart disease (RR 1.34 [1.19-1.49]) reported in 1 meta-analysis. [28] In this study, the results were consistent in all age groups, whereas in another meta-analysis the risk of coronary heart disease increased only in women (RR 1.446 [1.323-1.581]). [14] The authors suggested that differences in epidemiology and mortality may influence the results; the recurrence rate and mortality after the first event were higher in women. A recent cohort study also showed an independent correlation between hyperuricemia and coronary artery disease (OR 1.509 [1.106-2.057]) only in women. [29] In our study, the association between IHD and hyperuricemia was significantly high only in older men. In a study of uric acid level and metabolic syndrome, men had higher cutoffs than women at all ages, close to the hyperuricemia threshold (6.5 mg/dL) in patients aged <50 years. [30] Based on these findings, attention should focus on older men and on women of all ages whose uric acid levels are within normal ranges. Additional studies are required to clarify the practical role of age in adult men with hyperuricemia. Accumulating evidence indicates that hyperuricemia may be an indicator of, or contribute to, the pathogenesis of heart failure, coronary artery disease, chronic kidney disease, atrial fibrillation, hypertension, and cardiovascular death. [31] High uric acid inhibits insulin signaling and increases oxidative stress and insulin resistance in cardiomyocytes both in vitro and in vivo. [32] Hyperuricemia is associated with a larger myocardial infarction area, lower left ventricular ejection fraction, and more atrial fibrillation. [33] Moreover, high uric acid induces cardiomyocyte mitophagy activation through the reactive oxygen species/CaMKIIδ/Parkin pathway axis, a pathogenic process in CVD. [34] Patients with hyperuricemia had a higher risk of CVD in 1 meta-analysis (standardized mean difference .264 [.161-.366]) and had increased carotid intima-media thickness compared with controls. [35] Possible mechanisms linking uric acid and arterial stiffness include increased systemic inflammation and oxidative stress caused by hyperuricemia. [36] Uric acid has 2 contrasting roles, as both a pro-oxidant and an antioxidant. In experimental studies, hyperuricemia promotes the occurrence and development of CVD by regulating endoplasmic reticulum stress, insulin resistance, oxidative stress, and endothelial dysfunction.
[37] Although uric acid acts as a scavenger of free radicals and singlet oxygen, high uric acid levels lead to endothelial dysfunction and maximize platelet adhesion, [38] potentially initiating a cascade of coagulation, stimulating thrombus formation and arterial occlusion, which progress to intracranial atherosclerosis. [39] Recent studies have suggested an association between uric acid and both hypertension and metabolic syndrome, [37] which can cause stroke. In a recent animal study, increased uric acid levels activated the myocyte enhancer factor-2C-dependent and nuclear factor-κB pathways by let-7c and generating thrombosis. [40] Despite the large population database, this study had several limitations. First, data from the KoGES did not have all the records regarding potential confounders, including treatment of hyperuricemia, duration of disease, drug intake, and coronary angiography procedure; as such, the results should be interpreted with caution. Second, our results could be subjective or inaccurate compared to clinical data, as we used a questionnaire survey. However, the KoGES cohort study has been conducted consistently since 2004, and there is an advantage in terms of continuity. Above all, the fact that hyperuricemia was accurately diagnosed using blood test values of >160,000 people is a great advantage over any other study. Third, the causal relationship between hyperuricemia and CVD was not elucidated because of the cross-sectional study design. However, this study analyzed a large representative dataset of the general population in the country, resulting in strong statistical power. Lastly, our results might not be generalizable to younger people, as we only included participants >40 years of age. Despite these limitations, we demonstrated the association between hyperuricemia and CVDs, which differs according to age and sex. We found that hyperuricemia may be associated with CVD in women of all ages. An additional strength of this study was that we included anthropometric data and included a large number of asymptomatic low-risk participants. In conclusion, this study demonstrated the association between hyperuricemia and CVD, suggesting that clinicians should consider treating asymptomatic hyperuricemia. This study broadens previous findings on the potential association between hyperuricemia and CVD by considering many confounders and using a large population-matched cohort. Our study presents a possible answer to whether the level of uric acid for hyperuricemia can act as a cutoff value for the occurrence of 2 types of CVD.
Development of a novel sports medicine rotation for emergency medicine residents

Musculoskeletal complaints are the most common reason for patients to visit a physician, yet competency in musculoskeletal medicine is consistently reported as a deficiency in medical education in the USA. Sports medicine clinical rotations improve both medical students' and residents' musculoskeletal knowledge. Despite the importance of this knowledge, no standardized sports medicine curriculum exists in emergency medicine (EM). We therefore developed a novel sports medicine rotation for EM residents to improve their musculoskeletal educational experience and their knowledge of musculoskeletal medicine by teaching the evaluation and management of common musculoskeletal disorders and injuries encountered in the emergency department. The University of Arizona has two distinct EM residency programs, South Campus (SC) and University Campus (UC). The UC curriculum includes a traditional 4-week orthopedic rotation, which has consistently been rated poorly on resident evaluations. Therefore, with the initiation of a new EM residency at SC, we replaced the standard orthopedic rotation with a novel sports medicine rotation for EM interns. This rotation includes attendance at sports medicine clinics with primary care and orthopedic sports medicine physicians, involvement in sport event coverage, assigned reading materials, didactic experiences, and an on-call schedule to assist with reductions in the emergency department. We analyzed postrotation surveys completed by residents, postrotation evaluations of the residents completed by primary care sports medicine faculty and orthopedic chief residents, and the total number of dislocation reductions performed by each graduating resident at both programs over the last 5 years. While all residents in both programs exceeded the ten dislocation reductions required for graduation, residents on the sports medicine rotation reported statistically significantly higher satisfaction with their educational experience than those on the traditional orthopedics rotation. All SC residents successfully completed their sports medicine rotation, had completed postrotation evaluations by attending physicians, and had no duty hour violations while on sports medicine. In our experience, a sports medicine rotation is an effective alternative to the traditional orthopedics rotation for EM residents.

Introduction

Musculoskeletal complaints are the most common reason for patients to visit a physician and account for 92.1 million cases annually. 1 These disorders account for almost 30% of visits to primary care physicians 1 and are the most common class of complaints presenting to the emergency department (ED), representing 20% of visits. 2,3 While not all musculoskeletal disorders are emergent, they have a huge societal impact and may lead to significant disability. Physicians in many specialties, including EM, must therefore be competent in evaluating and managing these complaints. Despite the prevalence of musculoskeletal disorders, competency in musculoskeletal medicine is invariably reported as a deficiency in medical education in the USA. 4-19 This shortcoming is well documented through studies at both the undergraduate and graduate medical education levels.
5,7,[11][12][13][14][15][16][17]20 These studies have shown that medical students and residents lack confidence in their mastery of musculoskeletal medicine 5,7,12,15 and are deficient in musculoskeletal physical examination knowledge and skills. 5,[15][16][17] This deficiency is noted even among orthopedic surgery residents 16 and program directors of internal medicine departments. 6 Given this lack of confidence and deficiency of knowledge and skills at all levels of medical education, it is no surprise that a survey of practicing primary care physicians revealed that more than half of them did not feel they had adequate training in musculoskeletal medicine. 1 Additionally, 56% reported that their only training in musculoskeletal medicine was in medical school, not residency. 1 Most importantly, even when this training does occur, it is usually brief and not always directly relevant to the common disorders seen in the outpatient setting. 18 Sports medicine clinical rotations improve both medical students' and residents' musculoskeletal knowledge. 11,15,[21][22][23] In two separate studies of medical students, the only factor leading to a significant increase in musculoskeletal knowledge and confidence among the 4th-year medical students was participation in a musculoskeletal clinical elective. 11,15,21 There is also a correlation between increased objective structured clinical examination scores designed to test medical knowledge and clinical judgment (not surgical skills) with increased sports medicine experience 22 in orthopedic residents. Additionally, in a study of family medicine residents, researchers found significant improvement in residents' basic musculoskeletal medical knowledge with the introduction of a sports medicine clinical rotation and dedicated curriculum. 23 Family medicine residents in programs accredited by the Accreditation Council for Graduate Medical Education (ACGME) are currently required to spend at least 200 hours dedicated to the care of musculoskeletal problems, including a structured sports medicine experience. 24 We have identified only study assessing the knowledge of musculoskeletal medicine among EM providers. 19 This study "identified significant deficiencies among these physicians at various stages of their careers at an academic medical center as measured by a validated examination of musculoskeletal knowledge with only 61% obtaining a passing score." Along with previ-ously described data, the existing literature suggests there is a knowledge and training gap in EM as well. Most EM residency programs include a 1-month orthopedic surgery rotation during the intern year. These rotations are common, but not mandatory. Anecdotally, this rotation often involves what residents describe as "scut-work", including preoperative preparation of patients, postoperative checks, and discharging of patients. This translates to the perception of minimal teaching because orthopedic residents and attendings are often in the operating room, leaving interns to do floor work. The rotation usually focuses on the surgical management of musculoskeletal disease, rather than on the less severe but more common musculoskeletal pathology. In particular, the musculoskeletal physical examination is underemphasized in this setting. Despite the fact that the evaluation of musculoskeletal complaints is an essential skill for the emergency physician, there is no standardized sports medicine curriculum in most EM residencies. 
The University of Arizona (UA) has two distinct EM residency programs, South Campus (SC) and University Campus (UC). The UC curriculum includes a traditional 4-week orthopedic rotation, which has consistently rated poorly on evaluations by residents. For this reason, with the initiation of a new EM residency at SC in 2010, we replaced the standard orthopedic rotation with a novel sports medicine rotation for EM interns. To our knowledge, this is the first rotation of its kind in an EM program. Educational objectives The goals and objectives for the sports medicine rotation at SC (Table 1) were developed based upon the Model Curriculum and Guidelines for Curriculum Development for Emergency Medicine Residency Training created by the Society for Academic Emergency Medicine and the Council of Emergency Medicine Residency Directors. 25 This model curriculum was designed to be used as a resource and guide in developing curriculum for EM residency programs. This document includes a comprehensive list of specific goals and objectives for core content material, including orthopedics and prioritizing content items, to indicate the depth and breadth of knowledge required of a specialist in EM. The primary objectives of our sports medicine rotation are to: 1) improve EM residents' musculoskeletal educational experience, and 2) improve their knowledge in musculoskeletal medicine by teaching the evaluation and management of many common musculoskeletal disorders and injuries encountered in the ED. These objectives are achieved through a 4-week rotation with sports medicine during the first year of residency and through participation in the management of ED patients during the 3 years of the residency. While residents are on their 4-week sports medicine rotation, they are assigned to attend daily sports medicine and orthopedic clinics affiliated with the UA, including Campus Health Services. In these settings, residents are supervised by either a primary care sports medicine (PCSM) or orthopedic attending physician; Table 2 shows a sample SM rotation schedule. Residents are responsible for evaluating and managing patients who present to clinic with a variety of musculoskeletal complaints under the direct supervision of their attending physician. Clinical experience occasionally includes time in an athletic training room with a certified athletic trainer, depending upon availability. Residents are also encouraged to attend sporting events covered by the PCSM faculty and fellows as available and appropriate. In addition, all residents are assigned textbook readings covering orthopedic emergencies, as well as selected current sports medicine articles, consensus statements, and guides to the musculoskeletal physical examination. Further, their rotation includes attending sports medicine didactics and journal club during the month of their rotation, usually occurring once per month. Comparatively, the EM residency program at the UA UC includes a more traditional orthopedic rotation. This rotation involves the care of inpatient orthopedic patients, including preoperative preparation of patients, postoperative checks, discharging patients, and taking calls from the inpatient floors. The EM residents regularly provide consults on orthopedic issues for patients in the ED and on the inpatient floors. The majority of the UC EM residents' orthopedic rotation is spent in the hospital; however, they spend 1 day/week working in the sports medicine clinic with a PCSM attending. 
Finally, while the formal sports medicine or orthopedic 4-week rotation occurs during the first year of residency, residents are expected to continue to use and further develop the skills they have learned throughout their residency while seeing patients in the ED.

Evaluation and feedback

Multiple methods are used to evaluate both the sports medicine and orthopedic rotations, including faculty and chief resident assessment of resident performance, postrotation surveys by residents, and procedure log tracking. Residents on the sports medicine rotation are evaluated at the end of their rotation by PCSM faculty using a standard evaluation form based upon the six core competencies (patient care, medical knowledge, practice-based learning, interpersonal and communication skills, professionalism, and system-based practice). Residents on the orthopedic rotation are evaluated using a similar competency-based form by orthopedic chief residents (postgraduate year 5) on service during the same block. Residents from both programs are also required to record dislocation reduction procedures in their procedure logs; they are encouraged, but not required, to log other procedures such as arthrocentesis. At the current time, no formal postrotation examinations are given at the conclusion of either the sports medicine or orthopedic rotations.

Table 1. Goals and objectives for the sports medicine rotation
- Identify anatomy, mechanism of injury, presentations, complications, management, and prognosis of common musculoskeletal injuries.
- Demonstrate ability to correctly perform a history and physical examination in patients with musculoskeletal disorders, with an emphasis on the shoulder, elbow, wrist/hand, hip, knee, ankle/foot, neck, and back.
- Develop an appropriate differential diagnosis for musculoskeletal disorders.
- Interpret radiographs correctly in patients with orthopedic injuries.
- Define standard orthopedic nomenclature.
- Demonstrate ability to apply orthopedic devices, including compressive dressings, splints, and immobilizers.
- Demonstrate skill in performance of the following procedures: fracture/dislocation immobilization and reduction, and arthrocentesis.
- Outline appropriate aftercare and rehabilitation of sports medicine and orthopedic injuries, including concussions.
- Recognize, assess, and manage the rare but life-threatening sports and orthopedic injuries.

Postrotation feedback by the residents on both rotations is obtained using anonymous surveys and, more informally, through direct feedback at the end-of-the-year resident retreat.

Results

We analyzed postrotation surveys from the sports medicine and orthopedic rotations completed by the residents, as well as postrotation evaluations completed by PCSM faculty and orthopedic chief residents, for the first 5 years after the implementation of the sports medicine rotation. We also evaluated procedure logs for each graduating resident from both programs over the last 3 years. This included all five classes of interns who have completed the sports medicine rotation at SC and all three classes of SC residents who have graduated since the beginning of the SC residency program. SC and UC residents are given the same postrotation survey form. This off-service postrotation survey form was developed internally by program directors and has been used for over 5 years to evaluate resident experiences on off-service rotations.
Residents are asked to complete the survey on the New Innovations platform (New Innovations, Inc., Uniontown, OH, USA) at the end of each off-service rotation. They are sent automatic reminders from New Innovations as well as follow-up reminders from program coordinators when surveys are not completed. The surveys consist of eight questions scored on a 1-3 Likert-type scale assessing residents' satisfaction with their rotation (Q1-7: 3 = most of the time, 2 = some of the time, 1 = seldom; Q8: 3 = very helpful, 2 = helpful, 1 = not helpful). Data from individual evaluations were extracted from New Innovations into a comma-separated file, merged, and uploaded into IBM SPSS Statistics 22 (IBM Corporation, Armonk, NY, USA) for analysis. All analyses were conducted using SPSS except for the cross-tab analysis, which was conducted using R software (version 3.2.0; The R Foundation for Statistical Computing, Vienna, Austria). Data collected from the off-service postrotation survey are presented in Table 3, along with the mean score for each evaluation item for the sports medicine and orthopedic rotations. The mean scores for each evaluation item indicate that the sports medicine rotation was reviewed more favorably by residents than the orthopedic rotation. A 3×2 cross-tab analysis was conducted to assess the differences in scores between the rotations. Fisher's exact test indicated a statistically significant difference between rotations on survey items 4, 5, 6, 7, and 8, as measured against a Bonferroni-corrected P-value of <0.0055. While UC residents spend most of their orthopedic month rotating with orthopedic surgery, they do spend 1 day/week with PCSM faculty in their sports medicine clinics. Comments from UC residents regarding their experience in the sports medicine clinic included: "More time in clinic with a focus on how to manage these patients in the ED would be more useful. In-house time is mostly spent doing floorwork, which has some benefit, but becomes less so after 4 weeks." "[Sports medicine clinic] was a very good experience." In contrast, UC resident postrotation surveys regarding their inpatient orthopedic surgery experience contained comments such as: "This rotation was the epitome of scut-work as an intern. [...] there was very little training in actual orthopedics." "This is a floor work rotation. There is hardly any formal teaching for us as residents." "Not a ton of interaction with attendings. Some were not even interested in acknowledging us." Ten UC residents reported violating duty hour restrictions while on the orthopedic rotation over the last 5 years. Reasons given for these violations included working over 80 hours/week, working >16 hours during a single shift, and having <10 hours off between shifts. There were no reported duty hour violations during the sports medicine rotation.
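The per-item comparison described above can be illustrated as follows. The 3×2 cross-tabs and Fisher's exact test were actually run in R (whose fisher.test handles r×c tables), so this sketch only shows a Bonferroni-corrected threshold applied to a set of placeholder P-values, not the study data.

```python
# Sketch of the multiple-comparison handling described above. The P-values
# below are placeholders for the eight per-item tests, not study results.
from statsmodels.stats.multitest import multipletests

item_p_values = [0.30, 0.12, 0.08, 0.004, 0.001, 0.002, 0.003, 0.005]  # Q1-Q8 (hypothetical)
reject, p_adjusted, _, alpha_bonf = multipletests(item_p_values, alpha=0.05,
                                                  method="bonferroni")

print(f"Bonferroni-corrected alpha: {alpha_bonf:.4f}")   # 0.05 / number of items
for item, (rej, p) in enumerate(zip(reject, item_p_values), start=1):
    print(f"Q{item}: P = {p:.3f} -> {'significant' if rej else 'not significant'}")
```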
Although all the postrotation evaluation forms used for both programs are currently based upon the six core competencies, the format and grading system used for SC residents has changed twice over the last 5 years to more accurately reflect ACGME requirements for evaluating EM resident physicians. While a full quantitative assessment of these evaluations cannot be completed because of these changes, all SC residents and all UC residents with completed postrotation evaluations over the last 5 years received satisfactory rotation evaluations from the PCSM faculty and orthopedic chief residents. The number of dislocation reductions performed over the last 3 years by SC and UC graduating seniors was calculated from the procedure logs documented in New Innovations for academic years 2012-2013, 2013-2014, and 2014-2015. Data from one SC graduate were excluded as an outlier because this resident had globally deficient documentation of all procedures. All included residents in both programs met or exceeded the ten reductions required for graduation by the EM ACGME. The mean number of reductions performed by the last three SC graduating classes (n=21), who completed the sports medicine rotation, was 19.3 (standard deviation [SD] 7.24, range: 10-35), compared with 27.1 (SD 11.25, range: 10-57) for the last three UC graduating classes (n=53), who completed the orthopedic rotation. An independent t-test indicated that the difference in the mean number of reductions between programs (95% confidence interval [CI]: 3.26-14.07) was statistically significant (P = .029).

Discussion

The introduction of a sports medicine rotation in the PGY-1 year of EM training has been very well received by our SC residents. SC residents on the sports medicine rotation reported statistically significantly higher satisfaction with their educational experience than residents on the traditional orthopedic rotation. The mean scores on all eight postrotation survey questions were higher at SC than at UC; five of the eight questions showed a statistically significant difference, including questions related to the quality of attending teaching and the overall value to their education. While resident satisfaction does not measure resident knowledge, residents' perception of the quality of their education was consistently higher on the sports medicine rotation than on the orthopedic surgery rotation. SC residents on the sports medicine rotation also received significantly more attending feedback than UC residents on the orthopedic rotation. All SC residents successfully completed their sports medicine rotation, and all had completed postrotation evaluations by PCSM faculty. Comparatively, only 64% of UC residents had completed postrotation evaluations, and these were completed by orthopedic chief residents rather than orthopedic faculty. This was likely due to the minimal attending availability and teaching documented by UC residents on their postrotation satisfaction surveys. The current EM graduation requirements include satisfactory completion of at least ten dislocation reductions. All of our SC EM residents exceeded, on average, the number of reductions required, thus meeting or exceeding the graduation requirement for dislocation reductions.
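For illustration, the between-program comparison of logged reductions reported above could be reproduced along the following lines; the arrays are placeholders generated from the reported means, SDs, and group sizes, not the actual procedure logs.

```python
# Sketch of the independent-samples comparison of logged dislocation reductions,
# with a normal-approximation 95% CI for the mean difference. Placeholder data only.
import numpy as np
from scipy import stats

sc = np.random.default_rng(0).normal(19.3, 7.24, size=21)   # SC graduates (hypothetical)
uc = np.random.default_rng(1).normal(27.1, 11.25, size=53)  # UC graduates (hypothetical)

t, p = stats.ttest_ind(uc, sc, equal_var=False)              # Welch's t-test
diff = uc.mean() - sc.mean()
se = np.sqrt(uc.var(ddof=1) / len(uc) + sc.var(ddof=1) / len(sc))
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"mean difference = {diff:.1f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), P = {p:.3f}")
```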
However, because of a name change in the SC residency program, residents had to manually transfer each individual procedure from the procedure log of one institution into the other's New Innovations system. As a result, it is likely that some residents stopped documenting their procedures once they had recorded the number needed for graduation. Currently, we have not implemented formal postrotation examinations for any of our off-service rotations at either program. In general, off-service postrotation examinations are not required and are infrequently used to determine successful rotation completion by residents. While EM residents take an in-service training examination yearly and are ultimately required to pass their American Board of Emergency Medicine certification examinations in order to become board-certified, there is no dedicated orthopedic or sports medicine section of this examination; the orthopedic content is included in a large trauma section. To address this, we are currently working on a project to formally evaluate the core content musculoskeletal knowledge and physical examination skills of our current EM residents and faculty. In 1993, the American Boards of Internal Medicine, Family Practice, Pediatrics, and Emergency Medicine jointly established certification in PCSM. 26 While the American Academy of Family Physicians has published core sports medicine educational guidelines and requires a sports medicine resident rotation, 26 there is no published literature or guideline on sports medicine training for EM residents. Most EM programs rely on inpatient orthopedic surgery rotations to meet the musculoskeletal education curriculum requirements. To our knowledge, there has never been a formal evaluation of whether an inpatient orthopedic rotation truly meets the needs of EM residents. Theoretically, the main advantage of the orthopedics rotation is the ability to perform dislocation reductions. While our UC EM residents do not differentiate the number of reductions they perform during their orthopedic rotation from the number they perform in the ED, anecdotally they often perform very few reductions on their orthopedic rotation. While the senior residents and attendings are in the operating room, the EM residents spend most of their day managing inpatient orthopedic surgical patients, with very little time spent learning the basic clinical and cognitive skills needed to evaluate and treat most musculoskeletal complaints encountered in EM. Consistently, the orthopedic rotation is rated poorly by the residents on overall value to their education compared with the sports medicine rotation. In addition, UC residents have had several duty hour violations during their orthopedic surgery rotation over the last 5 years; these violations included working over 80 hours/week, working >16 hours during a single shift, and having <10 hours off between shifts. Our experience suggests that the implementation of a sports medicine rotation is a practical and useful alternative to the traditional orthopedic rotation for EM residents. SC residents' evaluation of the sports medicine experience has remained uniformly positive, with statistically significantly higher satisfaction.
All SC residents, on average, meet or exceed the EM ACGME graduation requirement for the number of dislocation reductions despite not rotating with orthopedic surgery; in addition, SC residents have had no duty hour violations while on their sports medicine rotation over the last 5 years. Orthopedic rotations may not provide the optimal musculoskeletal curriculum for EM residents and may not be necessary to achieve competency in dislocation reductions as measured by current residency graduation standards. Other programs may consider either adding a sports medicine component to their existing orthopedic rotation or completely replacing the orthopedic rotation with a sports medicine clinical experience. Future studies are needed and may focus on how best to achieve and evaluate clinical competency in musculoskeletal medicine for EM providers.

Limitations

There are several limitations to our study. It was performed at a single institution and, therefore, may not be generalizable to other programs. Owing to the small SC resident class size, our sample size for SC residents is small. In addition, since the beginning of the SC residency, we have had turnover in our program coordinators as well as a transition in our New Innovations program for documentation of evaluations and procedures. This may have resulted in a decreased number of completed postrotation resident surveys and procedure logs. Finally, this study does not evaluate musculoskeletal knowledge or competency among the residents, but rather documents the successful implementation of an alternative to the traditional orthopedic inpatient rotation for EM residents at our institution.
Dexrazoxane Protects Cardiomyocyte from Doxorubicin-Induced Apoptosis by Modulating miR-17-5p The usage of doxorubicin is hampered by its life-threatening cardiotoxicity in clinical practice. Dexrazoxane is the only cardioprotective medicine approved by the FDA for preventing doxorubicin-induced cardiac toxicity. Nevertheless, the mechanism of dexrazoxane is incompletely understood. The aim of our study is to investigate the possible molecular mechanism of dexrazoxane against doxorubicin-induced cardiotoxicity. We established a doxorubicin-induced mouse and cardiomyocyte injury model. Male C57BL/6J mice were randomly distributed into a control group (Con), a doxorubicin treatment group (DOX), a doxorubicin plus dexrazoxane treatment group (DOX+DEX), and a dexrazoxane treatment group (DEX). Echocardiography and histology analyses were performed to evaluate heart function and structure. DNA laddering, qRT-PCR, and Western blot were performed on DOX-treated cardiomyocytes with/without DEX treatment in vitro. Cardiomyocytes were then transfected with miR-17-5p mimics or inhibitors in order to analyze its downstream target. Our results demonstrated that dexrazoxane has a potent effect on preventing cardiac injury induced by doxorubicin in vivo and in vitro by reducing cardiomyocyte apoptosis. MicroRNA plays an important role in cardiovascular diseases. Our data revealed that dexrazoxane could upregulate the expression of miR-17-5p, which plays a cytoprotective role in response to hypoxia by regulating cell apoptosis. Furthermore, the miRNA and protein analysis revealed that miR-17-5p significantly attenuated phosphatase and tensin homolog (PTEN) expression in cardiomyocytes exposed to doxorubicin. Taken together, dexrazoxane might exert a cardioprotective effect against doxorubicin-induced cardiomyocyte apoptosis by regulating the expression of miR-17-5p/PTEN cascade. Introduction The incidence of cancer has increased in recent years, and it is speculated that 13.1 million people will die of cancer in 2030 [1]. Doxorubicin (DOX), an anthracycline antibiotic, is deemed to be one of the most effective frontline chemotherapeutic drugs for treating cancers [2]. While doxorubicin has a broad-spectrum anticancer activity, the severe adverse effects, especially life-threatening cardiotoxicity, limit its clinical application [3]. Free radical-mediated myocytes damage is the first and most thoroughly studied mechanism used to explain doxorubicin-induced cardiotoxicity [4]. Excess ROS could result in DNA damage and cardiomyocyte apoptosis [5]. Nevertheless, the precise molecular mechanism of the doxorubicin-induced cardiomyocyte apoptosis still remains poorly defined. MicroRNAs (miRNAs) are a class of noncoding RNAs about 22 nucleotides in length, which are reported to posttranscriptionally regulate target gene expression by directly binding to 3 ′ -untranslated regions (3 ′ -UTR) of target messenger RNAs [6]. It has been well recognized that a large number of miRNAs participate in regulating doxorubicininduced cardiotoxicity; thus, they could be used as potential cardiotoxicity biomarkers [7]. MiR-17-5p belongs to miR-17 family, which has been confirmed to be involved in the normal development of organisms and the survival and growth of malignant tumor [8]. A study reported that overexpression of miR-17-5p could suppress the inflammation in LPSinduced macrophages [9]. 
Furthermore, it has been found that miR-17-5p acts as an oncogene in most tumors, promoting cell proliferation and inhibiting cell apoptosis [10,11]. Moreover, a recent study has shown that miR-17-5p is downregulated in breast cancer patients with cardiotoxicity induced by epirubicin (a doxorubicin isomer) [12]. Based on these findings, we postulated that miR-17-5p may take part in the regulation of doxorubicin-induced cardiotoxicity. Dexrazoxane (DEX) is the only cardioprotective medicine approved by the FDA for preventing anthracycline-induced cardiac toxicity [13]. Numerous studies have shown that dexrazoxane can chelate iron to decrease the generation of ROS, thus preventing ROS-induced cardiomyocyte apoptosis [14,15]. However, no research has focused on the role of miRNAs in the cardioprotective effect of dexrazoxane. In this study, we aimed to investigate the molecular mechanism of the protective role of dexrazoxane in doxorubicin-induced cardiotoxicity and to determine whether miRNAs are involved in this protective effect.

Materials and Methods

Animal experiments were conducted in accordance with the Guide for the Care and Use of Laboratory Animals (NIH Publication No. 85-23, revised 1996). The mice (n = 32) were randomly distributed into a control group (Con), a doxorubicin treatment group (DOX), a doxorubicin plus dexrazoxane treatment group (DOX+DEX), and a dexrazoxane treatment group (DEX). DOX+DEX mice were pretreated with 0.1 ml of dexrazoxane solution (200 mg/kg/day, dissolved in 0.167 mol/l sodium lactate solution) 1 h before 10 mg/kg doxorubicin treatment three times a week. DOX mice were injected with the same volume of sodium lactate solution and doxorubicin. DEX mice were injected with the same volume of dexrazoxane and saline. Con mice were injected with the same volume of sodium lactate solution and saline. All mice in the four groups were euthanized 7 days after the initial injection of doxorubicin, and the dose of doxorubicin was adapted according to previous studies [16][17][18][19][20].

2.3. Cardiac Function Assessment. Echocardiography was performed using Vevo 770 and Vevo 2100 (VisualSonics) instruments at Peking University Third Hospital. Fractional shortening (FS) and ejection fraction (EF) were assessed with Vevo Analysis software (version 2.2.3) as previously described [21]. After echocardiographic examination, mice were euthanized by cervical dislocation, and the hearts were collected for cardiac histological analysis.

Cardiac Histological Analysis. Histology assays were performed on hearts and sections as previously described [22]. The mouse heart tissues were collected, fixed with 4% paraformaldehyde, processed into paraffin sections, and subsequently analyzed by hematoxylin-eosin staining according to the manufacturer's protocol (Sigma-Aldrich). The sections were imaged by microscopy.

2.7. LDH Assay. The lactate dehydrogenase (LDH) concentration was measured using an LDH assay kit according to the manufacturer's manual (Solarbio, Beijing, China) with a routine microtitre plate reader (wavelength: 572 nm).

Hoechst Staining. Stained cells were kept in the dark for 5 min and then viewed using a fluorescence microscope with a blue/cyan emission filter as described previously [24].

2.9. DNA Ladder Assay. Cells were lysed in lysis buffer (10 mM Tris-Cl pH 8.0, 150 mM NaCl, 0.4% SDS, 10 mM EDTA, and 100 μg/ml proteinase K) and incubated at 37°C overnight with gentle agitation. DNA was extracted with phenol/CHCl3/isoamyl alcohol once and CHCl3/isoamyl alcohol twice.
DNA fragmentation was detected by loading 10 μg of total DNA onto a 2% agarose gel in Tris-acetate/EDTA buffer and visualized by ethidium bromide staining as described previously [24].

2.10. Western Blotting Analysis. Protein was extracted from mouse hearts and from cardiomyocytes lysed with cell lysis buffer supplemented with protease and phosphatase inhibitors. The protein concentration was measured with the Pierce BCA Protein Assay Kit. Equal amounts of protein (10-20 μg) were electrophoresed on 12% sodium dodecyl sulfate-polyacrylamide gels and transferred to polyvinylidene fluoride (PVDF) membranes. The membranes were blocked with 5% nonfat milk for 2 h at room temperature and then probed with specific primary antibodies (1:1000) at 4°C overnight, including antibodies against caspase-3, Bax, PTEN, NF-κB, p38 MAPK, phosphorylated NF-κB, phosphorylated p38 MAPK, and GAPDH. Membranes were then incubated with secondary antibody (1:5000) for 2 h at room temperature. The signals were visualized with an ECL detection reagent. Densitometry analysis was then performed with ImageJ software (v1.8.0.112).

2.11. Cell Transfection. miRNA oligos were purchased from GenePharma Biotech (Shanghai). Cardiomyocytes were transfected with miR-17-5p mimics, mimic negative control, miR-17-5p inhibitors, or inhibitor negative control using Lipofectamine RNAiMAX (Invitrogen, USA) following the manufacturer's directions. After 24 h, the cells were treated with or without dexrazoxane and doxorubicin for another 24 h.

2.12. RNA Extraction and Real-Time Quantitative PCR. Total RNA was isolated using TRIzol (Invitrogen, Carlsbad, CA, USA). Reverse transcription was performed using a kit from New England Biolabs. The levels of mature miRNAs were quantified on the QuantStudio 3 Real-Time PCR system (Thermo Fisher Scientific, US) with SYBR Green (TaKaRa, Japan) according to the instructions. The data represent at least three independent experiments.

2.13. 3′-UTR Luciferase Assays. The 3′-untranslated regions (3′-UTRs) of target genes and their mutant variants were synthesized and digested with SacI and XhoI to generate reporter vectors containing miRNA-binding sites (Shengong Co., China). HEK-293A cells were seeded in 96-well plates and cotransfected with the luciferase reporter and miR-17-5p mimics using a transfection reagent (Vigofect, Vigorous Biotechnology, China). The cells were harvested 48 h later, and luciferase activity was detected using the Dual-Luciferase Reporter Assay System (Promega).

2.14. Statistics. All results were analyzed with GraphPad Prism 6 software (GraphPad Software, CA, USA). The data are presented as the mean ± standard error of the mean (SEM). Statistical comparisons between two groups were performed with Student's t-test, and comparisons among multiple groups with one-way ANOVA followed by Bonferroni correction. P < 0.05 was considered statistically significant.

Results

3.1. Dexrazoxane Mitigates Doxorubicin-Induced Cardiac Injury In Vivo. To explore the effect of dexrazoxane on doxorubicin-induced cardiotoxicity in vivo, we used doxorubicin-treated mice to establish a heart failure model. We observed that doxorubicin treatment resulted in a significant decrease of body weight and an increase of the heart/body weight ratio compared with control mice, whereas dexrazoxane pretreatment mitigated these changes (Figures 1(a) and 1(b)).
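Before continuing with the in vivo results, the quantification and statistics named in Sections 2.12 and 2.14 above can be made concrete. The Python sketch below illustrates the standard 2^-ΔΔCt relative-quantification step and the two comparisons described (Student's t-test for two groups; one-way ANOVA with Bonferroni-corrected pairwise follow-up tests for multiple groups). All Ct values, group sizes, and pair choices are invented for illustration; this is not the authors' analysis code.

```python
# Hypothetical illustration of 2^-ddCt relative quantification and the statistical
# comparisons described in the Methods (not the authors' code).
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_reference, control_mask):
    """2^-ddCt: normalize target Ct to the reference transcript, then to the
    mean dCt of the control group."""
    ct_target = np.asarray(ct_target, dtype=float)
    ct_reference = np.asarray(ct_reference, dtype=float)
    d_ct = ct_target - ct_reference                  # dCt per sample
    dd_ct = d_ct - d_ct[control_mask].mean()         # ddCt relative to control mean
    return 2.0 ** (-dd_ct)

# Made-up Ct values for four groups (Con, DOX, DOX+DEX, DEX), 4 samples each.
groups = ["Con"] * 4 + ["DOX"] * 4 + ["DOX+DEX"] * 4 + ["DEX"] * 4
ct_mir17 = np.array([24.1, 24.3, 23.9, 24.0,  26.8, 27.1, 26.5, 26.9,
                     25.0, 25.2, 24.8, 25.1,  24.2, 24.0, 24.4, 24.1])
ct_ref = np.full(16, 18.0)                           # reference transcript Ct
control = np.array([g == "Con" for g in groups])

rq = relative_expression(ct_mir17, ct_ref, control)

# Two-group comparison (Con vs. DOX): Student's t-test.
t_stat, p_two = stats.ttest_ind(rq[:4], rq[4:8])

# Multi-group comparison: one-way ANOVA, then Bonferroni-corrected pairwise t-tests.
f_stat, p_anova = stats.f_oneway(rq[:4], rq[4:8], rq[8:12], rq[12:16])
pairs = [(0, 4), (4, 8), (8, 12)]                    # Con-DOX, DOX-(DOX+DEX), (DOX+DEX)-DEX
raw_p = [stats.ttest_ind(rq[a:a + 4], rq[b:b + 4]).pvalue for a, b in pairs]
bonferroni_p = [min(p * len(raw_p), 1.0) for p in raw_p]

print(f"Con vs DOX t-test p = {p_two:.4f}")
print(f"ANOVA p = {p_anova:.4f}; Bonferroni-corrected pairwise p = {np.round(bonferroni_p, 4)}")
```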
Echocardiographic analysis of ejection fraction (EF, %) and fractional shortening (FS, %) indicated that doxorubicin induced loss of heart function in vivo, whereas dexrazoxane treatment attenuated this loss significantly (Figures 1(c) and 1(d)). Hematoxylin-eosin staining showed that a large number of inflammatory cells accumulated in the heart tissue and that the structure of the heart tissue was disordered in the doxorubicin (DOX) group compared with the control (Con) group. Nevertheless, dexrazoxane significantly decreased inflammatory cell accumulation and preserved the myocardial structure (Figures 1(e) and 1(f)). Together, these data suggest that dexrazoxane has a potent effect on preventing cardiac injury induced by doxorubicin.

3.2. Doxorubicin Decreases Cell Viability and Promotes Cardiomyocyte Apoptosis In Vitro. We used the MTT assay to measure the viability of primary cardiomyocytes after doxorubicin treatment. As shown in Figure 2(a), doxorubicin treatment reduced cardiomyocyte viability in a concentration-dependent manner. Western blotting analysis showed that active (cleaved) caspase-3 substantially increased in a dose-dependent manner after doxorubicin treatment in cardiomyocytes (Figures 2(b) and 2(c)). These results suggest that doxorubicin reduces myocyte viability and induces cardiomyocyte apoptosis. Moderate cardiomyocyte injury was obtained with 1 μM doxorubicin, at which cell viability declined by about 30% and cleaved caspase-3 significantly increased. Therefore, we used 1 μM doxorubicin in the subsequent experiments to generate the in vitro doxorubicin-induced cardiomyocyte toxicity model.

3.3. Dexrazoxane Ameliorates Doxorubicin-Induced Cardiomyocyte Apoptosis. Given that apoptosis plays an important role in doxorubicin-induced cardiotoxicity [25], we subsequently determined the effect of dexrazoxane on cardiomyocyte apoptosis. We treated cardiomyocytes with different concentrations of dexrazoxane prior to doxorubicin exposure. Western blotting showed that the expression of cleaved caspase-3 in the doxorubicin plus dexrazoxane (DOX+DEX) group was significantly lower than that in the doxorubicin (DOX) group alone. The effect of dexrazoxane peaked at a concentration of 200 μM (Figures 3(a) and 3(b)). Thus, we applied 200 μM dexrazoxane to cardiomyocytes before doxorubicin treatment. The MTT assay showed that pretreatment with dexrazoxane improved the cardiomyocyte viability that had been decreased by doxorubicin (Figure 3(c)). LDH release, another marker of cellular damage, was dramatically increased by doxorubicin and clearly reduced in the DOX+DEX group (Figure 3(d)). Hoechst staining revealed that DNA condensation and fragmentation, indicators of apoptosis, were induced by doxorubicin; nevertheless, the number of apoptotic cardiomyocytes was markedly reduced by dexrazoxane (Figures 3(e) and 3(f)). In addition, DNA laddering was observed in doxorubicin-treated cardiomyocytes and was diminished by dexrazoxane (Figure 3(g)). Western blotting also revealed that active (cleaved) caspase-3 and Bax were clearly decreased in the DOX+DEX group compared with the DOX group. Moreover, we found that doxorubicin treatment increased the levels of phosphorylated p38 MAPK and phosphorylated p65, components of an important inflammatory signaling pathway, whereas pretreatment with dexrazoxane reversed this effect (Figures 3(h)-3(l)).
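For reference, the echocardiographic indices reported at the start of this section (Section 2.3, Figures 1(c) and 1(d)) are conventionally defined as below. These are an assumption based on standard M-mode practice (LVIDd/LVIDs are the left-ventricular internal diameters at end-diastole/end-systole, EDV/ESV the corresponding volumes); the exact formulas implemented in the Vevo Analysis software may differ.

$$\mathrm{FS}\,(\%) = \frac{\mathrm{LVIDd}-\mathrm{LVIDs}}{\mathrm{LVIDd}}\times 100, \qquad \mathrm{EF}\,(\%) = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}}\times 100$$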
Our study showed that dexrazoxane may protect against doxorubicin-mediated cytotoxicity and apoptosis via the p38 MAPK/NF-κB signaling pathway.

3.4. miR-17-5p Directly Targets PTEN by Interacting with Its 3′-UTR. As shown in Figure 4(a), the expression of several miRNAs was analyzed in doxorubicin-treated cardiomyocytes; the change in miR-17-5p expression was the most pronounced of all the genes examined. Sequence analysis showed that miR-17-5p is highly conserved among mouse, rat, and human (Figure 4(b)). Next, we tried to identify direct downstream targets of miR-17-5p using the target prediction programs TargetScan, miRBase, and PicTar. We analyzed 8 candidate genes with miR-17-5p binding sites in their 3′-UTRs. We cloned the wild-type 3′-UTR and the binding-site-mutated 3′-UTR of each candidate gene into a pmirGLO vector and performed dual-luciferase reporter assays to verify whether miR-17-5p binds directly to these genes. Our data indicated that phosphatase and tensin homolog (PTEN), an apoptosis-related gene, is a potential molecular target of miR-17-5p in cardiomyocytes (Figure 4(c)). To investigate the direct target gene of miR-17-5p, we cotransfected miR-17-5p mimics with the luciferase reporters into HEK293A cells. The relative luciferase activity of the PTEN 3′-UTR reporter was clearly decreased compared with the control vector. However, when the miR-17-5p binding site in the 3′-UTR of PTEN was mutated, the relative luciferase activity of the PTEN 3′-UTR reporter was comparable to that of the control vector (Figure 4(d)). Western blotting showed that PTEN expression was significantly decreased after miR-17-5p mimic transfection and increased after miR-17-5p inhibitor transfection in cardiomyocytes (Figures 4(e)-4(h)). Thus, the data suggest that miR-17-5p can directly bind to the PTEN 3′-UTR and inhibit its expression.

Discussion

In the present study, we demonstrated that dexrazoxane protects heart function and prevents doxorubicin-triggered apoptosis. Additionally, dexrazoxane could inhibit doxorubicin-induced cardiomyocyte apoptosis by regulating the miR-17-5p/PTEN cascade. As far as we know, this is the first report to show that dexrazoxane may protect cardiomyocytes from doxorubicin-induced injury by modulating miRNA expression and its downstream signaling pathway. Apoptosis, known as type I cell death, plays a key role in the development and progression of cardiovascular disease [26][27][28]. Apoptosis can be initiated via extrinsic and intrinsic pathways [29]. A large number of studies have shown that cardiomyocyte apoptosis is a key feature of doxorubicin-induced cardiotoxicity [30]. Consistent with previous studies [31,32], we observed a significant decrease of heart function in doxorubicin-treated mice, accompanied by an increase of the heart/body weight ratio. Moreover, our in vitro experiments confirmed that doxorubicin reduces cell viability and causes cardiomyocyte apoptosis, as evidenced by increased expression of cleaved caspase-3 and Bax. As mentioned above, dexrazoxane is the only cardioprotective drug approved by the FDA to counteract doxorubicin-induced cardiac damage [13]. However, the exact protective mechanism of dexrazoxane remains elusive. Multiple mechanisms have been proposed to contribute to the protective effects of dexrazoxane. Some studies revealed that dexrazoxane can chelate iron to reduce the generation of ROS [4,33]. Bures et al.
[34] found that dexrazoxane could prevent doxorubicin from binding to the topoisomerase 2β complex. Our results showed that dexrazoxane improved cardiac function and blocked cardiomyocyte apoptosis, in line with the results of Suzuki et al. [35]. The cardiotoxicity induced by doxorubicin has been associated with inflammatory cytokines, many of which are modulated by mitogen-activated protein kinases (MAPKs) [36]. Many studies have demonstrated that doxorubicin can increase the levels of phosphorylated p38 MAPK and phosphorylated NF-κB, which play an important role in the activation of inflammation [37,38]. Thandavarayan et al. showed that Schisandrin B could prevent doxorubicin-induced inflammation through inhibition of p38 MAPK signaling [39]. In addition to regulating inflammation, p38 MAPK also controls the cell cycle and apoptosis. Ni et al. and Shati et al. [40,41] found that doxorubicin promotes the activation of p38 MAPK, thereby promoting apoptosis. Similarly, we found that doxorubicin activated p38 MAPK and NF-κB, whereas dexrazoxane repressed this process. These results suggest that dexrazoxane protects against doxorubicin-induced apoptosis and inflammation through the p38 MAPK/NF-κB signaling pathway. There is a large body of evidence supporting the involvement of miRNAs in doxorubicin-induced cardiotoxicity [42][43][44]. Moreover, previous studies have shown that miR-17-5p plays a vital role in tumor cell survival and has antiapoptotic properties [45,46]. In this study, the level of miR-17-5p was markedly decreased in doxorubicin-treated cardiomyocytes, an effect that was reversed by dexrazoxane, and overexpression of miR-17-5p reduced doxorubicin-induced apoptosis in cardiomyocytes. PTEN (phosphatase and tensin homolog on chromosome 10) is a tumor suppressor that dephosphorylates PIP3, thereby inhibiting the AKT/mTOR pathway [47]. Hu et al. [48] found that decreasing miR-21 could exert an antiapoptotic effect by targeting PTEN in rats. In addition, Yuan et al. [49] determined that miR-19b and miR-20a may suppress myeloma cell apoptosis by targeting PTEN. Our studies also confirmed that PTEN is a pro-apoptotic gene. Fang et al. [50] reported that miR-17-5p could induce drug resistance and invasion of ovarian carcinoma cells by targeting PTEN signaling. Lu et al. [51] discovered that the long noncoding RNA HOTAIRM1 inhibits cell progression by regulating the miR-17-5p/PTEN axis in gastric cancer. Consistent with these previous studies, our work verified that PTEN is a target gene of miR-17-5p and that dexrazoxane might alleviate doxorubicin-triggered apoptosis via the miR-17-5p/PTEN signaling pathway.

Conclusions

In this study, we showed that dexrazoxane prevents doxorubicin-induced cardiotoxicity by ameliorating apoptosis. We demonstrate for the first time that miR-17-5p plays a key role in the cardioprotective effect of dexrazoxane, which may help to better understand the cardioprotection of dexrazoxane, and we propose miR-17-5p as a potential molecular target in the future treatment of doxorubicin-induced cardiotoxicity. Moreover, the present findings may offer new insight into novel drug targets and new therapeutic strategies to protect against doxorubicin-induced cardiotoxicity.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Ethical Approval This study was approved by the Animal Care and Treatment Committee of Beijing Hospital Animal Use and Care Committee and Beijing Normal University Animal Use and Care Committee and the Guide for Care and Use of Laboratory Animals (NIH Publication # 85-23, revised 1996). Disclosure Xiaoxue Yu and Yang Ruan are co-first authors.
METAL CUTTING: IS IT STILL OF INTEREST TO ANYONE?

For almost 100 years, the phenomena of the metal cutting process have offered researchers in the field a wide range of research topics and, at the same time, much satisfaction, both in terms of deepening theoretical knowledge and, especially, in terms of the practical results obtained. Interest in this field has, however, declined dramatically since the beginning of the third millennium. Has research on the cutting process reached the limits of knowledge by exhausting them, or has cutting become inefficient for industry compared with newer processes for manufacturing metallic, non-metallic, and composite materials? Why is the field no longer as attractive to researchers? This is what this paper tries to clarify, and it proposes that researchers in the field reinvent their approach to the cutting process, which remains incompletely explored and still offers an excellent perspective, not only for the manufacturing industry but also for the theoretical foundations of the cutting phenomenon.

INTRODUCTION

Metal cutting (chipping), as practitioners know, is the method by which, under the action of relative movements between the workpiece and the cutting tool, the surplus material is divided into layers and then removed in the form of chips until the shapes, dimensions, and surface quality indicated in the execution drawing of a part are obtained. Some processes of the method have been known since prehistory and are documented by artifacts, most often non-metallic, that underwent such transformations (carving, trepanation, drilling, scratching, grinding, etc.), and later by inscriptions and pictograms made on various supports describing different working processes (e.g., in ancient Egypt a rotating tool, the wimble, was used to drill stone; such tools are found in antiquities museums around the world). During the Middle Ages (or rather, between Antiquity and the Renaissance), the cutting processes developed less as a method and more in terms of the number of objects subjected to processing and the diversification of the fields of use, without, however, any recorded concern to explain the processes through cause-effect relationships or to theorize the accompanying physical phenomena. By the end of the 19th century, the demand for objects produced by processes specific to the cutting method led to the development of rudimentary installations, usually powered by human force and less often by falling water, and of tools, devices, and other aids. After 1850, the first concerns appeared regarding the explanation of how chips form and of the phenomena underlying this process, for the most common procedures such as drilling and turning. Thus, around 1870 the first attempts to explain the phenomenon of chip formation are mentioned, and between 1881 and 1883 Arnulph Henry Reginald Mallock revealed the importance of chip shearing and of friction at the tool-workpiece interface [1], as well as the influence of cooling-lubricating fluids [2]. With the passage into the twentieth century, work in the field intensified in proportion to the scale of the industrial revolution and the transition to mass production, both from the point of view of the materials subjected to cutting, the cutting tools, the cutting processes, and the related machine tools, and in terms of researching the phenomenology of metal cutting and physically explaining the cause-and-effect relationships involved. The experimental equation of F.W.
Taylor from 1907, which links the durability (tool life) of the cutting edge to the cutting speed, can be considered the beginning of experimental modeling of cutting phenomena for the purpose of their management, control, and prediction. The last century of the 2nd millennium was characterized by a large and rapid development of industrial production in which cutting processes occupied an increasingly significant share, so that in the first years of the current millennium about 30% of all manufactured parts involve cutting operations: turning, broaching, drilling, milling, threading, gear cutting, grinding, honing, lapping, etc. At the same time, the interest of producers has moved from quantity to quality, optimization, efficiency, maximization of profit, minimization of the resources used, etc. [2,3]. Naturally, the attention paid to the phenomenology of metal cutting has increased proportionally. Research topics belonging to all the cutting processes have been approached, looking for answers regarding the influence of:
- the parameters of the metal cutting process,
- the cutting tool geometry,
- the quality of the workpiece materials and the tool materials,
- cooling-lubricating fluids,
- the structural/kinematic composition of machine tools,
- tool fastening/fixing/driving systems,
- the clamping and fixturing systems for workpieces,
on the following aspects (without this list being exhaustive):
- cutting forces,
- the power/mechanical work consumed by the operation,
- the durability of the cutting tool edge,
- the machinability of the material,
- the shape of chips,
- the time of the phase/cutting operation,
- the dynamic stability of the process,
- the temperature released during the process,
- the quality of the worked surface,
- the precision of processing.
The results of research in the field have been published, all over the world and especially in the countries that have been on the front line of industrialization, in the form of research/work reports, scientific papers, dissertations, and PhD theses. If all these results were gathered in one place (current technology would allow the allocation of a "cloud" for this), the result would be a vast database which, analyzed in terms of how it was, is, or may be used, would show the following:
- A very small part of this huge database has been exploited by adding to and supplementing the theoretical knowledge that leads to the description and understanding of the phenomena of chip formation.
- Another small part of this research has been exploited by the large companies producing cutting tools and machine tools, respectively, by compiling interactive libraries used to choose the type of tool, its geometry, and the working parameters most appropriate to the desired machining by cutting.
- Another part was used to improve the performance of various cutting processes, usually for particular cases that generated the need for research and that therefore lost their validity over time or became obsolete.
- Most of the results of the published applied research have neither found their expected echo nor been exploited in a sustainable way.
The above approximations of the proportions in which the results of metal cutting research have been used, and of the way in which this was done, give an overall picture of the decrease in interest in research work on the cutting of metals, non-metals, and composites. Let us try to see why, because only by knowing the cause can the effect be eliminated (sublata causa, tollitur effectus).
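The Taylor relationship mentioned at the start of this section is not written out in the paper. For reference, the commonly cited form of Taylor's 1907 tool-life equation is recalled below; this is an assumption based on the standard literature rather than a formula taken from the paper, where v is the cutting speed, T the tool life, and n and C empirically determined constants:

$$v \cdot T^{\,n} = C$$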
BASIC CONCEPTS

First of all, let us try to delimit the area of interest of the present argumentation. The research directions taken into consideration refer to the classical machining processes belonging to the cutting method. The research work has been carried out in response to two distinct needs:
1. The first is connected with the need to research the phenomenological bases of the method. These studies were approached first in universities with a scientific and technical profile and then also in departmental research institutions, governmental or private. In these cases, the research funds are not allocated mainly to research in the field of metal cutting itself but to the accompanying physical phenomena (temperatures, vibrations, mechanical work of elastic/plastic deformation, erosion, friction, chemical dissociation, etc.), and the topics are chosen according to the curiosity, interest, or skills of the research team participating in the national or international research grant competitions.
2. The second "engine" is the emergence of new materials with mechanical characteristics so different from those of existing ones that, for their processing by cutting, the manufacturing companies can neither afford to extrapolate the working parameters of the usual processes nor assume the risks of not knowing the output quantities of the cutting process in the form of forces, torques, power consumption, temperatures released in the process, emissions, the quality of the surfaces obtained, the microstructural integrity of the processed parts, etc. Research funds in these cases are allocated on the basis of topics specified by the private beneficiary through a research contract. The goal is usually related to the cutting behavior of new materials and to recommended working regimes to improve the quality of the processed surface, reduce cutting forces, or introduce new types of cutting tools, cooling-lubricating fluids, or new load-bearing, kinematic, or drive structures for advanced machine tools.
Regardless of how the research was initiated or of how the research contracts were financed, the type of research approached is experimental research, and its results are, almost without exception, experimental computational (empirical) relationships, nomograms for choosing cutting regimes (the depth of cut ap, the feed f, and the cutting speed v), or pairs of their recommended values for various combinations of cutting process, tool material, workpiece material, quality of the processed surface, type of cooling-lubrication, specific working conditions, etc. For example, Figure 1 shows a generic form of the experimental relationship very often used in the last 50...70 years to describe one output quantity of the process, the cutting force (the cutting forces can be measured in terms of three components: the cutting force Fc, the feed force Ff, and the passive force Fp; only the index in the formula changes: c, f, p).
Analyzing the relationship, it is found that it takes into account the following components of the process (the numbers shown in Figure 1 merely label its elements and are omitted here):
- the input variables, namely the three parameters of the working regime v, f, and ap, affected by the specific exponents xF, yF, and zF;
- a product Π of a series of coefficients Ki, from i = 1 to n, which take account of the influences of: cooling-lubrication; the tool material (steel, cemented carbides, diamond, etc.); the material of the blank (steel, cast iron, non-ferrous, non-metallic), through its hardness HB; the type of cutting (orthogonal, cutting, profiling); the type of feed (longitudinal, tangential, mixed); and the shape of the tool clearance face (with or without a chip breaker, with or without a facet, etc.);
- a coefficient CF resulting from the processing of the experimental data, which, in principle, lumps together all the other working conditions present during the experiments that cannot be individualized by Ki-type coefficients.
In generic form, the relationship therefore reads F = CF · v^xF · f^yF · ap^zF · Π(i = 1..n) Ki, with the index c, f, or p selecting the force component.

ARGUMENTATIVE EXAMPLES OF INTEREST IN CUTTING RESEARCH

Experiments, as a rule, are carried out in a number dictated by a previously adopted experimental program, in correlation with the type of mathematical regression that will be used for processing the experimental data. The collection of experimental data was done initially by hand, using analog mechanical measuring equipment, then with strain-gauge dynamometers and electronic measuring bridges, and in the last quarter of a century with piezoelectric dynamometers and digital conversion of the measured signals. The equipment for processing the experimental results has evolved from manually drawn and rationally interpreted graphs to systems for automatic data collection and processing using dedicated software and high-performance electronic computing systems. Figure 2 exemplifies the graphical processing of the experimental results, with the specification that each of the points on the graph is actually the "pole" of a "cloud" of experimental data. (Fig. 2. Graphical representation of the influence of the depth of cut ap upon the cutting forces in orthogonal turning.)

Numerous other studies of the influence of the various parameters of the cutting process or of the cutting tool on wear, tool durability, machining quality and accuracy, chip dynamics [4], high-speed turning [5], etc. are not much different. For example:
- Figure 3 shows the result of a study from 25 years ago on the flank wear (VB) of a turning tool tipped with cemented carbide inserts as a function of the depth of cut (ap) in turning, for three types of workpiece material;
- Figure 4 shows the result of a study from 20 years ago on the degree of chip breaking in adaptive turning in which the longitudinal feed is the automatically adjusted parameter. The cited study also confirmed the aspects revealed by other researchers regarding the variation of the chip thickness, which leads to the appearance of phenomena such as vibrations, variations in the magnitude of the components of the total turning force, acoustic phenomena, premature wear of the tool, and a reduction of the dynamic stability range of the turning process [6].
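To make the structure of this empirical relationship concrete, here is a minimal Python sketch of the generic force formula reconstructed above. All numerical constants, exponents, and correction factors in the example call are placeholders chosen for illustration; the paper itself reports no specific values.

```python
# Illustrative sketch of the generic empirical cutting-force relationship
# F = C_F * v**x_F * f**y_F * ap**z_F * prod(K_i) described around Figure 1.
# All numeric values below are placeholders, not data taken from the paper.
from math import prod

def cutting_force(v, f, ap, C_F, x_F, y_F, z_F, correction_factors=(1.0,)):
    """Empirical cutting-force estimate.

    v, f, ap           : cutting speed, feed, depth of cut (units fixed by C_F)
    C_F, x_F, y_F, z_F : regression constant and exponents from experimental data
    correction_factors : K_i coefficients for cooling-lubrication, tool material,
                         workpiece hardness, chip-breaker geometry, etc.
    """
    return C_F * (v ** x_F) * (f ** y_F) * (ap ** z_F) * prod(correction_factors)

# Hypothetical constants for a turning operation (for illustration only).
Fc = cutting_force(v=120.0, f=0.25, ap=2.0,
                   C_F=1500.0, x_F=-0.15, y_F=0.75, z_F=1.0,
                   correction_factors=(0.9, 1.05))
print(f"Estimated main cutting force Fc = {Fc:.0f} (arbitrary units)")
```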
In Figure 5, the result of a study on the influence of the presence of cooling-lubrication in the processed area on the durability of a tool edge tipped with cemented carbide inserts is presented.
- The impact of the cutting parameters on the surface roughness and dimensional accuracy of hardened steel machined with CBN cutting tools was also studied experimentally, highlighting the variation of vibration, cutting forces, and tool wear under various cutting conditions [7].
- Other experimental study topics concerned the milling of composite materials, for example to optimize and compare tilted helical milling processes in the case of carbon- and glass-fiber-reinforced polymer composites. An uncoated carbide end mill was used, and the microstructure was finally analyzed using optical-digital and scanning electron microscopy, respectively [8].
- Many other theoretical and/or experimental scientific papers aimed at making the machining process more efficient, especially at optimizing the costs of processing by cutting through the control of the working parameters. There are hundreds of such studies, but here we recall only the remarkable theoretical results obtained by the pioneers Konig & Depireaux [9], Spur [10], Duca [11], and Solomentsev [12], and the practical ones of E. Dodon [13], among many others.

FINAL REMARKS

The authors hope that this paper will remind researchers in the field, especially PhD students and young researchers, that metal cutting was, is, and will remain for a few more decades one of the predominant methods used in the manufacture of material goods. Moreover, from the paper emerges the idea that the physical phenomena that govern the cutting process are incompletely known, being described mainly on the basis of experimental observations, and that the process is therefore far from being able to be conducted efficiently. In other words, the paper is a plea, an encouragement for the sustained resumption of research in the field, introducing into the working instruments the latest achievements of audio-video monitoring techniques, of non-destructive investigation, of post-process electron-microscopic, metallographic, spectrographic, and X-ray diffractometric examination, etc., of experiment planning, and of data acquisition and processing using modern computing and simulation techniques. Consider what has been presented here as an encouragement, as necessary for the brilliant researcher as for the beginner, just as rosin is necessary for the bow of the most virtuoso violinist.
Novel FOXM1 inhibitor identified via gene network analysis induces autophagic FOXM1 degradation to overcome chemoresistance of human cancer cells FOXM1 transcription factor is an oncogene and a master regulator of chemoresistance in multiple cancers. Pharmacological inhibition of FOXM1 is a promising approach but has proven to be challenging. We performed a network-centric transcriptomic analysis to identify a novel compound STL427944 that selectively suppresses FOXM1 by inducing the relocalization of nuclear FOXM1 protein to the cytoplasm and promoting its subsequent degradation by autophagosomes. Human cancer cells treated with STL427944 exhibit increased sensitivity to cytotoxic effects of conventional chemotherapeutic treatments (platinum-based agents, 5-fluorouracil, and taxanes). RNA-seq analysis of STL427944-induced gene expression changes revealed prominent suppression of gene signatures characteristic for FOXM1 and its downstream targets but no significant changes in other important regulatory pathways, thereby suggesting high selectivity of STL427944 toward the FOXM1 pathway. Collectively, the novel autophagy-dependent mode of FOXM1 suppression by STL427944 validates a unique pathway to overcome tumor chemoresistance and improve the efficacy of treatment with conventional cancer drugs. INTRODUCTION Forkhead box (FOX) protein M1 (FOXM1) is a transcription factor with pronounced pro-oncogenic functions [1,2]. It is overexpressed in the majority of human cancers and impacts all hallmark tumor aspects, including proliferation, survival, metastasis, inflammation, angiogenesis, and treatment resistance [3][4][5]. Due to this, FOXM1 serves as a crucial regulator of tumor development, and its overexpression portends a poor prognosis for patients, promoting aggressive tumor phenotype and high resistance to current therapeutic approaches [3,5]. Inhibition of pro-oncogenic regulators with small molecules is a popular and established approach in current clinical practice. However, targeting of transcription factors has been particularly challenging. A growing number of direct and indirect pharmacological FOXM1 inhibitors have been identified, including thiostrepton [22], honokiol [23], bortezomib [24], siomycin A [25], curcumin [26], SR-T100 [27], FDI-6 [28], RCM-1 [29], and DFS lignan [30]. FOXM1 inhibition efficiently sensitizes cancer cells to conventional chemotherapy, yet the often unknown inhibitory pathways of these compounds or their off-target actions exhibit undesired secondary effects like general proteasome inhibition or possible activity toward other targets, especially other FOX proteins. Therefore, there is an urgent need to develop efficient and selective agents with a clear mode of action against FOXM1 activity. Here we use a gene network analysis approach to discover a novel small molecule STL427944 that selectively targets FOXM1 pathway. This compound suppresses FOXM1 protein through a novel two-step mechanism that includes translocation of nuclear FOXM1 protein to the cytoplasm and its subsequent autophagic degradation. STL427944 treatment results in sensitization of cancer cells to multiple chemotherapeutic agents. We also provide transcriptome-supported evidence that STL427944 exhibits selectivity toward suppressing FOXM1-controlled regulatory pathways. 
The unique mode of action revealed by our studies, which, unlike previously reported [31], does not respond to proteasome inhibitors, establishes a novel pathway to target this master regulator of chemoresistance in multiple cancers. RESULTS Transcriptomic analysis identifies small molecules disrupting FOXM1 pathway The development of pharmaceutical agents inhibiting prooncogenic proteins is a major area in cancer treatment research. Historically, these studies have been conducted in a target-centric way, focusing on molecules directly interacting with a protein of interest. However, this requirement for direct binding significantly limits the number of options, while usage of a single target for the initial screening increases the chances of identifying agents with unwanted nonspecific effects. Recently, Pabon et al. [32] adopted a different, network-centric strategy. Transcriptomic and proteomic data were used to identify agents affecting specific disease pathways, with the goal of revealing novel targets leading to specific inactivation of the whole pro-oncogenic pathway. This approach leverages the whole network of protein interactions that could impact the protein of interest by either direct binding to it or indirect binding to a member of the network [33]. We applied this new network-based screening concept to identify potential small molecule inhibitors of the FOXM1 pathway activity, using differentially expressed (DE) gene signatures from the National Institutes of Health's Library of Integrated Network-Based Cellular Signatures (LINCS) L1000 dataset. Unfortunately, LINCS database does not contain datasets describing transcriptomic effects of FOXM1 knockdown. However, our previous findings demonstrated that FOXM1 activity and protein level are strongly dependent on its interaction with nucleophosmin (NPM) [34]. We therefore compared transcriptional profiles between knockdowns of NPM1 gene (as a proxy to FOXM1 knockdown) and responses of the same cell types to thousands of distinct bioactive compounds. This screen resulted in 264 compounds that showed either a direct correlation with the NPM1 knockdown or an indirect correlation with the knockdown of an NPM-binding partner (for additional details, see [33]). Furthermore, excluding kinase inhibitors and compounds not available for purchase, we compared the profiles of these hits with the changes in expression of established FOXM1 targets [35,36] present in our dataset across seven cancer cell lines (A549, MCF7, VCAP, HA1E, A375, HCC515, and HT29). As expected, all eight genes are downregulated by NPM1 knockdown or knockdowns of major FOXM1 targets AURKB and MYC (Table 1). This analysis highlighted STL427944 (C 25 H 23 N 7 O 4 , PubChem CID 9592990) and benzamil (C 13 H 15 C l2 N 7 O, PubChem CID 108107), two top hits predicted to disrupt NPM-FOXM1 gene network. Consensus changes in FOXM1 targets expression suggested that STL427944 should be a more potent and universal inhibitor of FOXM1-dependent network than benzamil (Table 1), therefore, we proceeded with experimental characterization of STL427944 (henceforth referred to as "STL"). STL treatment suppresses FOXM1 protein levels in human cancer cells To experimentally confirm FOXM1-suppressing effect of STL, we used human cancer cell lines of different origin (Supplementary Table 1). Treatment with STL resulted in dose-dependent reduction of FOXM1 protein levels in all examined cell lines (Figs. 1, 2a). 
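As a conceptual illustration of the network-centric screen described above (comparing compound-induced expression signatures with an NPM1-knockdown signature used as a proxy for FOXM1 pathway inactivation), the following Python sketch ranks hypothetical compounds by the rank correlation of their signatures with the knockdown profile. The signatures and compound names are invented for illustration; this is not the authors' actual LINCS analysis pipeline.

```python
# Conceptual sketch (not the authors' pipeline): rank compounds by how well their
# differential-expression signature matches a knockdown signature used as a proxy
# for FOXM1 pathway inactivation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_genes = 978  # roughly the size of an L1000 landmark-gene signature

# Hypothetical z-scored signatures (one value per gene).
knockdown_signature = rng.normal(size=n_genes)          # e.g., NPM1 shRNA profile
compound_signatures = {
    "compound_A": 0.6 * knockdown_signature + rng.normal(scale=0.8, size=n_genes),
    "compound_B": rng.normal(size=n_genes),
    "compound_C": -0.4 * knockdown_signature + rng.normal(scale=0.9, size=n_genes),
}

ranking = []
for name, signature in compound_signatures.items():
    rho, _p = spearmanr(signature, knockdown_signature)  # rank correlation
    ranking.append((name, rho))
ranking.sort(key=lambda item: item[1], reverse=True)

for name, rho in ranking:
    print(f"{name}: Spearman rho vs. knockdown signature = {rho:+.2f}")
```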
Prominent FOXM1 suppression was often achieved with STL concentrations as low as 5-10 μM (LNCaP, PC3, and A549 cells) with maximum efficiency reached at 25-50 μM. Since FOX proteins display significant structural homology [37], we evaluated FOXO1 and FOXO3A levels ( Supplementary Fig. 1) and confirmed that STL does not suppress FOX proteins in general. STL targets FOXM1 protein to lysosome-mediated degradation FOXM1 inhibitors may exert their action at multiple levels, including transcriptional, translational, and post-translational effects. To further investigate how STL suppresses FOXM1, we utilized an experimental model based on U2OS human osteosarcoma cells previously described as C3-luc [25] that express EGFP-FOXM1 fusion protein controlled by doxycycline-inducible promoter. Due to the positive autoregulatory loop [38], exogenous EGFP-FOXM1 protein also promotes the expression of endogenous FOXM1. Treatment of doxycycline-stimulated C3-luc cells with STL drastically decreased the levels of both endogenous and exogenous FOXM1 in a dose-dependent manner starting with 2.5 μM (Fig. 2a). To exclude possible STL effects on FOXM1 recognition by the antibody, the same samples were additionally probed for GFP levels with a similar result (Fig. 2a). The ability to suppress EGFP-FOXM1 expression driven by doxycyclinecontrolled promoter indicates that STL effect on FOXM1 is not dependent on any signaling pathway regulating the activity of endogenous FOXM1 promoter. We therefore evaluated possible effects of STL upon FOXM1 mRNA stability and translation efficiency. To investigate mRNA-related effects of STL, we evaluated the levels of all FOXM1, exogenous EGFP-FOXM1 only, and short-lived MCL1 transcripts after treatment with STL or general transcription inhibitor actinomycin D (ActD). While ActD prominently reduced the levels of inspected genes, STL treatment did not significantly affect them (Fig. 2b, Supplementary Fig. 2), ruling out the possibilities of STL acting as a global transcription inhibitor or inducing prominent FOXM1 mRNA degradation. At the same time, STL reduced the expression of FOXM1 target gene AURKB in a dose-dependent manner, indicating functional FOXM1 inactivation ( Supplementary Fig. 2). STL treatment also did not affect the level of short-lived MCL1 protein that was very sensitive to general translation inhibition by cycloheximide (CHX, Fig. 2c). Surprisingly, STL addition to cells already being treated with CHX did not cause FOXM1 repression. We therefore evaluated the kinetics of FOXM1 protein levels after addition of STL, CHX, or both, and confirmed that only STL alone was causing significant FOXM1-level reduction over time ( Supplementary Fig. 3). FOXM1 level in cells treated simultaneously with STL and CHX showed quick decrease but then remained stable for 24 h. These results indicate that FOXM1 is suppressed by STL at the post-translational stage, most likely through increased protein degradation that may be mediated via short-lived proteins. Since STL treatment partially replicates transcriptomic effects of NPM1 knockdown, we additionally tested if STL could reduce FOXM1 level through NPM suppression [34]. However, treatment with the highest STL concentration did not affect NPM protein levels in C3-luc cells (Fig. 2c), supporting the hypothesis of STL selectivity toward FOXM1. In general, intracellular proteins are degraded via two mechanisms: ubiquitin-proteasome pathway or autophagy. 
Proteasome inhibition by bortezomib or MG132 did not rescue FOXM1 from suppression by STL (Fig. 2d). On the other hand, inhibition of lysosome function by bafilomycin A1 (BafA1) completely prevented STL-dependent reduction of FOXM1 level (Fig. 2e), suggesting that STL facilitates lysosome-mediated destruction of FOXM1 protein. However, BafA1 treatment itself caused prominent cell stress (based on cell morphology, results not shown) and reduced the initial FOXM1 level independently of STL. We therefore performed a more detailed study of STL effects on autophagy and lysosomes. STL relocalizes FOXM1 to the cytoplasm and promotes its autophagic degradation Autophagic lysosomal activity is considered to be relatively low under normal conditions. We hypothesized that prominent lysosome-dependent FOXM1 degradation caused by STL should be associated with autophagy induction. Indeed, treatment of C3luc cells with 0.5 μM or higher concentrations of STL results in upregulation of both autophagy marker protein LC3-II and lysosomal membrane protein LAMP1 (Fig. 3a). The same effect was observed in OVCAR3 and HCT116 cells ( Supplementary Fig. 4), confirming that STL can universally induce autophagy in mammalian cells. Accumulation of LC3-II and LAMP1 was evident in 4 h after treatment start and gradually progressed with time; the addition of CHX in combination with STL prevented LC3-II increase over time, while LAMP1 levels were slowly decreasing ( Supplementary Fig. 3a). These results strongly support the idea of autophagy-dependent FOXM1 degradation because CHX is known to inhibit autophagosome maturation [39][40][41]. To perform a more specific test, we used chloroquine (CQ) that prevents autophagosome-lysosome fusion [42], inhibits autophagy without affecting lysosome-mediated degradation, and has less impact on initial FOXM1 levels in C3-luc cells. Addition of CQ completely rescued FOXM1 protein levels from suppression by STL (Fig. 3b) and thereby determined that autophagosome maturation into autolysosomes is essential for STL effect on FOXM1. Time-course evaluation of autophagic flux in C3-luc cells demonstrated that STL-dependent LC3-II accumulation occurs much faster than in the case of autophagosome degradation arrest by CQ, thereby confirming autophagy stimulation by STL ( Supplementary Fig. 5). While autophagosomes are cytoplasmic structures, FOXM1 is predominantly located in the nucleus [29]. It may undergo autophagic degradation either via nonspecific macronucleophagy [43] or after translocation to the cytoplasm. Arrest of nuclear protein export with leptomycin B (LMB) caused partial reversal of STL-induced FOXM1 suppression but did not prevent autophagy activation (Fig. 3b), suggesting that FOXM1 only becomes available to autophagosomes after its relocation to the cytoplasm. It also indicates that STL induces autophagy independently of FOXM1 suppression. To investigate this hypothesis, we stained drug-treated C3-luc cells with vital lysosome-specific dye LisoView (Fig. 4a, Supplementary Fig. 6). Confocal microscopy demonstrated that STL induced prominent formation of acidic intracellular vesicles that are assumed to be lysosomes or autolysosomes. As expected, BafA1 completely prevented the formation of these vesicles, confirming their lysosomal nature; however, addition of CQ to STL also returned lysosomal staining to baseline level and drastically reduced the number of vesicles, indicating that they originate through autophagosome maturation. 
Moreover, LMB did not prevent autolysosome formation, additionally proving that FOXM1 relocalization to the cytoplasm is crucial for its STL-dependent degradation. We further studied the intracellular localization of FOXM1 and of the autophagosome marker LC3 using confocal microscopy (Fig. 4b, Supplementary Fig. 7). Doxycycline-stimulated C3-luc cells demonstrate clear nuclear localization of EGFP-FOXM1 protein and low background levels of LC3 in the cytoplasm. As expected, STL treatment caused a drastic reduction of the nuclear EGFP-FOXM1 signal and prominent LC3 staining in the cytoplasm, indicating autophagy induction; the pattern of LC3 staining suggests that it delineates the cytoplasmic vesicles. Cells treated with STL + CQ displayed a high fraction of cells with cytoplasmic localization of EGFP-FOXM1, where it colocalized with LC3-positive puncta that commonly correspond to autophagosomes. Addition of LMB to STL retained EGFP-FOXM1 in the nucleus and prevented its colocalization with cytoplasmic LC3-positive vesicles (see Table 2 for colocalization test results).

Fig. 2 STL inhibits FOXM1 expression on the protein level via an autophagy-dependent mechanism. a C3-luc cells stimulated with doxycycline to induce expression of EGFP-FOXM1 fusion protein were treated with increasing concentrations of STL for 24 h. Total protein samples were analyzed via immunoblotting for FOXM1 and GFP expression; β-actin was used as an internal loading control. b Doxycycline-stimulated C3-luc cells were treated with 50 μM STL for 6 or 24 h and 10 μg/mL ActD for 6 h. Total RNA samples were analyzed for FOXM1, GFP, and MCL1 transcript levels via RT-qPCR; 18S rRNA was used as a reference transcript. Data are presented as means ± S.D. and individual datapoints, N = 4, *exact p = 0.02857 (Mann-Whitney U test, two-tailed). c C3-luc cells were treated with indicated concentrations of doxycycline, STL, or CHX for 24 h. Total protein samples were analyzed via immunoblotting for FOXM1, NPM, and MCL1 expression; β-actin was used as an internal loading control. d Doxycycline-stimulated C3-luc cells were treated with STL for 24 h in the presence of bortezomib or MG132. Total protein samples were analyzed via immunoblotting for FOXM1 expression; β-actin was used as an internal loading control. e Doxycycline-stimulated C3-luc cells were treated with STL for 24 h in the presence of bafilomycin A1. Total protein samples were analyzed via immunoblotting for FOXM1 expression; β-actin was used as an internal loading control.

Taken together, these results imply that STL-dependent FOXM1 suppression is a two-step process: STL induces cytoplasmic autophagosome accumulation and then stimulates FOXM1 protein export to the cytoplasm, where it is transported to autophagosomes and subsequently destroyed. In the presence of CQ, FOXM1 is still transported to the cytoplasm but accumulates in immature autophagosomes instead of being degraded, resulting in the same FOXM1 levels detected in total cell protein samples (Figs. 3b, 4b). Nevertheless, FOXM1 sequestration in the cytoplasm should still functionally inactivate it. In agreement with this statement, we observed only a slight rescue of suppressed FOXM1 target genes in cells treated with the CQ + STL combination, while LMB was able to return their expression back to baseline levels due to FOXM1 retention in the nuclei (Supplementary Fig. 8).

STL-induced FOXM1 suppression sensitizes cancer cells to chemotherapeutic agents

Sensitization to antitumor drugs is the most well-characterized effect of FOXM1 downregulation in cancer cells.
We therefore assumed that STL treatment should reduce chemoresistance as well and estimated the cytotoxic effects of STL alone or in combination with other agents. Considering that FOXM1 provides resistance to a broad spectrum of drugs, we used several agents with different mechanisms of action: direct DNA damage (carboplatin), DNA synthesis inhibition (5-FU), or cell division disruption (paclitaxel, docetaxel). Each drug was tested in model cell lines belonging to cancer type for which treatment with this particular drug was approved by FDA. Lung cancer (H1703, A549) and ovarian cancer (PEO1, OVCAR3) cells treated with sublethal concentrations of carboplatin display a prominent increase in FOXM1 protein levels. Addition of STL in combination with carboplatin efficiently prevented FOXM1 activation (Fig. 5a), resulting in decreased (PEO1, H1703 and A549) or the same (OVCAR3) FOXM1 protein levels in comparison with the corresponding control samples. STL alone did not exert prominent cytotoxic effects, but cells treated with carboplatin +STL combination displayed a significant increase in cleaved caspase-3 level when compared with samples treated with carboplatin alone, indicating a strong synergistic pro-apoptotic effect between two drugs. To test if STL can cause chemosensitization through other mechanisms besides FOXM1 suppression, we used PEO1 cells with stable shRNA-mediated FOXM1 knockdown (Fig. 5b). As expected, FOXM1-deficient PEO1 cells display increased sensitivity to carboplatin; however, STL caused no further increase in carboplatin cytotoxic effects in PEO1-shFOXM1 cells, suggesting that FOXM1 is the main mediator of STL effects on cell chemoresistance. At the same time, induction of autophagy by STL was still prominent in FOXM1-deficient cells, indicating that autophagy on its own does not significantly affect chemoresistance. While platinum-based agents damage DNA directly, 5-FU treatment results in indirect DNA damage due to inhibition of thymidine synthesis [44]. Similar to carboplatin effect, treatment of colorectal cancer cells with 5-FU resulted in FOXM1 upregulation without prominent cell death induction. Combination with STL efficiently prevents 5-FU-induced FOXM1 upregulation and drastically enhances the cytotoxic effects of 5-FU treatment (Fig. 5c). Thus, FOXM1 inhibition by STL can sensitize cancer cells to treatments based on both direct and indirect DNA damage induction. Taxanes are another class of anticancer drugs that exhibit decreased efficacy against tumor cells with high FOXM1 levels. However, unlike platinum-based compounds or 5-FU, taxanes do not induce prominent DNA damage, affecting mitotic spindle microtubule dynamics instead to disrupt cell division [45]. Accordingly, we did not observe uniform FOXM1 upregulation in prostate cancer cells treated with docetaxel or NSCLC cells treated with paclitaxel (Fig. 5d). Nevertheless, FOXM1 suppression by STL synergized strongly with both docetaxel and paclitaxel, enhancing apoptotic response. This effect indicates that the role of FOXM1 as a chemoresistance inducer is not limited to DNA damage response and can be much more universal. While caspase-3 cleavage is a common indication of apoptosis induction, cell death should be verified using other methods for better reliability. We therefore additionally verified the cytotoxic effects of STL in combination with other agents using either flow cytometry-based Annexin V assay (Supplementary Fig. 
9a-b) or Trypan Blue exclusion assay with direct counting ( Supplementary Fig. 9c). The results of these experiments were in strong agreement with trends observed using immunoblotting approach. Subtoxic doses of STL (10 μM) also exerted a clear antiproliferative effect that was associated with moderate enrichment of cells in G1 cell cycle phase ( Supplementary Fig. 10). RNA-seq data suggest STL selectivity toward FOXM1 regulatory pathway STL is a novel agent, and its biological effects and targets are not fully characterized yet. We therefore attempted to investigate if it affects any other regulatory pathways besides FOXM1 network. To achieve that, we analyzed gene expression patterns in HCT116 cells and doxycycline-stimulated C3-luc cells treated with STL via full transcriptome RNA-seq ( Fig. 6; full processed data on gene expression changes are available in Supplementary Tables 2 and 3). Out of 16 275 protein-coding genes evaluated (Fig. 6a), we identified a set of 1341 genes displaying significant (2-fold or more) DE in both experimental models, with 577 genes being upregulated and 687 genes being repressed in both C3-luc and HCT116 cells (Fig. 6b). We therefore considered the genes displaying codirectional expression changes in both cell lines as the most reliable STL responders, combined them into "STL signature" gene list (1264 genes in total), and subjected to further pathway analysis. To predict possible regulators and pathways that may be responsible for STL-induced gene expression changes, we performed an integrative analysis of "STL signature" datasets using Ingenuity Pathway Analysis software package. Ingenuity algorithm predicted inhibition of FOXM1 and activation of p53 and p21 Waf1/Cip1 as central regulatory changes responsible for STL effects on gene expression (Fig. 6c, Supplementary Fig. 11a). Other elements with multiple predicted interactions include AREG and ERBB2, both inhibited, with their downstream effects being partially mediated through FOXM1, p21 Waf1/Cip1 , and p53. These changes in regulation networks are predicted to inhibit cell proliferation at mitosis stage and probably during S phase, while also promoting cell senescence. This prediction is in line with the observed antiproliferative effect of STL and enrichment of cells in G1 phase (Supplementary Fig. 10). All predicted signaling changes and outcome effects strongly suggest that STL treatment should exert clear antitumor effects. AURKB, and MYC pathways represent the activity of direct FOXM1 downstream effectors, while ATR and BARD1 pathways are responsible for DNA damage response. Moreover, ATR and E2F activity can be modulated by FOXM1 (see "Discussion"), implying that all pathways responsible for STL-induced gene expression changes converge to FOXM1. Taken together, transcriptomic data indicate very high probability of FOXM1 being the main mediator of the effects exerted by STL upon cell gene expression program. DISCUSSION In this paper, we identified a novel chemical compound STL that suppresses FOXM1 activity in a variety of human cancer cell lines (Fig. 1). This drug reduces FOXM1 protein level via a two-step mechanism: it (I) relocates nuclear FOXM1 to the cytoplasm and (II) induces autophagy that facilitates degradation of FOXM1 protein (Figs. 3-4). The exact mechanisms of FOXM1 transport to the cytoplasm and autophagy induction by STL are currently under investigation, but conceptually this is a novel mechanism of FOXM1 suppression. 
Previously, we identified several types of FOXM1 inhibitors: thiazole antibiotics/proteasome inhibitors [22,24,25] and honokiol [23]. Proteasome inhibitors act through stabilization of the HSP70 protein, which interacts with FOXM1 and prevents its binding to gene promoters [47]. Honokiol directly binds to the FOXM1 protein and inhibits its transactivation potential [23]. In both cases FOXM1 expression is subsequently diminished due to disruption of a positive autoregulation loop [38,47]. Another group of FOXM1 inhibitors utilizes a different mechanism of action, directly inhibiting the DNA-binding capability of FOXM1. Recently, a compound named FDI-6 was identified in a high-throughput screen; it interacts directly with the FOXM1 protein, inhibits FOXM1 binding to genomic targets, and therefore suppresses FOXM1 target expression [28]. The authors also provided evidence that FDI-6 selectively targets FOXM1 but not other FOX family proteins. However, FDI-6 does not affect the FOXM1 protein level itself [28,31].

Efficient FOXM1 depletion by STL relies not only on functional inactivation of FOXM1 by its relocalization to the cytoplasm but also on its subsequent autophagic destruction (Figs. 3-4). Recently, the possibility of lysosome-mediated FOXM1 degradation was reported in colon cancer cells: treatment with DFS lignan resulted in FOXM1 suppression, while BafA1 and CQ were able to prevent this effect [30]. However, that study provides no information about autophagic activity in DFS-treated cells or the mechanism that makes FOXM1 available to autophagosomes. Without these important details, it is impossible to determine whether the DFS lignan effect is selective for FOXM1 or executed through general nucleophagy. Our research demonstrates for the first time that FOXM1 translocation to the cytoplasm is crucial for its autophagic degradation, suggesting that this process can be selective. STL-dependent FOXM1 nuclear export and autophagy stimulation appear to be independent of each other: chloroquine treatment does not prevent FOXM1 translocation to the cytoplasm, while FOXM1 retention in the nucleus does not affect autophagy progression (Fig. 4b). Also, lower concentrations of STL efficiently induce autophagy but do not yet cause a prominent reduction of FOXM1 levels (Fig. 3a). We therefore conclude that autophagy induction on its own is not sufficient to promote FOXM1 relocalization, so the latter must be regulated via a separate mechanism. This assumption further supports the idea of STL selectively targeting FOXM1.

Sensitization of resistant cells to chemotherapy, especially to DNA-damaging drugs, is currently the most studied effect of FOXM1 depletion in cancer [8-10,12]. We expected that STL, being a FOXM1 inhibitor, should decrease drug resistance in human cancer cells when combined with standard chemotherapy drugs. Indeed, we observed clear synergy between STL and three categories of widely used anticancer drugs (carboplatin, 5-FU, and taxanes), each exploiting a different mechanism of anticancer action (Fig. 5). Combining different synergistic drugs is an efficient approach in modern cancer treatment, since it not only improves eradication of cancer cells but also allows chemotherapeutic agents to be used at lower doses, thereby reducing undesired adverse effects. Therefore, STL or its derivatives may prove useful in clinical practice as support drugs that improve the efficacy of existing treatment strategies.
Given that the exact details of STL interactions are unknown, there was a possibility that STL actually regulates chemoresistance through other, FOXM1-independent mechanisms. We have demonstrated that FOXM1-deficient cells could not be further sensitized by STL (Fig. 5b), confirming that chemoresistance inhibition is conveyed specifically through FOXM1. However, chemoresistance-independent secondary effects of STL might still be impactful and need consideration. Transcriptome-based analysis predicted that, besides FOXM1 inhibition, STL may also activate p53- and p21Waf1/Cip1-dependent signaling networks (Fig. 6c, Supplementary Fig. 11a). These three proteins are very closely related to each other, forming a single regulatory core. FOXM1 inactivation was reported earlier to increase p53 and p21Waf1/Cip1 levels in cancer and nonmalignant cells [48,49]. Upregulation of p21Waf1/Cip1 is facilitated through loss of the SCF ubiquitin ligase complex components SKP2 and CKS1B [48]. Alternatively, p53 typically acts as an upstream FOXM1 suppressor [50], but its activation in response to FOXM1 knockdown indicates more complex relations between the two molecules [49]. It should also be considered that, even if STL stimulates p53 and p21Waf1/Cip1 activity independently of FOXM1, these changes would still result in strong antitumor effects on top of the FOXM1-mediated actions. Therefore, such secondary effects should not be regarded as significant disadvantages of STL. Since Ingenuity predictions are rather speculative, we verified them using GSEA analysis and confirmed the inhibition of FOXM1-related gene signatures but not the p53-related effects (Fig. 6d, Supplementary Fig. 11b). Additionally, GSEA revealed suppression of the ATR pathway involved in DNA damage repair [51]. FOXM1 indirectly promotes ATR-dependent signaling through NBS1 [52,53]. We suppose that impairment of ATR-dependent signaling may contribute to the increased vulnerability of FOXM1-deficient cells to chemotherapy, especially to DNA-damaging drugs. Additionally, crucial regulators of the E2F pathway, E2F1 and E2F2, can be suppressed upon FOXM1 inhibition [54]. Based on these results, we conclude that most gene expression changes caused by STL are mediated through FOXM1.

Table 2. Colocalization of EGFP-FOXM1 with cytoplasmic LC3-positive vesicles or DAPI-stained nuclear DNA (see Fig. 4b and Supplementary Fig. 7).

Cell culture

PEO1 cells were cultured in RPMI-1640 medium with 2 mM sodium pyruvate (Thermo Fisher Scientific). OVCAR8, SW480, HCT116, FET, and C3-luc cells were cultured in DMEM medium with 4.5 g/L glucose, 4 mM L-glutamine, and 1 mM sodium pyruvate (Thermo Fisher Scientific). For all cell lines, the growth media were supplemented with 10% fetal bovine serum (Thermo Fisher Scientific), 100 U/mL penicillin (Lonza, Basel, Switzerland), and 100 μg/mL streptomycin (Lonza). All cell lines were confirmed to be mycoplasma-negative by routine testing using PCR detection and DAPI staining with subsequent evaluation by fluorescence microscopy.

Fig. 6 STL-induced transcriptome changes suggest a strong antitumor effect mediated through FOXM1-p21-p53 regulatory networks. a Heatmap representation of the STL effect on global gene expression changes. HCT116 and doxycycline-stimulated C3-luc cells were treated with 50 μM STL for 24 h. RNA samples obtained from treated cells were subjected to RNA-seq; non-protein-coding genes were excluded from the analysis. Data represent the average of two biological replicates for each condition.
b Venn diagrams representing the numbers of DE genes (all, up- or downregulated) in HCT116 and C3-luc cells. Only genes with significant expression changes (2-fold or higher change, FDR < 0.1) were analyzed. c Ingenuity-predicted regulatory network facilitating STL treatment effects in C3-luc cells based on "STL signature" changes. d Results of GSEA performed for "STL signature" genes in C3-luc cells.

In silico prediction of small-molecule inhibitors of the FOXM1 regulatory network

The candidate small-molecule inhibitors of the FOXM1-regulated signaling network were identified by in silico analysis of data from the NIH Library of Integrated Network-Based Cellular Signatures (LINCS) L1000 dataset [55]. LINCS L1000 Phase I and Phase II datasets were used (GEO accession IDs: GSE70138 and GSE92742) [56,57]. DE gene signatures of shRNA-mediated gene knockdowns (more than 3000 individual genes) across the most common cell line models (A549, MCF7, VCAP, HA1E, A375, HCC515, and HT29) were compared with transcriptional profiles displayed by the same cell lines upon treatment with a wide range of bioactive compounds. A random forest classification model was trained using data for Food and Drug Administration (FDA)-approved drugs and then used to identify compounds that caused transcriptomic perturbations similar to the chosen genetic disruptions. For each compound, the probability of disrupting the signaling network associated with the protein of interest was evaluated in terms of several attributes, including direct correlation with the transcriptomic signatures of target-protein knockdown and indirect correlations with knockdown signatures of other proteins (i.e., a "guilt by association" approach suggesting that chemical inhibition acts broadly within a signaling subnetwork), in a subset of four or more cell lines (see [32,33] for a detailed explanation). In the context of protein-signaling networks, disruption of a physical target due to drug treatment should result in gene expression profiles similar to the signatures associated with inhibition of its downstream targets or upstream regulators in the same network. A minimal similarity-scoring sketch is given after the drug-treatment subsection below.

Chemical compounds and drugs

Drug treatment of cultured cells

Cells were harvested by trypsinization and seeded into tissue culture dishes to achieve 50% confluency. Cell treatment was performed the next day by aspirating the growth media from the cells and replacing it with growth medium containing the selected concentrations of drugs. Control samples were treated with vehicle only; the vehicle concentration did not exceed 0.3%. In the endpoint experiments involving treatment with CHX, bortezomib, MG132, BafA1, CQ, or LMB in combination with STL, cells were pretreated with the aforementioned compounds for 1 h before administration of the full-treatment drug mixture. In the time-course experiments involving treatment with CHX or CQ in combination with STL, the aforementioned compounds were administered to the cells simultaneously with STL. C3-luc cells with doxycycline-induced expression of EGFP-FOXM1 protein were pretreated with 1 μg/mL doxycycline overnight, and all the following treatments were performed in the presence of 1 μg/mL doxycycline. After the desired periods of time, the cells were immediately harvested, washed once with cold phosphate-buffered saline (PBS), pelleted by centrifugation at 200 g for 5 min, and protein or RNA was purified as described below.
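As a hedged illustration of the "guilt by association" comparison described in the in silico prediction subsection above, the sketch below reduces the screen to its simplest ingredient, a rank correlation between a compound's expression signature and a FOXM1-knockdown signature in the same cell line. The vector names are hypothetical, and the actual screen additionally used a random forest classifier trained on FDA-approved drugs, so this is a conceptual sketch rather than the original workflow.

# kd_sig, drug_sig: hypothetical named numeric vectors of differential-expression
# scores (one value per landmark gene) for a FOXM1 knockdown and a compound
# treatment in the same cell line, e.g. extracted from the LINCS L1000 matrices
common <- intersect(names(kd_sig), names(drug_sig))
similarity <- cor(kd_sig[common], drug_sig[common], method = "spearman")

# A consistently positive correlation across four or more cell lines would flag
# the compound as mimicking FOXM1 pathway disruption
similarity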
Stable knockdown of FOXM1 expression in PEO1 cells

PEO1 cells were harvested by trypsinization and seeded into 12-well tissue culture plates to achieve 40% confluency. The next day, cells were incubated with MISSION lentiviral particles carrying the pLKO.1 vector encoding a control nontarget shRNA or an shRNA against human FOXM1 transcripts (MilliporeSigma) in the presence of 10 μg/mL polybrene for 24 h. Infected cells were selected by cultivation in the presence of 1.5 μg/mL puromycin for seven days and then cultured as described above.

Protein immunoblotting

Total protein samples were purified from cells using RIPA lysis buffer (MilliporeSigma) supplemented with Halt protease inhibitor cocktail (Thermo Fisher Scientific), 2 mM sodium orthovanadate (New England Biolabs Inc., Ipswich, MA, USA), and 5 mM sodium fluoride (MilliporeSigma) according to the manufacturer's protocol. Protein concentrations were estimated using the Bio-Rad Protein Assay (Bio-Rad, Hercules, CA, USA). About 15-30 μg of total protein were mixed with Laemmli sample buffer (Bio-Rad) with β-mercaptoethanol (Bio-Rad, final concentration 2.5%), heated at 98°C for 10 min, and separated in hand-cast 12% SDS-polyacrylamide gels or 12% Mini-PROTEAN TGX precast gels (Bio-Rad). After the electrophoretic separation, the proteins were transferred to an Immobilon-P SQ PVDF membrane (MilliporeSigma). Membranes were washed with Tris-buffered saline (TBS, Alfa Aesar) for 10 min, blocked with 5% bovine serum albumin (BSA, MilliporeSigma) in TBS with 0.1% Tween-20 (TBST, Thermo Fisher Scientific), and probed with primary antibodies diluted in 5% BSA in TBST overnight at 4°C (see Supplementary Table 4 for the list of antibodies used). Membranes were washed with TBST three times, 10 min each, and probed with HRP-conjugated secondary antibodies diluted in 5% skim milk (Research Products International, Mt Prospect, IL, USA) in TBST for 1 h at room temperature. Membranes were washed with TBST three times, 10 min each; protein bands were developed using SuperSignal West Pico PLUS substrate (Thermo Fisher Scientific) and detected using the ChemiDoc MP System (Bio-Rad). For each immunoblot image in the paper, the molecular weights of protein markers are indicated on the right.

RT-qPCR analysis of gene expression

Total RNA was isolated from cells using TRIzol reagent (Thermo Fisher Scientific) and the PureLink RNA Mini Kit (Thermo Fisher Scientific) with additional on-column DNase treatment according to the manufacturer's instructions. RNA samples were quantified using a NanoDrop One spectrophotometer (Thermo Fisher Scientific). Reverse transcription was performed using the High-Capacity cDNA Reverse Transcription Kit with RNase Inhibitor (Thermo Fisher Scientific); 500 ng of total RNA was used per reaction. Quantitative PCR analysis of gene expression levels was performed in a ViiA 7 Real-Time PCR System (Thermo Fisher Scientific) using PowerUp SYBR Green Master Mix (Thermo Fisher Scientific) and the primers listed in Supplementary Table 5. Amplification was performed according to the manufacturer's Fast Mode recommendations for 35 cycles; reaction specificity was checked by melt-curve analysis and agarose electrophoresis. Reaction efficiency was evaluated using the standard-curve approach and was within 95-105% for all primers. Transcript abundance was estimated using Pfaffl's method [58]; 18S rRNA and TBP transcripts were used as references for normalization.
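A minimal R sketch of the Pfaffl calculation referred to above is given below. The Ct values and efficiencies are hypothetical, and when two reference transcripts (18S rRNA and TBP) are used, their terms can be combined (for example as a geometric mean), which is one common convention rather than a detail stated in the original protocol.

# Pfaffl ratio = E_target^dCt_target / E_ref^dCt_ref, where
# dCt = Ct(control) - Ct(treated) and E is the amplification factor per cycle
# (E = 2 corresponds to 100% efficiency)
pfaffl <- function(e_target, ct_t_ctrl, ct_t_trt, e_ref, ct_r_ctrl, ct_r_trt) {
  (e_target ^ (ct_t_ctrl - ct_t_trt)) / (e_ref ^ (ct_r_ctrl - ct_r_trt))
}

# Hypothetical example: a target transcript quantified against 18S rRNA
pfaffl(e_target = 1.98, ct_t_ctrl = 24.1, ct_t_trt = 26.0,
       e_ref    = 2.00, ct_r_ctrl = 11.3, ct_r_trt = 11.2)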
Vital fluorescent staining of lysosomes

Cells were seeded into 35-mm cell culture dishes with glass bottoms (MatTek Life Sciences, 200 Homer Ave, Ashland, MA, USA) to achieve 30-40% confluency. The next day, the cells were treated with drugs as described above. After 12 h of treatment, LysoView 540 dye (Biotium, Inc., Fremont, CA, USA) was added to the treatment media to a 1× working concentration and cells were incubated at 37°C in a CO2 incubator for 2 h. Cell imaging was performed with an LSM 710 confocal microscope (Zeiss, Oberkochen, Germany) using an excitation wavelength of 561 nm for LysoView 540 fluorescence or differential interference contrast for cell morphology. Digital images were processed and exported using the ZEN 3.2 Blue Edition software package (Zeiss).

Immunofluorescent staining of cell proteins

Cells were seeded into Lab-Tek II Chamber Slides (Thermo Fisher Scientific) to achieve 30-40% confluency. The next day, the cells were treated with drugs as described above. After treatment, cells were briefly washed with cold PBS, fixed with 100% methanol at −20°C for 20 min, and washed with cold PBS three times. Cells were blocked with 2% BSA in PBS with 0.1% Tween-20 (PBST, Thermo Fisher Scientific) for 1 h at room temperature and stained with primary antibodies against LC3A/B (Cell Signaling Technology, D3U4C, 1:300) diluted in 2% BSA in PBST overnight at 4°C. Cells were washed three times with cold PBST and stained with secondary anti-rabbit antibodies conjugated with Alexa Fluor 594 (Jackson ImmunoResearch, 711-585-152, 1:1000) diluted in 2% BSA in PBST for 1 h at room temperature in the dark. After immunostaining, the cells were counterstained with 0.5 μg/mL DAPI (R&D Systems, Minneapolis, MN, USA) diluted in PBST for 10 min at room temperature and washed three times with PBST. Chambers were removed from the slides and coverslips were mounted on the slides using ProLong Diamond Antifade mountant (Thermo Fisher Scientific). Slides with mounted coverslips were kept at room temperature in the dark overnight and then stored at 4°C. Cell imaging was performed with an LSM 710 confocal microscope (Zeiss, Oberkochen, Germany) using excitation wavelengths of 405 nm for DAPI, 488 nm for EGFP, and 561 nm for Alexa Fluor 594 detection. Images were taken by an operator blinded to treatment groups and not instructed to focus on specific features. Digital images were processed and exported using the ZEN 3.2 Blue Edition software package (Zeiss).

Protein colocalization analysis of immunofluorescent microscopic images

Digital images obtained via confocal microscopy (24-bit TIFF, RGB color) were split into red, green, and blue channels using the ImageJ software package [59]. The images for each channel were processed using the JACoP plugin [60]; signal threshold values were optimized for the most efficient separation of signal from background and kept constant for all analyzed images. Manders' overlap coefficients were calculated, where applicable, for the fractions of green pixels (EGFP-FOXM1) colocalized with red pixels (LC3 staining, mostly cytoplasmic) or with blue pixels (DAPI staining, nuclear). Twelve individual cells taken from different fields of view were analyzed for each treatment condition.

Annexin V-based detection of apoptotic cells

Cells were seeded into 60-mm cell culture dishes (Thermo Fisher Scientific) to achieve 30-40% confluency. The next day, the cells were treated with drugs as described above.
After treatment, cells were harvested by mild trypsinization, washed twice with ice-cold PBS, and 500 000 cells were resuspended in 100 μL of Annexin V Binding Buffer (BD Biosciences, San Jose, CA, USA). Cells were stained by incubation with 5 μL of APC-conjugated Annexin V recombinant protein (Thermo Fisher Scientific) for 15 min in the dark, pelleted by centrifugation at 200 g for 5 min, and resuspended in 300 μL of Annexin V Binding Buffer containing 0.1 μg/mL DAPI (R&D Systems). Samples were analyzed using a CytoFLEX flow cytometer and CytExpert software (Beckman Coulter, Brea, CA, USA).

Trypan Blue exclusion cell viability assay

Cells were seeded into 12-well cell culture plates (Thermo Fisher Scientific) at 50 000 cells/well. The next day, the cells were treated with drugs as described above for 72 h. After treatment, cells were harvested by mild trypsinization, washed twice with ice-cold PBS, and resuspended in 200 μL of Hanks' Balanced Salt Solution (HBSS) without Ca2+ and Mg2+ (Lonza). The numbers of viable and dead cells were assessed by direct counting using a hemocytometer in the presence of 0.4% Trypan Blue.

Cell cycle assay

Cells were seeded into 60-mm cell culture dishes (Thermo Fisher Scientific) to achieve 30-40% confluency. The next day, the cells were treated with drugs as described above. After treatment, cells were harvested by mild trypsinization, resuspended in 300 μL of ice-cold PBS, and fixed by the addition of 0.7 mL of ice-cold 70% ethanol in a dropwise manner with constant mixing. After addition of ethanol, samples were stored at −70°C overnight. Fixed cell samples were washed with ice-cold ethanol twice and stained with 1 µg/mL DAPI (R&D Systems). Stained samples were analyzed using a CytoFLEX flow cytometer and CytExpert software (Beckman Coulter); at least 25 000 qualifying events were detected in each evaluated sample.

Full-transcriptome RNA-seq

RNA samples were analyzed for integrity using an Agilent 4200 TapeStation (Agilent Technologies, Santa Clara, CA, USA). The levels of remaining DNA were checked using a Qubit fluorometer (Thermo Fisher Scientific); DNA amounts did not exceed 10% of the total amount of nucleic acid. Sequencing libraries for Illumina sequencing were prepared in one batch in a 96-well plate using the Stranded CORALL total RNA-seq library prep kit with the RiboCop HMR rRNA Depletion Kit (Lexogen, Vienna, Austria). In brief, 260-660 nanograms of total RNA were used for the first rRNA depletion step, followed by library generation initiated with random oligonucleotide primer hybridization and reverse transcription. No prior RNA fragmentation was done, as the insert size was determined by a proprietary size-restricting method. Next, the 3′ ends of first-strand cDNA fragments were ligated with a linker containing Illumina-compatible P5 sequences and unique molecular identifiers. During the following steps of second-strand cDNA synthesis and dual-strand cDNA amplification, i7 and i5 indices as well as the complete adapter sequences required for cluster generation were added. The number of PCR amplification cycles was 12, as determined by qPCR using a small preamplification library aliquot for each individual sample. The final amplified libraries were purified and quantified, and average fragment sizes were confirmed to be 330 bp by gel electrophoresis using the Agilent 4200 TapeStation (Agilent Technologies).
The concentration of the final library pool was confirmed by qPCR, and the pool was then subjected to test sequencing in order to check sequencing efficiencies and adjust the proportions of individual libraries accordingly. Sequencing was carried out on a NovaSeq 6000 S4 flow cell (Illumina, San Diego, CA, USA), with approximately 30 M 2 × 150-bp clusters per sample.

Bioinformatic analysis of RNA-seq data

Analysis of raw RNA-seq data was performed by the Research Informatics Core at the University of Illinois at Chicago. Raw reads were aligned to the human hg38 reference genome in a splice-aware manner using the STAR aligner [61]. ENSEMBL gene and transcript annotations, including noncoding RNAs, were used. Expression levels of features, i.e., genes and noncoding RNAs, were quantified using FeatureCounts as raw read counts [62]. DE statistics (fold change and p-value) were computed using edgeR on the raw expression counts obtained from quantification [63,64]. Raw expression counts were normalized within edgeR using TMM normalization. Nominal p-values were adjusted for multiple testing using the false-discovery rate (FDR) correction of Benjamini and Hochberg [65]. Significant genes were determined based on fold changes lower than 0.5 or higher than 2.0 and an FDR threshold of 10% (q-value < 0.1) in the multigroup comparison. Processed data on gene expression levels are provided in Supplementary Tables 2 and 3.

Regulatory pathway analysis was performed in the Ingenuity Pathway Analysis software package (QIAGEN, Hilden, Germany) and GSEA software (University of California San Diego and Broad Institute, USA [66,67]), using the "STL signature" gene list (see "Results"). Input data contained log2 fold-change values, p-values, and FDR q-values for each gene. For Ingenuity analysis, data were analyzed using direct and indirect interactions reported for human samples. Correlations derived from machine-based learning were not considered. For GSEA analysis, the GSEAPreranked algorithm was used to analyze the data using the PID collection of canonical pathway gene signatures [46], with the number of permutations set to 1000. Pathways with FDR < 0.05 were considered significantly enriched.

Statistical analysis

At least three independent biological replicates were used for all experiments describing cell treatment with drugs, excluding RNA-seq, where two biological replicates were used, and the confirmatory Annexin V assay and cell cycle assay, where only one experiment was performed. RT-qPCR experiments were performed with two technical replicates for each biological replicate. For immunoblot experiments, the images shown in the paper represent results that were consistent across several independent experiments. The statistical tests used in each experiment are described in the corresponding figure and table legends. Statistical significance was accepted at p < 0.05. Statistical analysis was performed in OriginPro 2016 software (OriginLab Corporation, Northampton, MA, USA). Plots were generated using GraphPad Prism 6 software (GraphPad Software, San Diego, CA, USA). Clustering of RNA sequencing data and heatmap plot generation were performed in Morpheus software (https://software.broadinstitute.org/morpheus, Broad Institute, USA) using Euclidean metrics and complete linkage settings for clustering.

DATA AVAILABILITY

LINCS L1000 Phase I and Phase II datasets used for chemical compound screening are available from Gene Expression Omnibus (accession IDs: GSE70138 and GSE92742).
Raw RNA-seq data on gene expression levels in C3-luc and HCT116 cells treated with STL are available from Gene Expression Omnibus (accession ID GSE162826). Processed RNA-seq data on gene expression levels in C3-luc and HCT116 cells treated with STL are included in this paper as Supplementary Tables 2 and 3.
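For orientation, a minimal edgeR sketch of the differential-expression step described in the bioinformatic analysis subsection is shown below. It assumes a raw count matrix (counts, genes in rows and samples in columns, e.g. from FeatureCounts) and a two-level group factor; the exact test used in the original analysis is not stated, so the quasi-likelihood workflow shown here is only one standard choice within edgeR, not the authors' script.

library(edgeR)

# counts: raw read-count matrix (genes x samples); group: factor, e.g. vehicle vs. STL
y <- DGEList(counts = counts, group = group)
y <- calcNormFactors(y, method = "TMM")          # TMM normalization
design <- model.matrix(~ group)
y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
res <- glmQLFTest(fit, coef = 2)
tab <- topTags(res, n = Inf)$table               # logFC, PValue, FDR (Benjamini-Hochberg)

# 2-fold change and 10% FDR thresholds, as in the Methods
sig <- subset(tab, abs(logFC) >= 1 & FDR < 0.1)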
2021-07-16T06:16:31.471Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "cc8392f368662f275b458991046807831a0d2bb6", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41419-021-03978-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2f5f96eec6c00d8d0fe058dffdd9d27a4892352c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
87936225
pes2o/s2orc
v3-fos-license
Dry-fog Aeroponics Affects the Root Growth of Leaf Lettuce (Lactuca sativa L. cv. Greenspan) by Changing the Flow Rate of Spray Fertigation

The growth characteristics and physiological activities of leaves and roots of lettuce cultivated in dry-fog aeroponics with different flow rates of nutrient dry-fog (FL, 1.0 m s−1; NF, 0.1 m s−1) were investigated under a controlled environment for two weeks and compared to lettuce cultivated using the deep-flow technique (DFT). The growth of leaves in FL and DFT did not differ and was significantly higher than that in NF. The amount of dry-fog particles adhering to objects was higher in FL than in NF, and accordingly root growth in NF was significantly higher than in FL. The respiration rate of roots was significantly higher in dry-fog aeroponics, but the dehydrogenase activity in the roots was significantly higher in DFT. There were no differences in the contents of chlorophyll and total soluble protein in the leaves or in the specific leaf area. Photosynthetic rate and stomatal conductance were higher in dry-fog aeroponics. The contents of nitrate nitrogen, phosphate and potassium ions in the leaves were significantly higher in DFT, but the content of calcium ions was significantly higher in FL. Thus, changing the flow rate of the dry-fog in the rhizosphere can affect the growth and physiological activities of leaves and roots.

INTRODUCTION

Dry-fog aeroponics is a new soil-less hydroponic technique that fills the rhizosphere with an extremely fine fog of atomized liquid fertilizer using a specialized nozzle as a double-fluid atomizer with nutrient solution and compressed air (Hikosaka et al., 2014). In dry-fog aeroponics, the fog in a chamber made of lightweight polystyrene is less than 10 μm in average droplet diameter, so less nutrient solution exists in the hydroponic system than in the general aeroponics systems used for research and commercial cultivation (Biddinger et al., 1998; Farren and Mingo-Castel, 2006; He et al., 2013). Dry-fog nutrients that are not absorbed by the roots condense and fall down the inner wall to accumulate at the bottom of the chamber, and the collected solution is atomized again. Thus, the nutrient solution is circulated within the system, and there is no wastewater outside of the system. Because of these features, dry-fog aeroponics is expected to be an effective technique for protected cultivation to address the future food shortage caused by decreases in both arable land and water resources, and the cultural eutrophication caused by nutrient wastewater. Spray culture using a fog box with nozzles as double-fluid atomizers (Ehara et al., 1966) or with an ultrasonic humidifier, which can atomize a very fine fog, has been used for aeroponic systems, and many bioreactors have been developed to generate nutrient fog (Mohammad et al., 2000; Chun-Zhao et al., 2003), but the advantage of these methods for practical cultivation is not well understood. A key cultural feature of dry-fog aeroponics is the aerobic environment in the rhizosphere. In hydroponics, the dissolved oxygen level in the nutrient solution affects the respiration rate of roots and growth, which increase under aerobic conditions (Changhoo and Takakura, 1994; Yoshida et al., 1997). In dry-fog aeroponics, plant roots are suspended in the foggy nutrient solution and absorb water and nutrients directly from the fog particles.
No part of the roots is soaked in the nutrient solution, and fresh airflow is always supplied to the root surface. Because the flow rate of the dry-fog sprayed by a nozzle can be changed easily using a fan set in the chamber, it is possible to control the density and speed of the particles of nutrients and water adhering to the roots. Growth promotion and increased viability of roots in an aerobic rhizosphere filled with foggy nutrient solution, and an increase in water absorption efficiency through the continuous flow of dry-fog over the surface of the roots, are expected in dry-fog aeroponics. There has been no report indicating a relationship between the flow rate of foggy nutrient solution sprayed into the rhizosphere and plant growth in aeroponics; therefore, we cultivated lettuce plants with the dry-fog aeroponic technique and investigated the effect of the flow rate, changed with a controlled fan set in the rhizosphere, on the growth characteristics and physiological activities of the plants.

Dry-fog aeroponic system

Two dry-fog aeroponics systems (Hikosaka et al., 2014), each consisting of a rhizosphere chamber made of polystyrene (1000 × 660 × 300 mm) and a dry-fog atomizing nozzle (developed and patented by Ikeuchi Co., Ltd., Osaka, Japan), were set up in a temperature-controlled room. Each chamber had 22 holes (20 mm across) on the top for setting plants, and only the roots remained in the chamber. An axial flow fan was installed in one chamber, and the dry-fog flow rate was adjusted to 1.0 m s−1 (FL). The other chamber was not equipped with a fan, and no dry-fog flow occurred in the chamber other than the spraying from the nozzle (NF). The flow rate in the chamber was measured at a depth of 70 mm from the planting hole using an omnidirectional anemometer (testo445; TESTO, Lenzkirch, Germany) (Fig. 1). The nozzle continuously atomized fine foggy nutrients of a commercial liquid fertilizer (OAT Agrio Co., Ltd., Tokyo, Japan; EC 1.2 mS cm−1, pH 6.0, N: 130 ppm, P: 60 ppm, K: 200 ppm, Ca: 115 ppm, Mg: 30 ppm, Fe: 1.4 ppm, Mn: 0.8 ppm, Zn: 0.05 ppm, Cu: 0.02 ppm, B: 0.75 ppm, Mo: 0.02 ppm) that always filled the chamber. As a control experiment, a deep-flow technique (DFT) hydroponic chamber (480 × 380 × 114 mm) was filled with 30 L of the same liquid fertilizer. To confirm the amount of dry-fog particles flowing through the rhizosphere, a slide glass painted with thin silicone oil (1000 cSt) was exposed perpendicularly upstream of the dry-fog for one second. Then, the number and size of particles trapped in the silicone oil layer on the slide glass were observed and recorded with a microscope (Fig. 1, Photos); a worked example of the resulting flux calculation is given after the next subsection.

Plants and culture conditions

Commercial leaf lettuce seeds (Lactuca sativa L. cv. 'Greenspan', Kaneko Seeds Co., Ltd., Tokyo, Japan) were sown on sponge blocks (20 × 20 × 30 mm) and supplied with sufficient pure water. The air temperature and relative humidity in the cultivation room were maintained at 25°C and 60%, respectively. The photosynthetic photon flux density (PPFD) was 120 μmol m−2 s−1, supplied by red (660 ± 10 nm, 96 μmol m−2 s−1) and blue (455 ± 10 nm, 24 μmol m−2 s−1) light-emitting diode (LED) lamps (Legu LED, HRD Co., Ltd., Tottori, Japan) with a 16-h day length. After germination, half-strength liquid fertilizer (EC 0.6 mS cm−1, pH 6.0) was supplied by bottom irrigation for two weeks, and 20 seedlings that grew uniformly were transplanted into each hydroponic system and grown for two weeks.
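The droplet counts reported in Table 1 reduce to simple arithmetic on the slide-glass micrographs described above; the R sketch below uses purely illustrative numbers (the field area and the diameter values are hypothetical) to show how a flux density and a mean droplet diameter could be derived from one exposure.

# diam_um: droplet diameters (micrometres) measured on one micrograph (hypothetical values)
# field_mm2: imaged area of the silicone oil layer; exposure_s: exposure time (1 s)
diam_um    <- c(6.2, 8.4, 5.1, 9.7, 7.3)
field_mm2  <- 0.25
exposure_s <- 1

flux_density  <- length(diam_um) / (field_mm2 * exposure_s)  # droplets mm^-2 s^-1
mean_diameter <- mean(diam_um)                                # micrometres
c(flux = flux_density, mean_diam = mean_diameter)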
Measurements of growth and photosynthesis

Every week after transplanting, six plants were sampled from each hydroponic system, and the fresh weight, the number of leaves and the fresh weight of roots were recorded. In the 2nd week after transplantation, a leaf disk (10 cm2) was sampled from each plant and dried in an oven at 80°C for two days, and its dry weight was measured to estimate the specific leaf area. Transpiration and CO2 assimilation rates of a fully expanded, mature, single attached leaf were measured under controlled environmental conditions using an LI-6400 portable photosynthesis system (Li-Cor, Lincoln, NE, USA) in the 2nd week after transplanting. The measurement conditions were as follows: leaf temperature, 25°C; PPFD supplied by a red-blue LED light source (10% blue based on PPFD), 150 μmol m−2 s−1; leaf vapor pressure deficit, 1.0 kPa; and ambient CO2 concentration, 500 μmol mol−1.

Measurements of root activity

In the 2nd week after transplanting, intact root samples were taken from six growing plants that had developed new roots under each hydroponic condition, and the rates of root respiration were measured polarographically at 25°C using a Clark-type gas-phase oxygen electrode (CB1D; Hansatech, Norfolk, UK) with incoming humidified ambient air (21% O2) under dark conditions. After measuring the respiration rate, the root was blotted by pressing slightly for 10 s between two sheets of filter paper (No. 1, ADVANTEC, Tokyo, Japan), then its fresh weight was measured and the dehydrogenase activity of the roots was tested according to the triphenyltetrazolium chloride (TTC) method. Root samples (1.0-1.5 g FW) were cut into small pieces of 10 mm and placed in a tube containing 5 ml of an equally mixed solution of 0.1% TTC and 0.1 M phosphate buffer (pH 7.0) for a reduction reaction for 2 h at 37°C. Then, 2 ml of 1 M H2SO4 was added to the tube to stop the reaction. The root was blotted with a paper towel and homogenized in ethyl acetate. The extract was transferred to a tube and brought to 6 ml by adding ethyl acetate. The absorbance of the extract at 485 nm was recorded using a spectrophotometer, and the concentration of triphenylformazan (TPF) as the reaction product was calculated by comparison to a standard curve prepared by diluting TPF obtained after complete reduction of 0.1% TTC (Wako Pure Chemical Industries, Ltd., Osaka, Japan) with Na2S2O4.

Measurement of leaf constituents

The contents of nitrate nitrogen, calcium ions, phosphate ions and potassium ions in the leaves were measured every week, and the total soluble protein (TSP) and chlorophyll were measured in the 2nd week after transplantation. The leaves (3 g FW) were homogenized in deionized water. The homogenate was passed through filter paper (No. 1, ADVANTEC, Tokyo, Japan) and measured using RQflex (Merck Millipore, Darmstadt, Germany), because RQflex has previously been used for measuring ion contents (Kintzios et al., 2004) and the measurements were reported to correlate closely with HPLC analysis (Ito et al., 2013). In the present study, accurate quantitative correction was not carried out because the measurements were used only for comparison between treatments. Other leaves (2 g FW), sampled for measuring the concentration of TSP, were homogenized in 50 mM Tris-HCl buffer (pH 8.0) containing 10 mM MgCl2, 0.2 mM ethylenediaminetetraacetic acid and 5 mM dithiothreitol on ice, and 1 ml of the extract was transferred to a tube.
Then, 1 ml of 20% (w/v) trichloroacetic acid was added to the tube and left for more than 15 min. The supernatant was removed, and the pellet was dissolved in 1 ml of 1 M NaOH for 24 h. The TSP concentration of the samples was measured by the Coomassie Brilliant Blue protein assay (Nacalai Tesque, Kyoto, Japan) and calculated by comparison to a standard curve with diluted BSA (Nacalai Tesque, Kyoto, Japan). The chlorophyll content of the leaves (1 g FW) was analyzed after extraction by grinding the leaves in pre-cooled 80% (v/v) acetone and quantified spectrophotometrically according to the method of Arnon (1949).

Data analysis

All data were subjected to a one-way analysis of variance (ANOVA), and the mean differences were compared using Tukey's HSD test when the F-test indicated a significant difference at P < 0.01. The hydroponic culture experiment was repeated twice under the same conditions; each data point was the mean of 12 replicates, and a comparison with P < 0.05 was considered significantly different. A minimal sketch of this procedure is shown below.

Flux density of dry-fog particles and lettuce growth

The number of dry-fog particles trapped in the silicone oil layer on a slide glass was clearly higher in FL than in NF (Fig. 1, Photos), meaning that the rhizosphere with the fast flow rate was filled with much more foggy water and nutrient solution than that with the low flow rate. In FL, approximately three times as many nutrient droplets were collected on the silicone oil layer as in NF (Table 1). This is a direct observation showing a relationship between the flow rate of the dry-fog spray and the amount of dry-fog particles in the root zone. The average diameter of the droplets was less than 10 μm in both FL and NF and showed no significant difference. There was no difference other than in the flow rate for the supply of nutrient solution. Because the nozzle continuously sprayed dry-fog and saturated relative humidity was maintained in a chamber filled with dry-fog nutrients, more foggy nutrient droplets can contact the root surface under active flow conditions than under non-flow conditions. It was therefore thought that the roots in FL could absorb water and nutrients more easily than in NF when the roots had the same volume and the same surface area.

The growth of leaves in each week after transplantation was significantly higher in FL than in NF but showed no difference between FL and DFT (Fig. 2). The growth of roots in the 1st week in both dry-fog cultures was significantly higher than that in DFT but showed no difference between the two dry-fog cultures. The root fresh weight in NF in the 2nd week was significantly higher than in the other two treatments, at more than double that of DFT. As a result, the top/root fresh weight ratios (T/R) in dry-fog aeroponics were significantly lower in each week compared to that of DFT. We have already reported, for dry-fog aeroponics with altered fog particle size, that the T/R of lettuce decreased with increased root growth relative to shoot growth in the early growth stage after planting (Hikosaka et al., 2014). The T/R of tobacco plants was affected by various environmental factors, especially in the rhizosphere (Andrews et al., 2006). These findings suggested that lettuce plant growth, especially root growth, responds sensitively to the rhizosphere in aeroponics, where environmental conditions might be less stable compared to those in DFT and soil.
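The sketch referred to in the Data analysis subsection above is given here: a minimal R version of the one-way ANOVA followed by Tukey's HSD comparison, applied to a hypothetical data frame (growth) with a numeric response (fw) and a three-level treatment factor. It illustrates the statistical procedure only and is not the authors' original script.

# growth: hypothetical data frame with columns fw (e.g. fresh weight per plant)
# and treatment (factor with levels "FL", "NF", "DFT"), 12 replicates per level
fit <- aov(fw ~ treatment, data = growth)
summary(fit)                      # F-test; proceed if significant at P < 0.01
TukeyHSD(fit, conf.level = 0.95)  # pairwise mean comparisons at P < 0.05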
There was no change in the T/R between the 1st and 2nd weeks after transplantation in FL, but in NF it decreased significantly in the 2nd week compared with the 1st week. In addition, well-developed lateral roots and root hairs on the root surface were observed in dry-fog aeroponics, and these root traits were especially pronounced in NF (Fig. 3). We hypothesized that, in the dry-fog culture, lettuce roots in the aerobic rhizosphere absorbed enough water and nutrients from the foggy particles flowing in the root zone by altering root morphogenesis, developing branched roots with root hairs, and by enlarging root biomass through the preferential distribution of new assimilates into root rather than leaf growth, and that these growth and developmental changes are adaptations to the aerobic rhizosphere environment that increase root surface area to improve the efficiency of catching dry-fog particles and absorbing water and nutrients. We considered that the roots grown in FL quickly adapted to the aerobic rhizosphere conditions within one week after transplantation and that shoot growth increased in the 2nd week, whereas the roots grown in NF could not completely adapt within one week and maintained assimilate partitioning toward root growth in the 2nd week. Although root growth had priority over shoot growth for catching dry-fog nutrients efficiently, the shoot fresh weight showed no significant difference in either week after transplantation between FL and DFT, whose roots grew without any morphological changes. It is expected that well-developed roots in dry-fog aeroponics promote the absorption of water and nutrients and subsequent shoot growth. Therefore, we concluded that dry-fog aeroponics increases shoot growth after the adaptation of roots to the aerobic conditions in the rhizosphere, and further investigations are needed on the growth characteristics after three weeks.

Physiological activity of the leaves and roots

The photosynthetic rate of the attached mature leaves in the 2nd week after transplanting was significantly increased by dry-fog aeroponics and was significantly higher in NF than in FL (Table 2). The stomatal conductance was also higher in dry-fog aeroponics, especially in NF, whose leaves took up more CO2 into the intercellular space than did those grown in DFT under the same atmospheric CO2 concentration. There were no differences in the contents of chlorophyll and TSP or in the SLA, which indicates the thickness of a leaf, among all of the hydroponic systems, so it was thought that the photosynthetic rate of leaves grown under dry-fog aeroponics increased due to the elevated CO2 concentration in leaves with open stomata and not due to an increased photosynthetic ability. In terms of root morphogenesis in spray culture, it is particularly worth noting that root hairs formed on the surface of roots under the dry-fog aerobic conditions, because no related reports have been found so far. Bibikova and Gilroy (2003) reported that the development of root hairs is influenced by abscisic acid (ABA) production under water stress. Furthermore, the availability of phosphorus or nitrate in the rhizosphere affects the length, number and density of root hairs (Bates and Lynch, 1996; Ma et al., 2001; Hammac et al., 2011). Thus, the development of root hairs in some plant species may be related to water stress or nutrient stress as a survival adaptation to an unfavorable rhizosphere environment.
However, in this study, the lettuce plants never experienced water or nutrient shortages because of the continuous spraying of nutrient solution at the proper concentration throughout the day. Thus, the development of root hairs in dry-fog aeroponics was not driven by water-stress-related ABA biosynthesis, because the stomatal conductance in dry-fog aeroponics, especially in NF with significantly developed root hairs, was higher than in DFT, where plants were never exposed to water stress. This increased stomatal conductance in the 2nd week after transplanting to dry-fog aeroponics showed that the evapotranspiration demand was met to a much greater extent in dry-fog aeroponics than in DFT, through sufficient absorption of water resulting from the increased surface area of roots and the decreased T/R.

The respiration rate of roots is an important physiological parameter for evaluating root activity for absorbing water and nutrients. There was no significant difference in the root respiration rate within dry-fog aeroponics (FL vs. NF), and that of DFT was significantly lower compared to those of dry-fog aeroponics in the 2nd week after transplantation (Table 2). These results indicate that the roots increased not only the surface area of developing root hairs to improve the dry-fog trapping efficiency but also the root absorption activity for water and nutrients adhering to the root surface. On the other hand, the TTC reducing activity was significantly higher in DFT, but there was no difference between FL and NF (Table 2). TTC is absorbed into tissue cells and reduced to red-colored TPF, which has been used as an indicator of dehydrogenase activity in many studies of root viability. At the same time, TTC reduction activity is also used as a parameter to evaluate the uptake of water and nutrients (Wang et al., 2006). It has been hypothesized that good growth of shoots needs both a high TTC reduction activity and a large root surface area (Wen-Zeng et al., 2011). The TTC reduction activities of the roots in FL and NF were approximately 60% and 70% of those in DFT (Table 2), but the root fresh weights were 144% and 233% of those in DFT in the 2nd week after transplantation (Fig. 2). Considering total root activity as the product of the TTC reduction activity per FW and the total FW of the whole roots per plant, there was no marked difference in the total absorption activity per plant between DFT and FL, and NF was expected to have the highest absorbing activity. Because the TTC absorbed by roots is reduced mainly by dehydrogenases in mitochondria, the relationship between the TTC reduction activity and the respiration rate of roots has been investigated (Louise et al., 2000). Furthermore, considering that the root morphology changed drastically in dry-fog aeroponics, the physiological activities of roots are expected to adapt to the dry-fog rhizosphere environment. Increases in both stomatal conductance and photosynthetic rate of the plants grown in the dry-fog rhizosphere might be caused by improved root morphology and physiological activity that enhance the absorption of water and nutrient solution. These were results of root growth adaptation to the dry-fog environment at two weeks after transplanting; however, their influence on leaf growth could be reflected somewhat later. Additional research is necessary to explain the changes in root growth adaptation and the growth of plants cultivated in dry-fog aeroponics over a longer period.
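The "total root activity" comparison described above is simple arithmetic on the values quoted in the text, as the short R calculation below makes explicit; all quantities are expressed relative to DFT (set to 1), and the index is the product of the relative TTC reduction activity per FW and the relative root fresh weight.

# Relative specific TTC reduction activity and relative root fresh weight,
# both with DFT = 1, taken from the values quoted in the paragraph above
rel_ttc <- c(DFT = 1.00, FL = 0.60, NF = 0.70)
rel_fw  <- c(DFT = 1.00, FL = 1.44, NF = 2.33)

total_root_activity <- rel_ttc * rel_fw
round(total_root_activity, 2)
# DFT = 1.00, FL = 0.86, NF = 1.63: FL is close to DFT, NF is clearly the highest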
Leaf constituents

The contents of nitrate nitrogen and phosphate ions in mature leaves were the highest in DFT in both weeks (Fig. 4). FL had significantly higher values than NF in the 1st week, but there was no significant difference between these values in the 2nd week due to the increased surface area of the roots in NF. Because the roots in NF had almost the same nutrient absorption activity as FL, increases in both the biomass and the absorbing activity of the roots could promote shoot growth from the 3rd week after transplanting. Although the roots in dry-fog aeroponics were less exposed to nutrient solution in the rhizosphere than in DFT, and the leaf contents of nitrate nitrogen and phosphate were lower in both FL and NF than in DFT with unchanged chlorophyll content, no nutrient deficiencies were observed in the leaves in dry-fog aeroponics. The content of potassium ions in mature leaves was also significantly higher in DFT in both weeks compared to that in dry-fog aeroponics. NF had a significantly higher value than FL in the 2nd week. The content showed no significant changes between the 1st and 2nd weeks in DFT and FL, but in NF it increased by 30%. The content of calcium ions in mature leaves was significantly higher in FL than in DFT in both weeks, and FL was higher than NF in the 2nd week. Because calcium and potassium have a competitive relationship, enhanced potassium absorption in NF in the 2nd week inhibited an increase in calcium. There was no significant difference in the calcium content between NF and DFT in the 2nd week. Insufficient absorption and transport of calcium causes the physiological disorder "tip-burn" in the shoots of lettuce (Bangerth, 1979; Barta and Tibbitts, 2000) and decreases the production of lettuce. Many studies have investigated the improvement of calcium absorption during the cultivation of leafy vegetables (Shibata et al., 1995; Goto and Takakura, 2003; Takahashi et al., 2012). Although tip-burn did not occur in lettuce cultivation with dry-fog aeroponics, lettuce plants can grow well with healthy consumption values. We suggest that dry-fog aeroponics inhibits the occurrence of tip-burn in lettuce production with minimum amounts of nutrient solution and water.

CONCLUSION

In dry-fog aeroponics, the flow rate of dry-fog through the rhizosphere can be controlled easily using a fan. The shoot and root growth of leaf lettuce plants was altered depending on the flow rate, which affected the amount of dry-fog particles adhering to the surface of the roots. The roots grew and developed adaptively to the dry-fog aerobic rhizosphere, with good branching, root hairs and high respiration activity. These roots might preferentially increase the ability to absorb water and nutrients, leading to sufficient fulfillment of the evapotranspiration demand and enhancing the photosynthetic rate and stomatal conductance. Accordingly, dry-fog aeroponics is thought to be an effective cultivation system to promote plant growth. As the flow rate in dry-fog aeroponics indirectly affects plant growth through root development, an optimally controlled flow rate might make it possible to accelerate root growth adaptation, thereby leading to improved growth in the early stages after planting. Therefore, further investigations are necessary to better understand the optimal rhizosphere environment for root growth that maximizes production yield in dry-fog aeroponics.
2019-03-31T13:41:43.816Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "10e2ad2e63baa433ec680a9b181af583273e8862", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ecb/53/4/53_181/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "41d7b0e93b4ce95f97f2dcf730441b218a7ce8e6", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
268731109
pes2o/s2orc
v3-fos-license
Reinfection of farm dogs following praziquantel treatment in an endemic region of cystic echinococcosis in southeastern Iran

Cystic echinococcosis (CE), a prevalent tapeworm infection of humans and herbivorous animals worldwide, is caused by accidental ingestion of Echinococcus granulosus eggs excreted by infected dogs. CE is endemic in the Middle East and North Africa and is considered an important parasitic zoonosis in Iran. It is transmitted between dogs as the primary definitive host and different livestock species as the intermediate hosts. One of the most important measures for CE control is dog deworming with praziquantel. Due to the frequent reinfection of dogs, intensive deworming campaigns are critical for breaking CE transmission. The dog reinfection rate can be used as an indicator of the intensity of local CE transmission in endemic areas. However, our knowledge of the extent of reinfection in endemic regions is poor. The purpose of the present study was to determine the E. granulosus reinfection rate after praziquantel administration in a population of owned dogs in Kerman, Iran. A cohort of 150 owned dogs was recruited, with stool samples collected before praziquantel administration as a single oral dose of 5 mg/kg. Re-sampling of the owned dogs was performed at 2, 5 and 12 months following the initial praziquantel administration. Stool samples were examined microscopically using the Willis flotation method. Genomic DNA was extracted, and E. granulosus sensu lato-specific primers were used to PCR-amplify a 133-bp fragment of a repeat unit of the parasite genome. Survival analysis was performed using the Kaplan-Meier method to calculate cumulative survival rates, used here to capture reinfection dynamics, and the monthly incidence of infection; the spatial distribution of disease risk was also assessed. Results of the survival analysis showed total reinfection rates of 8, 12 and 17% at 2, 5 and 12 months following the initial praziquantel administration, respectively, indicating that 92, 88 and 83% of the dogs had no detectable infection over those same time periods. The monthly incidence of reinfection in the total owned dog population was estimated at 1.5% (95% CI 1.0-2.1). The results showed that the prevalence of echinococcosis in owned dogs, using a copro-PCR assay, was 42.6%. However, using conventional microscopy, 8% of fecal samples were positive for taeniid eggs. Our results suggest that regular treatment of the dog population with praziquantel every 60 days is ideal; however, this frequency of dog dosing faces major logistic and cost challenges, threatening the sustainability of control programs. Understanding the nature and extent of dog reinfection in endemic areas is essential for successful implementation of control programs and for understanding patterns of CE transmission.

Introduction

Cystic echinococcosis (CE), caused by the metacestodes of Echinococcus granulosus sensu lato (s.l.), is considered a prevalent zoonotic disease of humans and livestock worldwide. CE is transmitted between carnivorous and herbivorous mammals as definitive and intermediate hosts, respectively [1]. The infection occurs following accidental ingestion of eggs excreted in dog feces and dispersed in the environment. CE infection is highly endemic and widespread in many parts of the world, especially in specific rural settings where humans and animals live in close proximity [2].
CE is present in the Middle East and Central Asia and poses a potential threat to the human population. Iran is considered an endemic focus of CE, where high prevalence rates of infection with various genotypes of E. granulosus have been demonstrated in dogs as well as in different livestock [2][3][4]. CE is estimated to impose a substantial cost on society, including losses associated with human surgical and other treatment costs and livestock production losses. In Iran, the overall annual monetary burden of CE has been estimated at US$232 million [5]. Various biotic and abiotic agents play a role in the endemicity of CE. In addition, behavioral and socio-economic factors, including close contact with dogs, outdoor activities, and contaminated food and water, are important factors in CE transmission to humans. Livestock husbandry is a major part of Iran's economy, and many parts of the country are inhabited by farmers, as well as settled, nomadic and semi-nomadic pastoral communities, practicing livestock husbandry and using shepherd dogs [6]. The pastoral lifestyle involves farm slaughter, home slaughter and unregulated slaughter sites, which facilitate the access of farm dogs as well as free-roaming dogs to livestock offal. This is one of the key determinants of CE transmission in many endemic areas of the world, including Iran. In-depth understanding of the epidemiology of human and animal echinococcosis is required for developing effective control programs.

According to the World Health Organization (WHO), monitoring of echinococcosis in animal populations is essential for disease control and prevention programs [7]. Praziquantel (PZQ) dosing of dogs, livestock vaccination, improving abattoir infrastructure, control of at-home slaughter, meat inspection and removal of sheep offal from the environment, management of the free-roaming dog (FRD) population, registration of owned dogs, and public and professional education interventions are proposed as the main elements of CE control programs [8]. However, due to the diverse epidemiological conditions of echinococcosis and its inherent complexities, a one-size-fits-all strategy cannot be successfully implemented for interrupting transmission. Overall, four phases are considered for CE control: planning, attack, consolidation and maintenance [8].

Planning is an important phase of control, as it needs to address prerequisites including intersectoral coordination, sustainable funding, community participation and advocacy. One of the key elements of CE control plans is the availability of accurate baseline information from human and animal surveillance data [7,9]. As part of this baseline information, one of the most important data needs regarding echinococcosis transmission is epidemiological data on echinococcosis in dogs, the most important definitive host of the parasite. Generally, dogs can become infected by feeding on livestock offal remaining after slaughter at home or at substandard abattoirs. Praziquantel dog dosing is a key measure for CE control in many endemic countries; however, dogs can be reinfected some time after PZQ administration. The rate of reinfection in dogs is an indicator of the intensity of CE transmission in endemic areas. The frequency of dog deworming depends on the rate of dog reinfection in endemic regions, and this is a major factor in the planning of CE control programs.
Theoretically, continuous dog dosing campaigns ensure a critical decrease in CE transmission. However, in this circumstance, frequent dog dosing in endemic regions is associated with major logistic and financial problems, including the cost of the drugs, costs related to drug delivery, and staff education. Certainly, reinfection studies are essential to address various questions on the epidemiology of echinococcosis in dogs, and to provide baseline data for CE control and prevention.

Several studies investigating reinfection of dogs have been conducted in CE-endemic regions, including the northwestern province of Xinjiang in China, northern Kenya, northern Libya, southern Argentina, Tunisia, Morocco and Uruguay [10][11][12][13][14][15][16]. Our knowledge of reinfection in dogs is very limited in Middle Eastern countries, including Iran. Therefore, understanding the nature and extent of dog reinfection in endemic areas is essential for successful implementation of control programs and understanding patterns of CE transmission [17]. Additionally, understanding the spatial heterogeneity of disease prevalence can show where intervention and control measures are needed most. Poor knowledge of this issue in endemic regions leads to wasted time and energy in preventive measures against cystic echinococcosis.

The findings may further assist relevant authorities in conserving and developing integrated management practices related to human and animal health, achieving an appropriate control approach for CE. The aim of this study was to determine the reinfection rate with E. granulosus in a population of owned dogs after PZQ administration in a natural setting. We also evaluated the prevalence in sampled areas using a spatial model to highlight regions with increased disease risk. The results of this study can add to our knowledge about the dynamics of CE transmission in endemic countries.

Ethics statement

This study was approved by the Ethical Review Committee of Kerman University of Medical Sciences and the Research Center for Hydatid Disease in Iran (RCHD), code No. IR.KMU.REC.1398.468. The study was performed between December 2019 and December 2020.

Study location and sample collection

Kerman is located on a high margin of Dasht-e Lut (the Lut Desert) in the south-central part of Iran, 1,755 m (5,758 ft) above sea level, with an average annual rainfall of 142 mm. Kerman has a continental climate, with hot summers and cold winters, and is surrounded by mountains. Based on previous data on the livestock population, the provincial veterinary authority in Kerman was asked to provide 100 epidemiological units (farms) selected by random sampling. One hundred and fifty owned dogs, belonging to 83 farms at 51 GIS points in the region, were included in the present study. The coordinates of the 51 points where the samples were collected are provided in the supporting information (S1 Data File).

The owners of the dogs participating in the survey were informed of the main research goals. All the dogs found on each selected farm were examined. The dogs included in this study were mostly working dogs serving as sheepdogs and/or guard dogs. Dogs with severe health conditions and those dogs whose owners refused to participate were excluded from the study.
The study objectives were thoroughly explained to the dog owners, ensuring that they understood the importance of the study and the significance of sample collection. The dogs involved in this study were mostly tied up throughout the day, with limited freedom to roam. They were freed during the night to carry out their role of guarding and protecting the properties and livestock. In this case, the fresh stool sample was collected from where the dog was restrained. In other cases, if a dog was observed defecating, the fresh stool was located and the sample was collected from the uppermost surface of the feces with appropriate hygiene precautions. All samples were labeled with dog data, including dog name and/or number, age and sex. The samples were immediately transferred and stored in a -70°C freezer for at least two weeks. A questionnaire was prepared for collecting data on age, sex, risk factors of dog and human infection, dog dosing history, dog food, roaming status, home slaughter, and offal disposal behavior.

Parasitological study

Stool samples were collected before PZQ administration (step 0). In order to eliminate dog tapeworms, all dogs received a single oral dose of 5 mg/kg PZQ under supervision, according to the WHO/WOAH guidelines [7]. Dogs were classified into three categories according to body mass: small (<10 kg), medium (10-25 kg) and large (>25 kg) [18], and the dosage was administered to each animal accordingly. Re-sampling of the owned dogs was performed at 2 (step I), 5 (step II) and 12 months (step III) after the initial PZQ administration [13]. Flotation-based methods using different high-density solutions are currently the method of choice for detecting taeniid eggs in canine feces. The stool samples were examined microscopically in the laboratory using the Willis flotation method, one of the well-known flotation techniques for stool examination. Briefly: a) 2 g of feces was placed in a mortar; b) 100 ml of saturated NaCl solution was added and mixed with the stool; c) the fecal suspension was gently poured through a tea strainer into a test tube until a convex meniscus formed at the top of the tube; d) a coverslip was placed carefully on top of the tube and the tube was left for 20 minutes; e) the coverslip was carefully lifted off and transferred to a clean slide. Finally, the slide was thoroughly examined using an optical microscope [19].

Molecular study

Genomic DNA was extracted from all stool specimens using the ExGene Stool DNA mini kit (GeneAll, South Korea), according to the manufacturer's instructions. The primers Eg1121a, 5'-GAATGCAAGCAGCAGATG-3' (forward), and Eg1122a, 5'-GAGATGAGTGAGAAGGAGTG-3' (reverse), were used for specific identification of E. granulosus sensu lato. This primer pair amplifies a 133-bp fragment of a repeat unit of the parasite genome with suitable sensitivity and specificity [20,21]. The PCR was carried out in 25 μl reaction volumes in a FlexCycler (Analytik Jena, Germany) thermocycler. The thermal profile included an initial denaturation at 94°C for 5 min; followed by 35 cycles, each of 30 s at 94°C, 45 s at 50°C and 35 s at 72°C; and a final extension of 10 min at 72°C. The amplification products were subjected to electrophoresis on a 2% agarose gel in TAE buffer.
Data analyses

The Kaplan-Meier method was applied to calculate cumulative survival rates. For reinfected dogs, time was defined as the time of infection. For non-infected dogs, the time of the last microscopic examination was defined as the censored time. To calculate the incidence rate of infection, the number of positive cases was divided by the total dog-time of follow-up. Log-rank and chi-square tests were used to investigate statistical differences in CE prevalence between male and female dogs. All statistical analyses were performed in R.

Spatial analyses

A spatial analysis of the sampled locations was conducted using the INLA package [22] in R. Following the method of P. Moraga (2019) [23], a stochastic partial differential equation (SPDE) model was developed. The model assumes that the number of positive samples follows a binomial distribution, Y_i ~ Binomial(N_i, P(x_i)), with Y_i representing the number of positive samples taken, N_i the number of samples tested, and P(x_i) the prevalence at location x_i. The prevalence is modeled through a logit link as logit(P(x_i)) = β_0 + S(x_i), with β_0 representing the intercept (i.e. the average prevalence of the region) and S(x_i) the spatial random effect, which follows a zero-mean Gaussian process. A triangulated mesh was built to cover the city of Kerman and the surrounding area, upon which the SPDE model was created. The prevalence was predicted onto a raster of 150 by 150 pixels over a 0.6° longitude by 0.6° latitude square region around the city of Kerman. The relative risk of each pixel was calculated as R(x_i) = P(x_i)/β_0, with R(x_i) representing the relative risk at location x_i. The model outputs were visualized using the R Leaflet package [24]. The full model description and code are available at https://github.com/MabEntez/Dog-reinfection-spatial-model-in-kerman.git.
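To illustrate the survival and incidence calculations described above, the following minimal R sketch uses the survival package. The data frame and column names (dogs, time, status, sex) are illustrative assumptions, not the objects used in the study.

# Minimal sketch of the survival/incidence analysis (assumed object names).
# dogs$time:   months of follow-up until reinfection or censoring (last examination)
# dogs$status: 1 = reinfected, 0 = censored
# dogs$sex:    "male" or "female"
library(survival)

km <- survfit(Surv(time, status) ~ 1, data = dogs)   # Kaplan-Meier cumulative survival
summary(km, times = c(2, 5, 12))                     # survival at steps I, II and III

incidence <- sum(dogs$status) / sum(dogs$time)       # positives / total dog-months of follow-up

survdiff(Surv(time, status) ~ sex, data = dogs)      # log-rank test, male vs. female dogs

The spatial model itself is fully specified in the authors' GitHub repository; the sketch below only illustrates the general R-INLA SPDE workflow of Moraga (2019) under assumed object names (coords, pos, tested) and illustrative mesh settings, not the exact specification used in the study.

# Illustrative R-INLA SPDE workflow (assumed object names and settings).
# coords: matrix of sampling locations (longitude, latitude)
# pos, tested: positives and number of samples tested at each location
library(INLA)

mesh <- inla.mesh.2d(loc = coords, max.edge = c(0.05, 0.2), cutoff = 0.01)
spde <- inla.spde2.matern(mesh = mesh, alpha = 2)
idx  <- inla.spde.make.index("s", spde$n.spde)
A    <- inla.spde.make.A(mesh = mesh, loc = coords)

stk <- inla.stack(tag = "est",
                  data = list(y = pos, n = tested),
                  A = list(1, A),
                  effects = list(data.frame(b0 = rep(1, nrow(coords))), s = idx))

# Binomial likelihood with logit link: logit(P(x_i)) = b0 + S(x_i)
res <- inla(y ~ 0 + b0 + f(s, model = spde),
            family = "binomial", Ntrials = n,
            data = inla.stack.data(stk),
            control.predictor = list(A = inla.stack.A(stk), compute = TRUE, link = 1))

# Fitted prevalence at the data locations and relative risk; the paper scales by the
# regional average prevalence (beta_0), approximated here by the mean fitted prevalence
p_hat <- res$summary.fitted.values[inla.stack.index(stk, "est")$data, "mean"]
rr    <- p_hat / mean(p_hat)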
Results

A total of 150 dogs were registered for determining Echinococcus infection over one year. The sex and age distribution of the dogs is shown in Table 1. Before praziquantel administration, taeniid eggs were found in 8% of dog samples using stool microscopy. However, using the more sensitive molecular methods, it was determined that 42.6% of the stool samples were infected with Echinococcus. A cohort of 150 owned dogs was registered to determine the reinfection rate with E. granulosus within the dog population. Results of the survival analysis are shown in Table 2. Echinococcosis reinfection rates at the different steps of the survey were 8, 12 and 17% at steps I (two months post-treatment, mpt), II (5 mpt) and III (12 mpt), respectively (Fig 1). This corresponds to survival rates of 92, 88 and 83% over the same time periods, i.e. no new event (new infection) had occurred in those dogs. The monthly incidence of reinfection in the total owned dog population was 1.5% (95% CI 1.0-2.1; 26/1744 dog-months). Findings showed that 8% of the dogs were already found infected only 2 months after PZQ dosing. Over the 12-month period a number of dogs dropped out of the study due to absence, loss, or death of the dogs at the study sites, so that at step I, 71% of the initial dog population (106 dogs) was sampled. At step II, 59% (88 dogs) and at step III, 60% of the initial owned dog population (90 dogs) were resampled. There was a higher proportion of male to female dogs, at a ratio of 3.6:1. The difference in E. granulosus prevalence between female and male owned dogs was not significant (P = 0.776). The monthly incidence of reinfection in male and female dogs was 1.5 and 1.3%, respectively. Dogs between the ages of 1 and 3 years were most common in the population. The monthly incidence of reinfection in pup/young dogs and adult/old dogs was 3.1 and 1.2%, respectively (p = 0.03) (Table 3). Dogs were mostly kept for the purpose of guarding the household and livestock. As shown in Table 1, all dog owners reported home slaughter and feeding dogs with livestock offal. Dog dosing with anthelminthic drugs was not reported by the dog owners. The main output of the SPDE spatial model is the relative risk of the sampled regions. For clarity of visualization, only pixels with a relative risk of less than 0.985 or more than 1.015 are presented on the map (Fig 2). The relative risk ranged from 0.9 to 1.21, with β_0 (the average prevalence in the population) at 0.415.

Discussion

The present study investigated the reinfection status of owned dogs in an endemic region of cystic echinococcosis in Iran. CE is endemic in Iran, and the prevalence of Echinococcus granulosus in dogs in different parts of Iran has been estimated at between 6.8 and 55.7% [3]. PZQ dosing of dogs is the first-line option for CE control programs; however, many dogs become reinfected following PZQ treatment, and this presents a major challenge for control programs. The reinfection rate of owned dogs determines the optimum frequency of dog dosing [25]. Moreover, the rate of dog reinfection is one of the best indicators of the potential of CE transmission between humans and animals. Therefore, knowledge of echinococcosis transmission among owned dogs can illustrate the extent to which dogs have access to livestock offal. The frequency of dog dosing is a major issue regarding the logistics, costs and sustainability of programs [2,13]. The SPDE model shows that there are distinct sampled areas with higher rates of infection, with a maximum increase of about 20% in some areas. Areas with higher relative risk could be a sign of increased transmission or of neglected control. According to the WHO/WOAH, the efficacy of a single dose of oral PZQ (5 mg/kg) in owned dogs is 99.9%; consequently, any test-positive dog after drug intervention is considered a new infection. The pre-patent period of E. granulosus in dogs is approximately 45 days. This is a sufficient period of time for echinococcosis to become established in the canine host [26]. Few studies have been performed in CE endemic countries to investigate the E. granulosus reinfection rate of owned dogs. In the present study, a cohort of owned dogs in Kerman was followed for one year to evaluate dog reinfection status after the administration of a single dose of PZQ. Findings of the study indicate that no infection event occurred in 83% of the dogs over the one-year period, corresponding to an annual reinfection rate of 17% in the owned dogs. This is in accordance with the findings of a study on dog reinfection in Libya. A study in northwest Libya on owned dogs 15 months after praziquantel treatment estimated the reinfection rate of dogs at 22% [10], with a significant decrease in prevalence from 21.6% to 9%. In northwest China, the E. granulosus reinfection rate 3 months after praziquantel treatment was 25% [11]. The corresponding figure for owned dogs in Argentina was 10% [14].
The prevalence of E. granulosus in the rural dog population in Uruguay was 13.2%, and reinfection rates of dogs at 2, 4, 8 and 12 mpt were estimated at 0, 5.4, 18.6 and 27.9%, respectively [15]. In a cohort study in Tibetan communities of Sichuan, the rate of reinfection of dogs at 2, 5 and 12 mpt was investigated. In that study, canine Echinococcus coproantigen prevalence was reported to fall from 21% at baseline to 9.6% two months post-treatment [13]. In three locations of the Middle Atlas, Morocco, Amarir et al. showed that 23% of PZQ-treated dogs were re-infected 4 months after treatment. They concluded that a 2- or 3-month interval between PZQ treatments in owned dogs controls the risk of reinfection of owned dogs and significantly reduces reinfection after the second treatment [27]. In a study in the northwestern Turkana district of Kenya, natural infection rates with E. granulosus in dogs were assessed at 6, 12, 18, and 24 weeks following PZQ. The prevalence in dogs was shown to reach about 20% only six weeks after PZQ dosing, and finally returned to pretreatment levels within 24 weeks [16]. Compared to our study, this reflects the significant differences in epidemiology and infection pressure between the two regions: the Turkana region has one of the highest incidence rates of CE, whereas Iran is a country with a moderate incidence of CE. In Tunisia, the reinfection rate of rural semi-stray dogs with E. granulosus was examined by arecoline purgation at 2, 4, 8, and 12 months. Lahmar et al. showed that reinfection with E. granulosus occurred very quickly after the end of PZQ treatment, such that after 2 months the infection rate of dogs was not significantly different from that at day 0. Regular treatment of the dog population with PZQ every 60 days was recommended, and it was suggested that dog dosing every 6 months would decrease the pressure of E. granulosus infection in the rural regions of Tunisia [12]. Findings of the present study indicate a monthly incidence of 1.5%, which corresponds to reinfection of nearly 10% of the dog population over a 6-month period (1.5% per month over six months is roughly 9%). However, it should be noted that the major part of reinfection occurred in the first two months following PZQ treatment, i.e. 8% of dogs were quickly reinfected within only 2 months, with the remaining 8% reinfected over the following 10 months. It can be concluded that these dogs belong to owners with irresponsible dog-ownership behaviors, such as regularly feeding their dogs with livestock offal and practicing home slaughter. According to our field observations, home slaughter and feeding dogs with livestock offal are common practices in the study area, and all dog owners reported these practices at least once within the last 12 months. Cystic echinococcosis is a major zoonotic neglected tropical disease (NTD).
Knowing the E. granulosus reinfection rate of local dogs is an essential indicator of the potential for disease transmission to humans and livestock. In parallel with the findings of Lahmar et al. in Tunisia and Moss et al. in Sichuan, PZQ dosing of owned dogs every 2 months can significantly diminish dog infection and subsequent CE transmission in endemic regions [12,13]. Therefore, continuous PZQ dosing at least 2-3 times a year greatly reduces the risk of human and livestock CE [28]. However, a practical and acceptable frequency of dog deworming in Iran has to balance the positive impact with operational sustainability and feasibility. Further field and modeling evidence is required to find the optimum dosing frequency with maximum effect on CE transmission. In addition, different reinfection patterns can be found in different endemic regions, and in the case of Iran, where comprehensive studies on reinfection rates have been lacking, it is advisable to implement further studies in areas with different endemicity to identify the most suitable control strategy with regard to dog deworming. It should be noted that some other factors can be of potential importance in determining the time interval of deworming campaigns. One major factor is the species/genotype of E. granulosus sensu lato. According to studies performed in Kerman, both E. granulosus sensu stricto and the E. canadensis G6 genotype have been reported from human patients in Kerman [29]. Developmental differences between species and genotypes of E. granulosus sensu lato, and variation in the pre-patent period in the definitive host, are important factors in control programs [30]. It has been demonstrated that the pre-patent period is usually 6 weeks in dogs, but it may range from 34 to 58 days depending on the genotype of the parasite as well as the breed of dog [31]. Therefore, a uniform anthelmintic deworming frequency cannot be proposed for all endemic areas, and this frequency should be arranged according to the epidemiological data of each endemic region. In addition to the implementation of evidence-based deworming programs, EG95 livestock vaccination, combined with effective science communication, public education and advocacy, can drastically reduce the incidence of CE in both humans and animals. Perhaps one of the biggest challenges in endemic regions is the lack of knowledge about zoonotic diseases and their treatment, control and prevention. Increasing awareness through public education is one of the key measures for CE control in endemic communities.
At the same time, some other measures to reduce the chance of dog reinfection are necessary, including control of home slaughter and standard meat inspection at official abattoirs. Another aspect of home slaughter is the traditional slaughter of livestock that is common in many cultures and religions. Livestock sacrifice is particularly prevalent during the largest Muslim religious festival, known as Eid-al-Adha (Eid Ghorban, the feast of sacrifice), when animals such as sheep, goats, cattle and camels are sacrificed. It is essential to note that during this religious event, animals are slaughtered outside abattoirs, at home or during street festivals, and this potentially contributes to an increased risk of zoonotic parasite transmission by increasing the chance that infected organs or carcasses become available to carnivores [6,32]. Unfortunately, the exact frequency of home slaughter is not known in many countries, including Iran; however, field observations in different parts of the endemic areas indicate that this practice prevails throughout the country. Several echinococcosis control programs have been implemented in some CE endemic regions; however, there is no ongoing control program at national or sub-national levels in Iran [33]. Findings of a research project conducted in Kerman, Iran, from 1991 to 1994 indicate a significant decline in the infection rate among dogs (dropping from 5% in 1993 to 1.5% in 1994) following culling of unowned free-roaming dogs and biennial PZQ treatment of the owned dogs [34]. Based on our findings, there is a need for a control program with a focus on both regular dog dosing with PZQ and EG95 livestock vaccination in Kerman. Regular PZQ dosing of dogs is central to a successful control program. However, other CE control measures are also needed, such as improving abattoir infrastructure, controlling home slaughter, removing sheep offal from the environment, and public and professional education interventions. Managing dog populations is a complementary approach for reducing the burden of echinococcosis, in combination with other control measures such as PZQ dosing and measures targeting livestock. Obviously, an effective CE control program requires consistent effort and sustainable support. Control measures face serious logistic, financial and administrative constraints and need multi-/inter-sectoral coordination with a clear and comprehensive One Health approach linking the knowledge and resources of veterinary and human medicine. Integrated management can be a solution to the cost and logistic problems in the delivery of PZQ and the EG95 vaccine. For instance, simultaneous administration of the EG95 vaccine to sheep and goats together with the enterotoxaemia vaccine can be considered as part of this integrated approach [9]. Preventing dog reinfection is essential for a successful control program; however, there is no effective vaccine against echinococcosis in the definitive host. Vaccination of both the definitive and intermediate hosts would be an ideal approach for achieving animal CE control targets. Introducing an effective vaccine against the adult forms of E. granulosus could be a great advantage in enhancing other control measures; however, no sound evidence is available, and high-quality studies are required to investigate the potential for an effective vaccine against canine echinococcosis. The lack of sufficient evidence of natural safety in the final host is one of the most important reasons for not introducing a vaccine against the adult E. granulosus.
Another potential advantage of developing a vaccine for canine echinococcosis is the lower number of dogs compared to the livestock population. For example, in China's Xinjiang province, the dog population has been estimated at 1.5 million, compared to a sheep and goat population of 60 million in the province. Therefore, in terms of logistics and supply, fewer animals would need to be vaccinated [35]. In a recent study in Kerman, the population of free-roaming dogs (FRDs) was estimated at 6,781 dogs, i.e. 1.2 dogs per 100 people [36]. According to this dog:human population ratio, the total free-roaming dog population in Iran can be estimated at roughly 1,020,000 FRDs. The WHO 2021-2030 road map has delivered global targets and milestones for NTD control [37]. Echinococcosis is one of the NTDs for which goals for the next ten years have been set, based on control of the disease. In this setting, by 2023, 2025 and 2030, intensified control measures for echinococcosis should be implemented in 4, 9, and 17 endemic countries, respectively. The road map emphasizes the need for technical requirements, strategy, service delivery, and program capacity to control echinococcosis. Regarding technical progress, there is a huge gap in the diagnosis of echinococcosis. Planning, governance and program management, monitoring and evaluation, access, and logistics are also crucial for strategy and service delivery. Principal steps and critical actions to reach the 2030 goal for echinococcosis include providing baseline data and strengthening integrated national monitoring of CE, developing guidelines for effective prevention and control, as well as strengthening ultrasound diagnosis, effective interventions, and ensuring access to albendazole. Implementation of a successful CE control program will require key data, including accurate surveillance data for echinococcosis in humans, livestock and carnivores. Assessing the impact of control measures and justifying the ongoing expenditure of costly control interventions is not possible without suitable surveillance information.

Conclusion

The results of this study improve our knowledge of the dynamics of canine echinococcosis in southeastern Iran. The findings can assist authorities in achieving an appropriate control approach for CE and in developing integrated management practices related to endemic zoonoses in the region. With regard to our findings, regular treatment of farm dogs with praziquantel every two months would be ideal; however, the logistics and cost of dog dosing should be considered in a sustainable CE control program. A systematic and in-depth understanding of the epidemiology of human and animal echinococcosis can be valuable for developing effective prevention and control programs in line with the WHO road map for neglected tropical diseases 2021-2030.

Fig 2. The relative risk of infection with Echinococcus granulosus predicted from sample data by the SPDE spatial model. Map data from OpenStreetMap. The map contains information from OpenStreetMap and the OpenStreetMap Foundation, which is made available under the Open Database License. Data are available on GitHub: https://github.com/MabEntez/Dog-reinfection-spatial-model-in-kerman.git. https://doi.org/10.1371/journal.pntd.0011939.g002

Table 1. Demographic data of the farm dogs and dog ownership behavior of the farm dog owners in Kerman, southeastern Iran. https://doi.org/10.1371/journal.pntd.0011939.t001
2024-03-29T06:18:32.141Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "71549deb02ad9b515c80bdc416b4ad95a3898bb5", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0011939&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eb79e5016bb6bd0bf89bc44b1a7323d3fb0bd054", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15571355
pes2o/s2orc
v3-fos-license
Coordinated Activation of Toll-Like Receptor 8 (TLR8) and NLRP3 by the TLR8 Agonist, VTX-2337, Ignites Tumoricidal Natural Killer Cell Activity

VTX-2337 (USAN: motolimod) is a selective toll-like receptor 8 (TLR8) agonist, which is in clinical development as an immunotherapy for multiple oncology indications, including squamous cell carcinoma of the head and neck (SCCHN). Activation of TLR8 enhances natural killer cell activation, increases antibody-dependent cell-mediated cytotoxicity, and induces Th1-polarizing cytokines. Here, we show that VTX-2337 stimulates the release of mature IL-1β and IL-18 from monocytic cells through coordinated actions on both TLR8 and the NOD-like receptor pyrin domain containing 3 (NLRP3) inflammasome complex. In vitro, VTX-2337 primed monocytic cells to produce pro-IL-1β, pro-IL-18, and caspase-1, and also activated the NLRP3 inflammasome, thereby mediating the release of mature IL-1β family cytokines. Inhibition of caspase-1 blocked VTX-2337-mediated NLRP3 inflammasome activation, but had little impact on production of other TLR8-induced mediators such as TNFα. IL-18 activated natural killer cells and complemented other stimulatory pathways, including FcγRIII and NKG2D, resulting in IFNγ production and expression of CD107a. NLRP3 activation in vivo was confirmed by a dose-related increase in plasma IL-1β and IL-18 levels in cynomolgus monkeys administered VTX-2337. These results are highly relevant to clinical studies of combination VTX-2337/cetuximab treatment. Cetuximab, a clinically approved, epidermal growth factor receptor-specific monoclonal antibody, activates NK cells through interactions with FcγRIII and facilitates ADCC of tumor cells. Our preliminary findings from a Phase I open-label, dose-escalation trial that enrolled 13 patients with recurrent or metastatic SCCHN show that patient NK cells become more responsive to stimulation by NKG2D or FcγRIII following VTX-2337 treatment. Together, these results indicate that TLR8 stimulation and inflammasome activation by VTX-2337 can complement FcγRIII engagement and may augment clinical responses in SCCHN patients treated with cetuximab.

Trial Registration: ClinicalTrials.gov NCT01334177

Introduction

Natural killer (NK) cells play an important, well-documented role in cancer immune surveillance and form a bridge between innate and adaptive immune responses. Activating receptors, such as NKG2D expressed by NK cells, recognize stress-induced ligands on virally infected and malignant cells. Alternatively, NK cell recognition of antibody-coated tumor cells through surface FcγRIII/CD16 provides a potent activation signal leading to antibody-dependent cell-mediated cytotoxicity (ADCC) [1,2]. Both pathways of tumor cell recognition trigger NK cells to secrete cytokines such as IFNγ and to release cytolytic proteins, including perforin and granzymes, that induce tumor cell death through the activation of apoptotic cascades. ADCC is a well-established effector pathway that contributes to the therapeutic activity of monoclonal antibodies (mAbs) such as cetuximab, an epidermal growth factor receptor (EGFR)-specific mAb approved for treatment of patients with squamous cell carcinoma of the head and neck (SCCHN). VTX-2337 is a selective toll-like receptor 8 (TLR8) agonist that is more potent than either resiquimod (R848) or 3M-002 (CL075) [3] and is currently in Phase 2 clinical development in multiple oncology indications.
Treatment of peripheral blood mononuclear cells (PBMC) with VTX-2337 in vitro activates NK cells, enhances trastuzumab-, rituximab- and cetuximab-mediated ADCC, and augments tumor killing through other recognition pathways such as NKG2D [4,5]. Modulation of NK cell function by TLR8 agonists has important implications for enhancing the therapeutic activity of clinically approved mAbs. Increased ADCC by NK cells may lead to a more vigorous anti-tumor response in the short term, which can help shape tumor-directed adaptive immune responses with the potential for long-term, durable clinical responses [6]. Soluble mediators such as IL-18 are produced by activated macrophages and myeloid dendritic cells (mDC) and enhance NK cell responses invoked by other stimulatory pathways such as Fc receptors and NKG2D [7,8]. TLR ligation and downstream activation of NF-κB lead to the synthesis and subsequent accumulation of pro-IL-1β and pro-IL-18 within responsive cells. While this priming step is necessary, the release of mature IL-1 family cytokines is dependent on cleavage of the pro-cytokines by activated caspase-1, which is recruited to the NOD-like receptor pyrin domain containing 3 (NLRP3) inflammasome complex. This second activation signal has generally been linked to perturbations in normal cell physiology, or damage signals, such as uric acid crystals, extracellular ATP, or lysosomal damage, rather than to specific ligands [9,10]. Interestingly, TLR8 activation of mDC and monocytes by VTX-2337 in the absence of other activating signals leads to release of both IL-1β and IL-18 and complements the activities of other mediators induced in response to TLR8 activation [3,11-13]. In this report, we have elucidated the mechanism of coordinated TLR8 and NLRP3 activation by VTX-2337, which leads to the production and release of IL-18. We have also established that activation of this pathway is not limited to in vitro assays, but also occurs in preclinical studies conducted in cynomolgus monkeys. Additionally, we have evaluated how VTX-2337-mediated NLRP3 activation and the downstream production of IL-18 complement canonical pathways of NK activation, such as engagement of NKG2D and FcγRIII receptors. Finally, we describe increased NK cell function in SCCHN patients treated with VTX-2337 in combination with cetuximab. Our results suggest that patient NK cells become more responsive to stimulation by NKG2D or FcγRIII following VTX-2337 treatment.

Role of funders: VentiRx Pharmaceuticals employees Gregory N. Dietsch and Robert M. Hershberg played an active role in the study designs described in this manuscript, were involved in data collection and analysis of study data, the decision to publish, and the preparation of this manuscript. The specific roles of these authors (GD, RH) are articulated in the "author contributions" section.

Analysis of IL-1β secretion and caspase-1 activation in THP-1 cells

Both wild-type THP-1 and NLRP3-defective (NLRP3def) THP-1 cells were purchased from InvivoGen (San Diego, CA; catalog numbers thp-null and thp-dnlp). THP-1 cells or NLRP3def THP-1 cells (180,000 cells/well) were seeded in a 96-well plate, differentiated with 12-O-tetradecanoylphorbol-13-acetate (TPA), and treated with VTX-2337 (3 nM to 10 μM) overnight to assess IL-1β secretion. Levels of IL-1β in the culture supernatant were analyzed using an ELISA kit from eBiosciences. In some studies, HEK-Blue™ IL-1β cells (InvivoGen) were used to assess levels of IL-1β secreted into the media by THP-1 cells.
The production of secreted embryonic alkaline phosphatase (SEAP) by HEK-Blue™ IL-1β cells in response to IL-1 was quantified using the QUANTI-Blue detection media and reported as OD650.

Western blot analysis of IL-1β, pro-IL-1β, and caspase-1

Protein levels of IL-1β, pro-IL-1β, and caspase-1 in both culture supernatant and whole cell lysate were evaluated by Western blot. After washing, the cells were re-suspended in serum-free RPMI with or without VTX-2337 (3 or 10 μM). Lysates of THP-1 cells were prepared using RIPA lysis buffer (0.15 M NaCl, 1% NP-40, 0.1% SDS, 50 mM Tris pH 8.0) with protease inhibitors. Proteins in culture supernatant from control or VTX-2337-treated THP-1 cells were precipitated using acetone. Proteins were separated on a 12% SDS-PAGE gel (Invitrogen) and transferred to a nitrocellulose membrane. The membrane was first probed with anti-IL-1β mAb (clone 3ZD, from NCI-FCRDC), anti-caspase-1 mAb (clone D57A2, from Cell Signaling Technology, Boston, MA), or anti-β-actin (clone 13E5, from Cell Signaling Technology). After washing, the membrane was incubated with horseradish peroxidase-labeled secondary antibodies, allowing the signal for IL-1β and caspase-1 to be detected by chemiluminescence (SuperSignal West Pico, Thermo Scientific, Rockford, IL). For β-actin, the membrane was incubated with a DyLight-labeled secondary antibody and the signal was detected using a LI-COR Odyssey.

In vitro studies utilizing human PBMC

Astarte Biologics (Redmond, WA) was contracted by VentiRx Pharmaceuticals to provide blood samples from healthy donors. The protocol for the collection of blood by Astarte Biologics was therefore approved by their Institutional Review Board (IRB), Schulman and Associates. The blood collection was performed following written informed consent. PBMC were isolated from the blood by Ficoll gradient centrifugation, suspended in RPMI, placed in 96-well plates (200,000 cells/well), treated with serial dilutions of VTX-2337 (0.003 μM-10 μM), and incubated overnight. The culture supernatant was analyzed for levels of IL-1β and IL-18 by ELISA. To demonstrate that caspase-1 activation is required for VTX-2337-induced IL-1β and IL-18 production, the caspase-1 inhibitor z-VAD-FMK (10 μg/ml, Invitrogen) was added to the culture medium 30 minutes before the addition of VTX-2337 (1 μM). The PBMC were incubated overnight, and the levels of IL-1β, IL-18, IFNγ and TNFα were analyzed by ELISA using kits from eBiosciences.

Real-time RT-PCR analysis of pro-IL-1β and pro-IL-18 mRNA expression

Total RNA was extracted, using an RNAqueous-4PCR kit (Ambion, Austin, TX), from THP-1 cells or PBMC (isolated from healthy donor blood obtained by Astarte Biologics, as described above) that had been treated with VTX-2337 (3 or 10 μM, 24 h). The cDNA was synthesized using the Superscript III reverse transcription kit (Invitrogen). Real-time Taqman PCR was run on an ABI 7900HT instrument using primers and probes from Applied Biosystems (Foster City, CA). The level of target gene expression was normalized to β-actin using the delta Ct method, as previously described [14].

TruCulture blood collection and whole blood culture system

Whole blood from healthy volunteers was collected using the TruCulture system (Myriad RBM, Inc., Austin, TX) as previously described [15]. As previously described, Astarte Biologics was contracted by VentiRx Pharmaceuticals to perform the collection. The blood was collected following written informed consent under a protocol approved by the IRB used by Astarte.
The TruCulture 1 syringe tubes used in this study were preloaded with culture medium only, or VTX-2337 at a final concentration of 0.3 or 1 μM and stored at -20°C, then thawed the day before blood collection. Blood was drawn into the tubes and incubated in a dry heat block incubator for 24 h. Following the incubation, cells were separated from the supernatant using the supplier plunger, then frozen at -20°C until the time of cytokine quantification using the human MAP v.1.6 inflammation panel (Myriad RBM). Flow cytometric analysis of CD107a and IFNγ expression by NK cells To evaluate NK cell activation, PBMC were cultured overnight in RPMI + 10% Human AB serum with or without VTX-2337 (0.50 μM). Some samples were further stimulated by exposing the PBMCs to either K562 tumor cells at a 5:1 ratio or to plate-bound anti-CD16 mAb for the last 5 h of incubation. Brefeldin A was included during the last 4 h of incubation to block the secretion of IFNγ, while CD107a-PE was added for 5 h. Following the culture activation, cells were stained with fluorophore-conjugated antibodies to surface markers including anti-CD3-Alexa 488, anti-CD16-PerCP-Cy5.5, and anti-CD56-APC. After subsequent fixation and permeabilization, the cells were stained with anti-IFNγ-eFluor450. Samples were analyzed using a FACS Canto II instrument, and data collected in list mode were analyzed using Flow Jo software (Ashland, OR). In vivo administration of VTX-2337 in cynomolgus monkeys Studies in cynomolgus monkeys were conducted at Charles River Laboratories (CRL), Preclinical Services, (Shrewsbury MA) in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The study was reviewed and approved by the CRL Institutional Animal Care and Use Committee, under submission number DPKW-101. Study animals were colony animals that were returned to the colony on completion of the study. The male monkeys (2.9-4.9 kg) were housed individually (cage dimensions of 0.76 m wide x 0.74 m deep x 0.81 m in height), but commingled periodically as part of the environmental enrichment program. The animals were also given fruit, vegetable, or additional supplements as a form of environmental enrichment, as well as given various cage enrichment devices. Animals were given Certified Primate Diet #2055C (Harlan Teklad), two times daily and water ad libitum. Environmental controls for the housing were set to maintain 18-26°C, a relative humidity of 30-70%, a minimum of 10 room air changes/h and a 12-h light/12 h dark cycle. While doses of VTX-2337 were well tolerated, provisions including use of anti-inflammatory agents to moderate the immune response were considered in the study design. VTX-2337 was administered as a bolus subcutaneous (SC) injection in the intrascapular area at doses of 1 and 10 mg/kg. Blood samples were collected at baseline (pre-dose), and 6, 12, 24, and 96 h post injection to monitor levels of IL-1β and IL-18 in the plasma using the human MAP v.1.6 inflammation panel (Myriad RBM). Due to the routine, non-invasive procedures for dosing and blood collection, anesthetics were not considered necessary for the study. 
Administration of VTX-2337 to patients with head and neck cancer and immune monitoring of NK cell responses in treated patients The safety and tolerability of cetuximab in combination with VTX-2337 was evaluated in a Phase 1 clinical trial in adult patients with advanced recurrent squamous cell carcinomas of the head and neck (SCCHN) (Study A103; ClinicalTrials.gov NCT01334177). The study was conducted at a single study center (University of Washington, Seattle Cancer Care Alliance, Seattle, WA, USA) from June 2011 to June 2014 and was performed in accordance with good clinical practice guidelines and the ethical principles outlined in the Declaration of Helsinki. Approval for study procedures was obtained from the institutional review board of the study site, and all subjects provided written informed consent before study enrollment. Patients who were eligible for this study were adults with advanced or recurrent SCCHN that was no longer amenable to treatment by surgery or radiation therapy or patients with distant or metastatic disease. The primary objective this study was to determine the safety, tolerability and to assess the principal toxicities of VTX-2337 when given in conjunction with cetuximab. The secondary objective was to determine the pharmacodynamic response of VTX-2337 in combination with cetuximab. The primary endpoint was to determine the maximum tolerated dose (MTD)/recommended Phase 2 dose (RP2D) and to define the toxicities of VTX-2337 in combination with cetuximab. Secondary endpoints included the analysis of biologic correlative assays. The sample size was depended upon the observed safety profile, which determined the number of patients per dose level and the number of dose escalations. Study medications (cetuximab and VTX-2337) were administered in the clinic by appropriately qualified and trained personnel. This was an open-label study with no blinding. Each patient in this dose-escalation study was assigned to a dose level of VTX-2337 at the time of study enrollment. For each cohort, cetuximab was administered using a loading dose (400 mg/m 2 IV), followed by a weekly maintenance dose (250 mg/m 2 , IV). Each cetuximab dose was administered as an IV infusion: the initial dose was infused over 2 h, subsequent doses were administered over 1 h. VTX-2337 was administered by the SC route on days 1, 8 and 15 of a 28-day treatment cycle. The first cohort received a 2.5 mg/m 2 dose of VTX-2337 following cetuximab administration; this dose was escalated in subsequent cohorts using a 3+3 design to 3.0 mg/m 2 and finally 3.5 mg/m 2 . After successful completion of Cycle 1, patients were eligible to receive subsequent treatment cycles until the criteria for study discontinuation or withdrawal were met, including disease progression, intolerable toxicity, or death. A Consort Flow Diagram for the Clinical study evaluating VTX-2337 in adults with advanced or recurrent SCCHN is provided as Fig 1. The Study Protocol, VTX-2337 Phase 1 Trial in SCCHN Protocol A103 is available as supporting information, S1 Protocol. A TREND Statement Checklist for the study is provided as supporting information, S1 Checklist. Statistical analysis Statistical analysis was performed using GraphPad Prism, version 6 for Windows software (San Diego, CA). Differences between the treatment groups were analyzed using the two-tailed unpaired Student's t test or the Wilcoxon signed-rank test for matched pair analysis when normal distribution could not be assumed. 
A value of p<0.05 was considered statistically significant. Missing data was not imputed. The datasets from experiments and studies presented in Figs 2-6 are provided as supporting information: S1 File. VTX-2337 induces NLRP3 inflammasome activation and production of IL-1β and IL-18 by THP-1 cells and PBMC Treatment of THP-1 cells with VTX-2337 induced a highly significant (p<0.001, unpaired t test), concentration-dependent increase in IL-1β and IL-18 mRNA, consistent with TLR-mediated activation of NF-ĸB (Fig 2A and 2B). Western blot analysis showed induction of both pro-IL-1β (35 kDa) in THP-1 cell lysate and the release of mature IL-1β (17 kDa) into the culture supernatant ( Fig 2C). The active form of caspase-1 (p20, MW of 20kD), which is responsible for the cleavage of pro-IL-1β, was also induced in these cells by VTX-2337. Inhibition of caspase-1 with z-VAD-FMK resulted in a marked reduction of IL-1β release from VTX-2337treated THP-1 cells (Fig 2D). To demonstrate that activation of the NLRP3 inflammasome is required for the generation of mature IL-1β and IL-18 by THP-1 cells, the VTX-2337 response was evaluated in NLRP3-deficient cells. As shown in Fig 2E and 2F, the concentration-related release of mature IL-1β and IL-18 in response to VTX-2337 (0.03-25 μM), did not occur in NLRP3 deficient THP-1 cells. To look for comparable activity in primary human cells activated with VTX-2337, the induction of IL-1β and IL-18 was assessed using PBMC from healthy donors. In the absence of known activators of NLRP3, VTX-2337 induced the secretion of both IL-1β and IL-18 in a concentration-dependent manner (Fig 3A and 3B). As additional confirmation of this response, whole blood collected from multiple healthy donors (n = 11) was activated using multiple VTX-2337 concentrations, and production of IL-1β and IL-18 was assessed using the MAP1.6 inflammation panel to complement the ELISA results described above. As shown in Fig 3C and 3D), IL-1β levels increased from a mean of 2.6 ± 0.3 ng/mL in untreated controls to 729 ± 424 ng/mL (p < 0.001, Wilcoxon test) for cells activated with 1 μM VTX-2337. There was also a significant, (p<0.01 Wilcoxon test), concentration-dependent increase in IL-18 release in response to VTX-2337 activation. Caspase-1 activation and IL-18 production drive NK cell activation by VTX-2337 Caspase-1 activation is an essential step in the processing and secretion of the mature forms of IL-1β and IL-18 [1]. To demonstrate that caspase-1 is activated in response to VTX-2337, PBMC from multiple healthy donors were pretreated with the caspase-1 inhibitor, z-VAD-FMK (10 μg/ml), prior to activation with VTX-2337 (1 μM, 24 h). Pretreatment with z-VAD-FMK reduced IL-1β and IL-18 release from VTX-2337-activated PBMCs by 85-90% ( Fig 4A and 4B). Interestingly, z-VAD-FMK also blocked the production of IFNγ, but had little effect on the TNFα response induced by VTX-2337 (Fig 4C and 4D). We have previously reported that NK cells are a major source of IFNγ in VTX-2337-stimulated PBMC [3]. The reduction in IFNγ seen with caspase-1 inhibition is consistent with VTX-2337 driving mDC and monocytes to produce IL-18 that subsequently contributes to NK cell activation. To further evaluate the activity of IL-18 on NK cells, expression of intracellular IFNγ and CD107a, markers of activation and degranulation respectively [17], were monitored by flow cytometry. 
The number of IFNγ + NK cells increased from 0.30 ± 0.24 in untreated cultures to 1.17 ± 0.91 (p<0.01, Wilcoxon test) with VTX-2337 treatment (Fig 4E). The addition of a neutralizing anti-IL-18 mAb (20 μg/ml) resulted in a partial, but significant reduction in the percentage of NK cells activated by VTX-2337 (p<0.05, Wilcoxon test). Expression of the degranulation marker CD107a by NK cells showed a similar pattern, where VTX-2337 induced a significant increase in expression that was partially blocked by the anti-IL-18 mAb (Fig 4F). These results are consistent with our previous report demonstrating that NK cells respond directly to TLR8 agonists and IL-18 release from activated accessory cells can augment their activation [3]. VTX-2337 enhances NK cell responses to K562 target cells and FcγRIII (CD16) stimulation in vitro Previous studies have shown that IL-18 provides a stimulatory signal that works cooperatively with the engagement of activating surface receptors to enhance NK cell activation. Based on the release of both IL-12 and IL-18 from TLR8-activated mDC/monocytes, we hypothesized that VTX-2337 should augment NK cell activation in response to K562 target cells that express NKG2D ligands and signaling through FcγRIII cross-linking with anti-CD16 mAbs. To test this hypothesis, PBMC were initially pretreated with VTX-2337 (0.5 μM) for 24 h. NK cells were subsequently stimulated with either K562 tumor cells to activate the NKG2D pathway or plate-bound anti-CD16 mAb to signal through FcγRIII. In the absence of any stimuli, the percentage of IFNγ + , CD107a + NK cells was < 0.1% (Fig 5A). As single activation agents, VTX-2337, K562 cells, and anti-CD16 each produced a modest degree of activation, with the percentage of IFNγ + , CD107a + NK cells increasing to 7.33%, 4.41% and 0.47%, respectively ( Fig 5A). When NK cells were pre-treated with VTX-2337, then exposed to either K562 target cells or immobilized anti-CD16 mAb, there was a much greater effect than with any single stimuli (Fig 5A). Sequential activation with VTX-2337 followed by co-culture with K562 cells increased the population of IFNγ+.CD107a + NK cells to 46.4%, while activation by VTX-2337 followed by FcγRIII stimulation with immobilized anti-CD16 mAb increased this population to 61.4%. NK cell activation, where VTX-2337 pretreatment was followed by exposure to K562 cells or immobilized anti-CD16 was expanded to assess the response in 11 healthy volunteers. As shown in Fig 5B and 5C, VTX-2337 alone resulted in significant increases in IFNγ+ (p<0.01 Wilcoxon test) and CD107a+ (p<0.001 Wilcoxon test) NK cell populations. The percentage of activated NK cells was markedly augmented when VTX-2337 pretreatment was followed by co-culture with K562 cells or immobilized anti-CD16 treatment. The VTX-2337-mediated activation of NK cell subsets, CD56 bright and CD56 dim cells, was also assessed (S1 Fig). Both NK cell subsets were activated by VTX-2337 as demonstrated by increased IFNγ and CD107a expression, and the response was enhanced by stimulation of NKG2D by K562 target cells or FcγRIII using immobilized anti-CD16 mAb. In vivo administration of VTX-2337 in cynomolgus monkeys induces circulating IL-1β and IL-18 To demonstrate that VTX-2337 drives the production of both pro-IL1β and pro-IL18, and activates the NLRP3 inflammasome in the absence of any other activating signals, cynomolgus monkeys were treated with the compound and plasma was monitored for these mediators. 
Cynomolgus monkey were chosen as they are highly responsive to VTX-2337 and predictive of the human TLR8-mediated response. This is in contrast to rodent species, where agonists optimized for activity on human TLR8 have limited activity due to sequence differences in the molecules ecodomain [18]. Monkeys received a subcutaneous injection of VTX-2337 (1 or 10 mg/kg), and plasma was collected predose, 6, 12, 24, and 96 h post-injection. For the 10 mg/kg dose, mean plasma levels of IL-1β increased from baseline levels of 0.5 pg/mL, up to 9.12 ± 2.7 ng/mL (p<0.05, t-test) at 6 h post-administration of VTX-2337 (10 mg/kg, Fig 6A). Circulating levels of IL-18 also increased from a baseline of~1 pg/mL to 68.7 ± 4.4 pg/mL (p < 0.05, t-test) at 6 h in response to the VTX-2337 treatment (10 mg/kg, Fig 6B). Levels of IL-6 were monitored (Fig 6C), as this mediator is induced in response to TLR8 activation, but the release is independent of NLRP3 inflammasome activation. In addition, plasma levels of IFNγ were assessed as a measure of NK cell activation in response to VTX-2337 treatment ( Fig 6D). As expected, this biomarker was undetectable in plasma prior to treatment, and increased to 11.1 ± 5.4 pg/mL at 6 h in monkeys administered the 10 mg/kg dose of VTX-2337. Overall, these results demonstrate that the coordinated activation of TLR8 and NLRP3 by VTX-2337 observed in vitro studies also occurs in vivo. Additionally, the relatively low levels of plasma IL-1β and IL-18 relative to IL-6 is consistent with the hypothesis that production of IL-1 family cytokines is more tightly regulated to minimize collateral damage to the host [19]. Treatment of SCCHN patients with VTX-2337 enhances NK cell activation Thirteen patients with recurrent or metastatic SCCHN were enrolled on Study A103. Cetuximab was administered in combination with 3 escalating dose levels of VTX-2337: 2.5 mg/m 2 (n = 3), 3.0 mg/m 2 (n = 6), and 3.5 mg/m 2 (n = 4). The median age was 62 years (range, 51 to 78) and the majority of patients (10 of 13; 77%) were male. The study population was generally representative of patients with recurrent or metastatic SCCHN. A Consort Flow Diagram for the study is shown in Fig 1. NK cell activation was assessed in two SCCHN patients treated with 3.0 mg/m 2 VTX-2337. Blood samples were collected prior to VTX-2337 treatment and again 24 h following VTX-2337 administration. Isolated PBMC from the pre-and post-VTX-2337 treatment blood samples were placed into culture for 4 h without additional stimulation or with ex vivo stimulation using either K562-15mb-41BBL target cells [16], or immobilized anti-CD16 mAb, as previously described. Activation of the NK cell population was monitored by increased expression of CD107a. In blood samples collected prior to VTX-2337 administration, the prevalence of CD107aexpressing CD56+ NK cells in the control cultures was low (3.8-4.1%) for both patients, as shown in Fig 7. For Patient 1, the ex vivo stimulation with either the K562 cells or immobilized anti-CD16 mAb, produced only a small increase in the percent of CD107a+ CD56+ NK cells. In contrast, ex vivo stimulation of PBMC from Patient 2 with the K562 cells increased the percent of CD107a+ CD56+ NK cells to 18.8%, while the immobilized anti-CD16 mAb increase the percentage to 35.2%. Following VTX-2337 treatment, the percent of CD107a-expressing CD56+ NK cells in the unstimulated control cultures remained low for both patients. 
However, the introduction of a second activation signal by ex vivo stimulation with either K562 cells or immobilized anti-CD16 produced a more robust response than seen in pre-VTX-2337 blood samples. NK cells from Patient 1, which were initially refractory prior to VTX-2337 administration, became responsive to both NKG2D and FcγRIII signaling following VTX-2337 treatment. For Patient 2 where the NK cells from the pre-VTX-2337 blood sample were initially responsive to NKG2D and FcγRIII activating signals; the response was further augmented by VTX-2337 treatment. Discussion VTX-2337 is a potent TLR8 agonist that is currently in Phase 2 clinical development as an immunotherapy for multiple cancer indications, including SCCHN. VTX-2337 has been shown to stimulate monocytes and mDC to produce Th1-polarizing cytokines, activate NK cells, and enhance tumor directed ADCC [3,20]. An additional-and unexpected-activity of VTX-2337 observed in the current study was the induction of high levels of mature, secreted IL-1β and IL-18 in the absence of additional activation signals. This study therefore presents a cohesive model of NK cell activation by VTX-2337, which includes NLRP3 activation and the release of IL-18 from TLR8 expressing cells. The actions of IL-18 complement other NK cell stimuli, such as NKG2D ligands and FcγRIII activation by therapeutic mAbs, which engage immune cells in ADCC. The biology of the IL-1 cytokine family is intimately linked to the activation of the TLR family by pathogen-associated molecular patterns such as lipopolysaccharide and single stranded RNA. Agonists of multiple TLRs have been shown to induce the synthesis and subsequent accumulation of pro-IL-1β and pro-IL-18 [21]. When THP-1 cells were treated with VTX-2337, there was a dose-dependent induction of both IL-1β and IL-18 mRNA (Fig 2A and 2B). However, the release of mature IL-1 family cytokines requires the cleavage of pro-cytokines by activated caspase-1, a component of the NLRP3 inflammasome. NLRP3 activation requires a second signal, generally associated with perturbations in normal cell physiology such as ATP release, uric acid crystal-induced damage, reactive oxygen species, or alterations in lysosome integrity [9][10], as opposed to binding to a specific ligand. Interestingly, VTX-2337, in the absence of any other stimulatory signals, activates NLRP3, resulting in the cleavage and release of soluble IL-1β and IL-18. This signal cascade was shown to function in both the THP-1 cell line, and in monocyte populations present in PBMC from healthy donors (Figs 2 and 3). The release of mature IL-1 family members from VTX-2337-activated THP-1 cells was dependent on NLRP3, as demonstrated by inhibition of caspase-1 with z-VAD, (Figs 2E and 2F, 4A and 4B). In contrast, the NLRP3-independent TNFα response was not impacted by caspase-1 inhibition, as shown in Fig 4D. Previous studies have shown that VTX-2337 stimulates IFNγ production from NK cells and that IL-18 is an important co-regulator of this response [3]. Experiments described in this report confirm the role of IL-18, as anti-IL-18 reduces IFNγ production and CD107a expression (Fig 4E and 4F), while caspase-1 inhibition leads to a marked reduction in IFNγ secretion induced by VTX-2337 (Fig 4C and 4D). Collectively, these results demonstrate dual TLR8 and NLRP3 activation by VTX-2337 in mDC and monocytes populations, and NK cell activation through the downstream release of IL-18. 
Mechanisms that underlie NLRP3 inflammasome activation by VTX-2337 are under investigation, but may result from its lipophilic, basic amine structure. These chemical features allow the molecule to concentrate in the low pH environment of the lysosomes. Perturbation of organelle physiology leads to cathepsin B release and activation of NLRP3 [22]. When THP-1 or PBMC cells are pre-treated with the cathepsin B inhibitor, CA-074-Me, VTX-2337 was no longer able to stimulate the release of IL-1β (data not shown). This result is consistent with VTX-2337 providing the signal for the production of pro-IL1β and pro-IL-18 proteins via TLR8, while other features of the molecule activate NLRP3 to mediate the release of mature cytokines. NLRP3 inflammasome activation by VTX-2337 is not limited to in vitro systems or the result of high concentrations that cannot be achieved in vivo. The SC administration of VTX-2337 to cynomolgus monkeys resulted in a dose-dependent increase in plasma levels of IL-1β and IL-18, as well as IL-6, a TLR8-induced cytokine that is not dependent on NLRP3 activation (Fig 6). Additionally, the administration of VTX-2337 induced a transient increase in plasma IFNγ levels, consistent with results from in vitro studies showing NK cell activation due in part to the actions of IL-18 and IL-12, as previously reported [3]. While TLR8-inducible mediators, including IL-12 and IL-18, act cooperatively on NK cells, additional signals including tumor-expressed NKG2D ligands [4], and FcγRIII activation via ADCC, intensify the response. This paradigm for NK cell activation, where cytokine priming occurs in conjunction with signaling transduced by activating cell surface molecules, was demonstrated with cells from healthy volunteers (Fig 5C and 5D). It is recognized that cancer patients may have reduced NK cell function, which compromises their capacity to recognize and destroy abnormal tumor cells [23,24]. In SCCHN, cetuximab is frequently included as "standard of care", and therapeutic responses achieved with this mAb correlate with FcγRIII activation and enhanced ADCC activity [5]. Cetuximab-activated NK cells have been shown to have additional effector activities, including the production of IFNγ [6], which can also be augmented by VTX-2337 [20]. Thus the capacity of VTX-2337 to enhance NK cell function provides a strong rationale for evaluating this immunotherapy in SCCHN patients who are receiving cetuximab. In a clinical trial of VTX-2337 in patients with SCCHN, NK cells from peripheral blood were assessed for activation markers. Prior to VTX-2337 dosing, NK cells from one patient were poorly responsive to ex vivo activation, while a second patient showed moderate activation (Fig 7). Following VTX-2337 treatment, both patients had robust NK cell responses to either the K562 target cells or to FcγRIII activation by immobilized anti-CD16. These data support the hypothesis that TLR8 activation primes NK cell responses in cancer patients. Additional activating signals, such as ligands for some NK cell receptors, or the engagement of FcγRIII during ADCC, as would occur with the administration of cetuximab to SCCHN cancer patients, complement the effects of TLR8 activation. The activation of NK cells through the actions of VTX-2337 may facilitate the development of an adaptive, tumor-directed immune response, with the potential for long-term cancer remission. 
Enhanced NK cell-dendritic cell cross-talk and subsequent priming of antigen-specific CD8 T cells have been demonstrated in vitro by Stephenson et al. [20]. TLR8 activation also induces IL-12 from mDC and IFNγ from NK cells, which work cooperatively with IL-18 release to drive Th1 cell development [25]. The production of these mediators, together with increased antigen processing and presentation by TLR8-activated accessory cells [3,26,27], sets the stage for a seamless transition from NK cell-mediated tumor killing to the development of a tumor antigen-specific cellular immune response. In summary, while previous studies have demonstrated NK cell activation by TLR agonists [28][29][30], this study elucidates a novel mechanism of action for VTX-2337, involving both the production of pro-IL-1β and pro-IL-18 and activation of NLRP3. The release of IL-18 and other TLR8-induced mediators augments NK cell activation signals in cancer patients. Preliminary results from a recently completed Phase 1 study in SCCHN patients suggest that VTX-2337 "primes" NK cell function. Enhanced NK cell-mediated lysis of tumor cells should complement other TLR8 responses, facilitating the development of a durable, tumor-specific, adaptive immune response.
2018-04-03T00:40:00.233Z
2016-02-29T00:00:00.000
{ "year": 2016, "sha1": "14799cbf0d3d95b86f3fd8795205684964ce04d0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0148764&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14799cbf0d3d95b86f3fd8795205684964ce04d0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
211230864
pes2o/s2orc
v3-fos-license
Wolbachia limits pathogen infections through induction of host innate immune responses Background Wolbachia has been reported to suppress a variety of pathogen infections in mosquitoes, but the mechanism is undefined. Two possibilities have been proposed. One is that Wolbachia activates host immune responses, and the other one is that Wolbachia competes with pathogens for limited nutrients. Methodology/Principal findings In this study, we compared host immune responses and the densities of two different strains of Wolbachia in naturally occurring parental and artificially created hybrid host genetic backgrounds. No significant difference in Wolbachia density was found between these hosts. We found that Wolbachia could activate host innate immune responses when the host genetic profile was different from that of its natural host. When these hosts were challenged with pathogenic bacteria, mosquitoes in new host-Wolbachia symbioses had a higher survival rate than in old host-Wolbachia symbioses. Conclusions/Significance The presence of Wolbachia per se does not necessarily affect pathogen infections, suggesting that a competition for limited nutrients is not the main reason for Wolbachia-mediated pathogen suppression. Instead, host immune responses are responsible for it. The elucidation of an immunity nature of PI is important to guide future practice: Wolbachia may be genetically engineered to be more immunogenic, it is desired to search and isolate more strains of Wolbachia, and test more host-Wolbachia symbioses for future applications. Our results also suggest Wolbachia-based PI may be applied to naturally Wolbachia-infected mosquito populations, and extend to the control of a broader range of mosquito-borne diseases. Introduction Mosquito-borne diseases are one of the major public health problems. With increasing globalization, urbanization and global warming, the threat of mosquito-borne diseases is growing. Traditional and emerging mosquito-borne diseases, such as malaria, dengue, West Nile fever, Japanese encephalitis, chikungunya fever and Zika, have seriously affected human health and economic development [1,2]. 
However, the lack of effective vaccines and specific drugs for mosquito-borne diseases (such as dengue), as well as the development of resistance to therapeutic drugs in some pathogens (such as malaria), has contributed to this situation. Therefore, one of the main measures for the prevention and control of mosquito-borne diseases is still mosquito control. Chemical control has been the main method in mosquito control programs. However, continuous and large-scale insecticide usage has led to the emergence and development of resistance in mosquito vectors [3], and the negative effects of insecticides on human health and the environment should not be ignored [4,5]. Recently, several biological approaches were called upon for the control of mosquito populations, including the introduction of Wolbachia [6][7][8]. The endosymbiotic bacterium Wolbachia is maternally inherited, infecting >65% of all insect species and ~28% of the surveyed mosquito species [9,10]. Wolbachia can regulate the host's reproductive processes. For example, cytoplasmic incompatibility (CI) interferes with the normal development of a zygote formed by the sperm of a Wolbachia-infected mosquito and the egg of an uninfected mosquito or of a mosquito infected with an incompatible Wolbachia strain [11]. CI provides a reproductive advantage to infected females over uninfected females, resulting in the invasion of Wolbachia into a population. Wolbachia can also inhibit pathogen infection of the host via the pathogen interference (PI) phenomenon [12]. Studies have shown that Aedes aegypti mosquitoes artificially infected with Wolbachia have increased resistance to dengue virus, Zika virus, chikungunya virus, yellow fever virus, Plasmodium gallinaceum, filaria and certain bacteria [7,[13][14][15][16]. After transient somatic infections of Wolbachia, Anopheles gambiae has a significantly reduced infection intensity of Plasmodium berghei [17]. Bian et al. established a stable Wolbachia infection in Anopheles stephensi which conferred resistance in the mosquito to Plasmodium falciparum [18]. Micieli et al. reported that Wolbachia infection of a Cx. quinquefasciatus laboratory strain increased host resistance to West Nile virus infection [19]. Currently, Wolbachia-infected Ae. aegypti mosquitoes have been released in dengue-endemic areas as a population replacement strategy. For example, in northern Australia and central Vietnam such mosquitoes were released to replace the local Wolbachia-negative Ae. aegypti population and reduce dengue virus (DENV) transmission capacity [20]. A mathematical model predicts that establishment of wMelPop-infected Ae. aegypti at high frequency in a dengue-endemic setting would result in complete abatement of DENV [21]. However, the long-term effects of artificial release of Wolbachia-infected mosquitoes remain to be assessed, such as whether the Wolbachia will still be capable of inhibiting the virus after repeated vertical transmission in the mosquitoes, whether the pathogens will gradually adapt to the Wolbachia-infected host through mutations, or whether changes in the mosquito itself can increase vectorial capacity despite the presence of Wolbachia infection. Host, Wolbachia, and virus genetic evolution could all influence the long-term success of Wolbachia programs [22]. Elucidating the mechanisms of the PI phenomenon will be of great importance in maximizing the effects of Wolbachia-based mosquito-control strategies, extending the sustainability of this method, quickly understanding and correctly solving problems that may arise in the future. 
To date, the mechanism underlying PI is still not completely understood. Currently, two major explanations have been proposed. One is that Wolbachia activates the mosquito innate immune responses, and the thus-primed immune system helps the host to fight subsequent pathogen infections [14,17,23]. The other one is that Wolbachia competes with pathogens for nutrients such as lipids [24,25]. Although existing studies suggest that the innate immune response may play a leading role in Wolbachia-induced PI in mosquitoes, we should also notice that those studies were all based on artificially or naturally Wolbachia-infected hosts, using uninfected hosts as controls. Compared with the control group, the presence of Wolbachia in the infected host may both up-regulate the host immune response and compete for nutrients. The effects of these two concomitant processes on the replication of pathogens are indistinguishable. Alternatively, a comparison between infected populations may help to elucidate the role of immune responses in PI. To that end, we chose Culex mosquitoes, in which Wolbachia is prevalent. Culex mosquitoes are an important vector of lymphatic filariasis and several viral pathogens, including West Nile virus [26]. The most prevalent Culex species in China is Cx. pipiens pallens. Our previous study [27] revealed that the bi-directional incompatibilities between naturally existing populations from different geographic locations were dependent on the presence of Wolbachia, i.e., they were Wolbachia-induced CI. For example, Nanjing (NJ) and Tangkou (TK) populations were naturally infected with bi-directionally incompatible Wolbachia. Based on the fact that Wolbachia is maternally inherited, in this study, we propose to cross preexisting host-Wolbachia symbioses obtained in Nanjing and Tangkou to create new host-Wolbachia symbioses. Comparing the transcriptomes in the old and new host-Wolbachia symbiotic combinations, in which nutrient competition is constantly present, we aim to delineate the contribution of innate immune responses to PI in Wolbachia-infected mosquitoes. Tetracycline treatment Tetracycline treatment to eliminate Wolbachia from Culex populations was carried out according to published methods [28]. Tetracycline (Amresco) at a concentration of 0.05 mg/ml was used for the treatment through both larval and pupal stages. Eggs were placed on tetracycline water solution to hatch. Surviving larvae were transferred to fresh tetracycline solution every 24 hours. A normal infusion was prepared in parallel and fed to larvae in tetracycline solution. After continuous tetracycline treatment for 6 generations, Wolbachia-negative Culex populations were established. Establishment of new host-Wolbachia symbioses To separate virgin females and males, pupae from each population were put into 15 ml tubes with water for individual emergence. Then, male and female adults were raised in 30.5×30.5×30.5 cm cages. Females 1 day post-eclosion and males 2 days post-eclosion were used in crossing experiments. Each set of crossings included combination groups of Wolbachia-negative virgin males from TK with Wolbachia-positive virgin females from NJ (NJ♀×TK tet ♂), and Wolbachia-negative virgin males from NJ with Wolbachia-positive virgin females from TK (TK♀×NJ tet ♂). Combinations of virgin males and females from the same populations served as controls (NJ♀×NJ♂ and TK♀×TK♂). Females and males placed in the same cages were given 2 days to mate. 
Females were blood-fed after mating, and the egg rafts were given 48 hours after oviposition to hatch. Females of the first filial generation of these crossings, namely NJ♀×TK tet ♂, TK♀×NJ tet ♂, NJ and TK, were collected 2 days post-eclosion for RNA extraction and sequencing. RNA sequencing and analysis Total RNA of 15 female mosquitoes of each group (NJ♀×NJ♂, NJ♀×TK tet ♂, TK♀×TK♂ and TK♀×NJ tet ♂) was extracted using TRIzol reagent (Thermo Fisher Scientific, USA) following the manufacturer's protocol. cDNA library construction and sequencing were performed according to standard procedures by Beijing Genomics Institute (BGI-Shenzhen, China) using the BGISEQ-500 platform. At least 60 Mb of clean sequencing reads were obtained for each sample. Since no genomic sequence in any database was available for Cx. pipiens pallens, Trinity [29] was used to perform de novo assembly with clean reads, and Tgicl [30] was then used to cluster the transcripts and remove redundancy, retaining Unigenes. After assembly, functional annotation of Unigenes was performed with 7 functional databases (NR, NT, GO, KOG, KEGG, SwissProt and InterPro), then all the clean reads of each sample were mapped to the Unigenes with Bowtie2 [31] software and the gene expression levels were calculated with RSEM [32]. Based on the gene expression levels, the DEGs (differentially expressed genes) between samples or groups were identified with PossionDis [33] (fold change ≥ 2.00 and FDR ≤ 0.001). The DEGs were classified based on the GO annotation results and official classification. Pathway analysis was performed to provide further information on the DEGs' biological functions. The DEGs were also subjected to KEGG pathway classification and functional enrichment. As a biological replicate of this experiment, total RNA of another 15 female mosquitoes from each group was extracted for cDNA library construction and sequencing. A total of eight libraries were sequenced and analyzed. Validation of immunity-related DEGs by real-time quantitative PCR Each total RNA template was obtained from a pool of 5 female mosquitoes and extracted as described above. We generated three biological replicates for each group. For each biological replicate, three independent total RNA templates were obtained. In total, we had 3×3 total RNA templates for each group. The cDNA was synthesized with the PrimeScript RT reagent kit (Takara, Otsu, Shiga, Japan) according to the manufacturer's protocol. PCR was performed on the LightCycler 96 Real-Time PCR System (Roche, Switzerland) using the SYBR Green Master Mix kit (Roche, Switzerland). Primers specific for real-time quantitative PCR are listed in Table 1. We amplified 23 different genes from each template. Each gene amplification was carried out in triplicate. For each reaction, 10 μl of SYBR Green Master Mix, 1.0 μl of each primer solution at 10 μM and 8 μl of diluted cDNA were added. The PCR cycling protocol was as follows: initial 50°C for 2 min, denaturation for 10 min at 95°C, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. The housekeeping gene Rps6 was used as an internal control and the data were analyzed with LightCycler 96 Software v1.1 (Roche, Switzerland). Relative mRNA expression was quantified using the 2^−ΔΔCt method [34]. Significance was determined based on comparison of the ΔCt of each gene in old and new host-Wolbachia symbioses using Student's t-tests. *P<0.05; **P<0.01. Immunity-related DEGs were further analyzed with PathVisio software 3.3.0. 
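To make the relative-quantification step above concrete, the short Python sketch below computes a 2^−ΔΔCt fold change with Rps6 as the reference gene and the original host-Wolbachia symbiosis as the calibrator, as in the Methods; all Ct values are hypothetical and serve only to illustrate the calculation.

# Illustrative 2^-ΔΔCt calculation (Livak and Schmittgen); all Ct values are hypothetical.
def fold_change(ct_gene, ct_rps6, ct_gene_cal, ct_rps6_cal):
    """Relative expression of a target gene versus the Rps6 reference,
    normalized to a calibrator sample (old host-Wolbachia symbiosis)."""
    delta_ct_sample = ct_gene - ct_rps6                # normalize sample to Rps6
    delta_ct_calibrator = ct_gene_cal - ct_rps6_cal    # normalize calibrator to Rps6
    ddct = delta_ct_sample - delta_ct_calibrator
    return 2.0 ** (-ddct)

# Hypothetical triplicate-mean Ct values for one immune gene
print(fold_change(ct_gene=24.1, ct_rps6=18.3,          # new symbiosis
                  ct_gene_cal=26.0, ct_rps6_cal=18.2))  # old symbiosis; prints ~4-fold up-regulation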
The pathway template was obtained from WikiPathways (WP3830_92694), which is based on the Toll and Imd pathways in Drosophila melanogaster. Microbial challenge and survival experiments Microbial challenge and survival experiments were performed in the same way as described in [35]. In brief, an acupuncture needle (0.20×25 mm) was dipped into a concentrated overnight bacterial culture of Gram-negative (Escherichia coli) or Gram-positive (Micrococcus luteus) bacteria or sterile LB culture (negative control) and pricked mosquitoes (females 2 days post-eclosion) in the rear part of the abdomen. For each mosquito population, three parallel groups, each consisting of 15-20 adult females, were inoculated per bacterial species [36]. A total of three biological replicates of the infection experiment were performed. Survival curves were significantly different between mosquitoes in old and new host-Wolbachia symbioses (compared using the log-rank test). Statistical analysis All statistical analyses were carried out using SPSS Statistics 17.0. Data accessibility The data supporting the results of this article have been submitted to the NCBI Sequence Read Archive (SRA) repository (Accession number: SRP155507). The materials and methods have been submitted to protocols.io (DOI: http://dx.doi.org/10.17504/protocols.io.xcafise). Results Volcano plots of DEGs between the old and new host-Wolbachia symbioses are shown in Fig 2A and 2B. The intersection and union of the DEG heat map for the original and new host-Wolbachia symbioses are shown in Fig 2C and 2D. The identified DEGs were then assigned to the three standard subcategories of "molecular biological function", "cellular component" and "biological process" in GO enrichment analysis (Fig 2E and 2F). In parallel, the unigenes were mapped onto the canonical pathways in KEGG to identify possible active biological pathways that contain DEGs. The twenty most significant DEGs in new vs. old host-Wolbachia symbioses are shown in Fig 3. RNA-seq data analysis of a biological replicate is presented in supplementary materials and shown in S1 and S2 Figs. Innate immune responses are elevated in hosts with hybrid genetic profiles Based on the transcriptome assays, we compared mosquito innate immune responses in the original and new host-Wolbachia symbioses. As shown in Fig 3 and S1 Table, genes in the Toll and immune deficiency (Imd) signaling pathways were up-regulated in both TK♀×NJtet♂ (compared to TK) and NJ♀×TKtet♂ (compared to NJ) groups. The differential activation of immune responses to the same Wolbachia in hosts of different genetic profiles was confirmed by real-time PCR quantification of genes in the Toll and Imd pathways (Fig 4). Microbial challenge and survival experiments Toll and Imd pathways are expected to protect mosquitoes from Gram-positive and Gram-negative bacterial infections, respectively. Our results showed that both Toll and Imd pathways were up-regulated in new host-Wolbachia symbioses. To test if the up-regulation of these pathways can help mosquitoes to fight pathogen infections, we challenged mosquitoes in old and new host-Wolbachia symbioses with Gram-negative bacteria (Escherichia coli) and Gram-positive bacteria (Micrococcus luteus). Results showed that mosquitoes in new host-Wolbachia symbioses had a higher survival rate than in old host-Wolbachia symbioses when challenged with either E. coli (P<0.05, for NJ♀ and NJ♀×TKtet♂ groups: chi square = 4.685, df = 1, P = 0.0304, for TK♀ and TK♀×NJ tet ♂ groups: chi square = 4.395, df = 1, P = 0.0298) or M. 
luteus (P<0.05, for NJ♀ and NJ♀×TKtet♂ groups: chi square = 4.565, df = 1, P = 0.0326, for TK♀ and TK♀×NJ tet ♂ groups: chi square = 5.730, df = 1, P = 0.0167) (Fig 6). Discussion Population replacement aimed at Wolbachia-mediated PI is moving from benchtop to the field. Elucidation of PI mechanism may help to augment the efficacy of Wolbachia-based vector control, prolong its usage, and expedite comprehension of and solutions to unexpected problems in future practice. Insects have established a highly efficient innate immune system to distinguish between self and non-self molecules and resist infections. Host innate immune system recognizes pathogen-associated molecular patterns (PAMPs) via pattern-recognition receptors (PRRs) and initiates a cascade of responses [37]. PRR signaling is thought to be critical for the host to fight pathogens [38]. Studies in Drosophila melanogaster have shown that two main PRRs, Toll and Imd, are involved in arthropod immune responses. Gram-positive bacteria trigger the Toll pathway, fungi and Gram-negative bacteria trigger the Imd pathway, mediating innate immune responses and resulting in the production of antimicrobial peptides (AMPs) [39]. In mosquitoes, PI has been most thoroughly characterized in Aedes aegypti. Xi et al. propose that Wolbachia infection activates the innate immune response of Ae. aegypti by up-regulating the level of Toll pathway genes and the expression of antimicrobial peptides such as defensins, which enables mosquitoes to resist DENV. They found that when the Toll pathway inhibitor cactus gene was silenced, the extent of dengue infection in mosquitoes was reduced by 4.0-fold. When the Toll pathway was inactivated by silencing myd88, the virus load in mosquitoes increased 2.7 times compared to the control group [14,23]. They also found that the elevation of reactive oxygen species (ROS) was a result of Wolbachia infection and was involved in the activation of the Toll pathway. Toll activation leads to the expression of antioxidants to alleviate oxidative stress and, as a "side-effect", increases antimicrobial peptide production resulting in an enhanced resistance to pathogen infections [40]. Kambris et al. observed up-regulated immune genes in Anopheles gambiae somatically infected with Wolbachia and highly significant reductions in Plasmodium infection intensity. This effect was diminished after knockdown of TEP1 gene [17]. A different explanation of PI is that both Wolbachia and viruses such as DENV are heavily dependent on host lipids and other resources for survival, and a potential competitive effect could contribute to PI [24,25]. Schultz et al. found that infection with Wolbachia inhibited the replication of ZIKV in mosquito cell lines, and increased supply of cholesterol moderately restored the replication of ZIKV [41]. However, there lacks reported research that extends this finding to adult mosquitoes. While these previous studies on PI mechanism provided insightful information, they fell short of pinpointing the causes. In these studies, at least two coexistent factors were confounding each other, i.e., induction of innate immunity and competition for nutrients could both be effected by the presence of Wolbachia that was artificially introduced. It was difficult to rule out one of the two plausible explanations. 
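The survival comparisons reported above (log-rank test, chi-square with one degree of freedom) can be reproduced in outline with the sketch below. The survival times, the 10-day censoring window, and the use of the Python lifelines package are assumptions made purely for illustration; the study itself reports only the test statistics and P values.

import numpy as np
from lifelines.statistics import logrank_test  # assumed analysis package, not named by the authors

rng = np.random.default_rng(0)
# Hypothetical days-to-death for two groups of challenged females
old_raw = rng.exponential(scale=4.0, size=50)   # old host-Wolbachia symbiosis (shorter survival)
new_raw = rng.exponential(scale=6.5, size=50)   # new host-Wolbachia symbiosis (longer survival)
follow_up = 10.0                                # assumed observation window in days
old_days = np.minimum(old_raw, follow_up)
new_days = np.minimum(new_raw, follow_up)
old_events = (old_raw <= follow_up).astype(int)  # 1 = death observed, 0 = censored
new_events = (new_raw <= follow_up).astype(int)

result = logrank_test(old_days, new_days,
                      event_observed_A=old_events, event_observed_B=new_events)
print(f"chi square = {result.test_statistic:.3f}, df = 1, P = {result.p_value:.4f}")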
In this study, we used preexisting host-Wolbachia symbioses (NJ Wolbachia-NJ mosquito & TK Wolbachia-TK mosquito) obtained in Nanjing and Tangkou to create mosquito populations representing new host-Wolbachia symbioses NJ♀×TK tet ♂ and TK♀×NJ tet ♂. In the new and original mosquito populations, Wolbachia was always existent, so that nutrient competition was constantly present. Our results showed that Wolbachia densities in the new mosquito populations did not change significantly. Thus, comparing the new and old host-Wolbachia symbioses, we can exclude nutrient competition factor and focus on the contribution of innate immunity. To find out if host immune system was activated by Wolbachia in altered host genetic background, we compared the transcriptomes in the old and new mosquito populations. Our results showed that both genes in Toll and those in Imd signaling pathways were up-regulated in new host-Wolbachia symbioses, indicating that Wolbachia may induce stronger immune responses in a new host than in the original host. As initially reported in D. melanogaster, Toll does not directly recognize PAMPs in insects. Instead, PAMPs are detected by PGRPs (peptidoglycan-recognition proteins) and GNBPs (Gram negative-binding proteins) which activate proteolytic enzymes, leading to the cleavage thus activation of cytokine Spaetzle. Spaetzle binding crosslinks the ectodomains of Toll, and activates Toll receptor. Through the adaptor proteins MYD88, Tube and Pelle, Toll can then activate NF-kB protein DIF in immune-responsive tissues by dissociating DIF from the ankyrin-repeat inhibitory protein Cactus, leading to the production of AMPs [42]. Our transcriptome results showed that cecropin B was up-regulated in both TK♀×NJ tet ♂ and NJ♀×TKtet♂, while cecropin A and defensin A were up-regulated in TK♀×NJ tet ♂ but downregulated in NJ♀×TKtet♂ (S1 Table). When further tested with real-time RT-PCR, only defensin A was consistently up-regulated in TK♀×NJ tet ♂ and down-regulated in NJ♀×TKtet♂, both cecropin A and B were up-regulated in new host-Wolbachia symbioses (Fig 4). Although posttranslational regulations (e.g. nuclear translocation) of upstream factors may be sufficient to induce the transcription of AMPs, both transcriptome and real-time RT-PCR results showed that GNBPB3, PSH, GRASS, SPIRIT, SPZ1B and DIF were all up-regulated in the new host-Wolbachia symbioses. Toll pathway inhibitor CACTUS was down-regulated in the new host-Wolbachia symbioses. Imd pathway can be triggered by ligand binding to PGRP-LC [43]. The activation signal is transduced through intracellular adaptor IMD protein into two downstream branches. One branch has TAK1 acting as the downstream factor of Imd/FADD, which in turn activates IKKβ and IKK-γ homologues and directs phosphorylation of NF-kB transcription factor Relish. Activated Relish then translocates to the nucleus and promotes the transcription of AMPs [44]. The other branch activates the transcription factor AP-1 via JNK signaling [45,46]. As some factors in Toll pathway, RELISH in Imd pathway was up-regulated in the new host-Wolbachia symbioses. As a result of Toll and Imd pathway activation, cecropin A and B were consistently up-regulated as the host was replaced with a different genetic background. Cecropin A and B up-regulation are correlated with improved protection against challenge infections of bacteria. In contrast, defensin A was only up-regulated in TK♀×NJtet♂ and not in NJ♀×TKtet♂. 
One possible explanation is that an interplay between Toll and Imd pathways with participation of other factors results in the change in defensin A expression. Different Wolbachia may activate these pathways differentially and the balance between them determines if defensin A is up-regulated or down-regulated. It is also possible that different strains of Wolbachia have different sensitivities to defensin A, and those strains such as the one from Nanjing may have evolved more effective means to selectively down-regulate defensin A to assure their survival. This would be consistent with previous findings that not all strains of Wolbachia are equally susceptible to host immune responses [47]. Our results also showed, unlike cecropin A, defensin A was not correlated with protection against bacterial infections. These results are consistent with previous studies. For example, in a report by Pan et al., Ae. Aegypti infected with wAlbB showed defensin A up-regulation in the midgut but down-regulation in the rest of carcass [48]. In this study, the up-regulation of both Toll and Imd signaling pathways was not sufficient to significantly reduce the density of Wolbachia observed in the new hosts. It is unknown if this reflects a lack of enough genetic differences between the mosquitoes and between the Wolbachia strains. It is also unknown if an elevated overall immune response is able to suppress Wolbachia activity without altering its density. Nevertheless, the presence of Wolbachia helps to maintain the nonsterilizing immunity. Because the downstream effectors are not target-specific, the activated immune responses can also affect some pathogens. This has been tested in our challenge bacterial infections. When artificially infected with Gram-positive and Gramnegative bacteria, mosquitoes in new host-Wolbachia symbioses have significantly higher survival rates than the mosquitoes in original host-Wolbachia symbioses. Whether a similar effect can be observed in viral infections remains to be answered. There have been a number of reports on the contribution of innate immunity to the blocking of viral replications in insects [23,40,49]. Xi et al. reported that Toll pathway in Aedes aegypti controls dengue infection [23]. In a Drosophila model, Rancès et al. demonstrated that Toll pathway has an inhibitory effect on dengue in the presence or absence of Wolbachia, although neither Toll nor Imd pathway is necessary for Wolbachia-induced inhibition [47]. Because a host deficient in both Toll and Imd has not been tested, and other pathways such as JAK-STAT have been reported to suppress dengue replication, it remains possible that at least one of the Toll and Imd pathways has to be in place in order for Wolbachia to inhibit the viral replication [50]. In our study, both Toll and Imd were up-regulated by Wolbachia in new host genetic backgrounds. Whether these up-regulations will result in enhanced resistance to viral infections warrants future investigation. In our study, Wolbachia was constantly present, so a competition for nutrients was also constitutive. In addition, Wolbachia densities in the original and new hosts were comparable, so the levels of nutrient deprivation would be comparable. It was unlikely that nutrient competition caused the difference in inhibition of pathogen proliferation and improvement of host survival. 
Instead, the elevated immune responses, likely induced by a "mismatch" between host and Wolbachia hence stronger antigen recognition, were responsible for the protection against subsequent infections. An immunity-mediated PI can also better explain the fact that naturally Wolbachia-infected insects retain their vectorial capacity. For example, Aedes albopictus is naturally infected with Wolbachia, but it can still transmit a variety of pathogens including dengue. In these hosts, native Wolbachia may have been recognized as self as a result of co-evolution. After all, immune responses induced by Wolbachia cause stress in the host and may deem undesirable in the absence of more pathogenic infections. An alternative explanation for natural Wolbachia infection not inducing PI is a reduced density and a more restricted tropism in the native hosts such as Aedes fluviatilis [7]. While the observed difference in resistance to bacteria is most likely caused by immunity, a contribution from nutritional factors to PI cannot be ruled out. It is possible that nutrient competition results in certain level of inhibition in all the mosquito hosts, and immune responses provide a further enhancement in those new hosts. In Drosophila melanogaster, Wolbachia has been found to cause virus interference without inducing overt up-regulation of immunity [51,52]. At least for viral infections, Wolbachia can assert inhibition by depriving the host cells of essential nutrients. By comparing mosquitoes that are all infected with Wolbachia, our study demonstrates the contribution of host innate immunity to PI phenomenon. Similar studies may be carried out using other genera of mosquitoes that are medically more important, such as Anopheles and Aedes. The elucidation of an immunity nature of PI is important to guide future practice. For example, Wolbachia may be genetically engineered to be more immunogenic. In current vector population replacement measures, it is difficult to predict how long the released insects will remain refractory to pathogen infections. In the event that these insects do acquire increased vectorial capacity, a possible solution may be to re-introduce a new strain of Wolbachia. Perhaps it is desired to search and isolate more strains of Wolbachia, and test more host-Wolbachia symbioses for future applications. Our results also suggest Wolbachia-based PI may be applied to naturally Wolbachia-infected mosquito populations, and extend to the control of a broader range of mosquito-borne diseases. A competition for nutrient may still be effected by Wolbachia, but this does not negate the potential of immunity-based strategies. Future practice may even forego the use of Wolbachia and focus on the introduction of non-self antigens into the genome of vector insects using transgenic techniques. Potential advantages of transgenic modification of host genome may include less technical difficulty and increased stability. For some insects, a stable Wolbachia infection may be difficult to achieve, such as in Anopheles gambiae. An immunogen-expressing transgene in vector genome may also be more stable since it is not subject to elimination due to chemical exposure. Accession numbers The GenBank (http://www.ncbi.nlm.nih.gov/Genbank) accession numbers for sequences mentioned in the paper are: RPS6(XM_001848257. Supporting information S1 Fig. Volcano plot, heatmap of hierarchical clustering and GO classification of DEGs. 
As a biological replicate, total RNA of another 15 female mosquitoes of each group was extracted. cDNA library construction and sequencing were performed as the first time. At least 60 Mb clean reads of sequencing were obtained for each sample.35,236 (NJ♀×TKtet♂), 34,965 (TK♀×NJ tet♂), 34,845 (NJ♀×NJ♂) and 34,708 (TK♀×TK♂) unigenes were generated. A total of 28,476 unigenes were annotated against the NCBI NR protein database, 16,973 in GO function categories, and 21,332 unigenes were mapped onto the canonical pathways in KEGG. (A and B) Volcano plot of DEGs. The unigenes up-or down-regulated more than two-fold when compared between old and new host-Wolbachia symbioses are displayed in red or blue, respectively. Y axis represents -log10 transformed significance. X axis represents log2 transformed fold change. Red points represent up-regulated DEGs. Blue points represent downregulated DEGs. Gray points represent non-DEGs. TK♀×NJtet♂ had 5,742 up-regulated unigenes and 4,143 down-regulated unigenes in comparison to the control TK♀×TK♂, and NJ♀×TKtet♂ had 4,226 up-regulated unigenes and 4,122 down-regulated unigenes in comparison to the control NJ♀×NJ♂. (C and D) The intersection and union of the DEG heat map for the original and new host-Wolbachia symbiosis. X axis represents comparison for clustering analysis. Coloring indicates fold change (high: red, low: blue). (E and F) The identified DEGs were then assigned to the three standard subcategories of "molecular biological function", "cellular component" and "biological process" in GO enrichment analysis. X axis represents number of DEG. Y axis represents GO term. (TIF) S2 Fig. Up-regulation of Toll and IMD pathway genes in new host-Wolbachia symbioses. In parallel, the unigenes were mapped onto the canonical pathways in KEGG to identify possible active biological pathways of DEGs in biological replicate of RNA sequencing experiment. Twenty most significant DEGs in new vs. old host-Wolbachia symbiosis are shown here. X axis represents enrichment factor. Y axis represents pathway name. The color indicates q value (high: white, low: blue), a lower q value indicates a more significant enrichment. Point size indicates DEG number (A bigger dot refers to a larger amount). Rich Factor refers to the value of enrichment factor, which is the quotient of foreground value (the number of DEGs) and background value (total Gene amount). A larger Rich Factor value indicates a higher level of enrichment. (TIF) S1 Table. Up-regulated Toll and Imd signaling pathway genes in new host-Wolbachia symbioses. (XLSX)
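As a concrete restatement of the thresholds behind the volcano plots described above (two-fold change, FDR ≤ 0.001), the following sketch classifies a unigene as up-regulated, down-regulated, or non-DEG; the input values are hypothetical, and the real fold changes came from the RSEM/PossionDis pipeline in the Methods.

import math

def classify_unigene(log2_fold_change, fdr, fc_cutoff=2.0, fdr_cutoff=0.001):
    """Return 'up' (red), 'down' (blue) or 'non-DEG' (gray), as in the volcano plots."""
    if fdr <= fdr_cutoff and log2_fold_change >= math.log2(fc_cutoff):
        return "up"
    if fdr <= fdr_cutoff and log2_fold_change <= -math.log2(fc_cutoff):
        return "down"
    return "non-DEG"

print(classify_unigene(log2_fold_change=2.4, fdr=1e-5))    # 'up'
print(classify_unigene(log2_fold_change=-0.4, fdr=1e-5))   # 'non-DEG' (below the two-fold cutoff)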
2020-02-22T14:03:52.457Z
2020-02-20T00:00:00.000
{ "year": 2020, "sha1": "b38847f8ed521847a116e301049d2eb0a367ac44", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0226736&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed2da052bad5b307612c04ea5de7a0e50e8f4570", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
211260426
pes2o/s2orc
v3-fos-license
Clinical profile of early-onset dementia from a geriatric clinic in South India Background: Early-onset dementia (EOD), defined as dementia with clinical onset before the age of 65 years, has an estimated proportion ranging up to 45.3%. Although EOD leads to severe psychosocial consequences that affect people in their latter part of working age, the literature from India is limited. Objective: The aim of this study is to investigate the profile of patients with EOD attending the Geriatric Clinic and Services, National Institute of Mental Health and Neurosciences, Bengaluru, Karnataka, India. Materials and Methodology: All records of patients attending the Geriatric Clinic and Services who were diagnosed with EOD between January 2017 and June 2018 were examined for details pertaining to sociodemographic and clinical characteristics, risk factors, and behavioral problems. Results: Of the 320 patients with cognitive complaints seen during the period of 18 months, 108 (33.75%) patients had a diagnosis of EOD. The mean age at onset of illness was 55.38 (standard deviation = 6.53) years (range: 34-65 years). Of these, 58 (53.6%) patients were found to have Alzheimer's dementia (AD), 31 (28.7%) had fronto-temporal dementia (FTD), 6 (5.5%) had vascular dementia (VaD), 3 (2.7%) patients had Parkinson's disease-related dementia, and 6 (5.5%) had unspecified dementia. Discussion: During the 18 months, the EOD patients constituted one-third of all dementia patients visiting the Geriatric Clinic. Degenerative etiology was the main diagnostic cluster. The most common type was AD, similar to the senile type of dementia, followed by FTD and VaD. The study showed a delay of 3.18 years in seeking consultation. Conclusion: EOD seems to have a predominantly degenerative etiology with a higher burden of associated behavioral and psychological symptoms. There is a need for setting up specialized memory clinics. INTRODUCTION Dementia is a neuropsychiatric condition with progressive cognitive decline leading to dependency for basic activities of daily living (ADL). [1] Increases in life expectancy and medical advances have led to demographic changes in India, with a considerable increase in the elderly population. As per the Dementia India report, 2010, there are 3.7 million dementia patients and this number is expected to increase exponentially in the near future, leading to a major public health problem. [2] Dementia is considered a condition usually affecting patients above the age of 65 years, with risk doubling every 5 years. [3] Although less common, the dementing process can start before 65 years and, rarely, it can start even below the age of 45 years. The dementing illness in people with onset before the age of 65 years is referred to as early-onset dementia (EOD). [4] Various studies utilized different age cutoffs to study presenile dementia. Presenile dementia is a term which was used to define the same condition, but it is no longer in use. [5] Young-onset dementia is another term which is used synonymously with EOD. [6] There has also been subclassification of presenile dementia into EOD (45-65 years) and YOD (17-45 years) based on estimated age at onset. [7] The proportion of EOD from clinic-based samples across regions ranges from 7.3% to 44%. [8,9] A study from the eastern part of India by Nandi et al. reported that one-fourth of dementia cases seen in a cognitive specialty clinic had early onset. 
[10] Another study from a memory clinic in south India reported that EOD constituted 49.9% of all dementia cases. [11] EOD unlike late-onset dementia is heterogeneous condition with diverse etiology. Degenerative conditions still constitutes the major proportion in terms of etiology in EOD. Other conditions such as infectious diseases, autoimmune conditions, recurrent seizures, head trauma, substance use, and vitamin deficiency contribute to a smaller proportion. Among the etiological conditions, similar to late onset, Alzheimer's dementia (AD) is the most common phenotype in EOD patients. EODs are unique in many aspects compared to late-onset dementia. EOD's neurobiology provides insights into the underlying etiology, risk factors, and genetic factors about late-onset dementia. It also provides opportunity to test new therapeutic agents as EOD often presents as pure phenotypes. From psychosocial perspective, EOD has significant impact as it affects working age group. EOD leads to the loss of earning member, financial crisis, caregiver burden, and medical costs. Another pertinent thing in people with EOD, there is often delay in diagnosis or misdiagnosis as another condition. This might be due to atypical presentation [12] or prodrome of psychiatric features preceding the cognitive decline. [13] Despite its importance there is dearth of literature on EOD, especially from India. Hence, we took on this study with an objective to look into the prevalence, sociodemographic features, clinical profile, and the management of patients with EOD presenting to our clinic. MATERIALS AND METHODOLOGY The current study was conducted in the National Institute of Mental Health and Neurosciences (NIMHANS), Bengaluru, Karnataka, India. The design adopted for the study was retrospective and had approval from the Institutional Ethics Committee. The sample for the study was patients seeking treatment from Geriatric Clinic and Services, NIMHANS with the diagnosis of dementia as per the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM 5) diagnostic criteria and who had onset before the age of 65 years. The time period for the study was from January 2017 to June 2018. The medical records of all these patients were reviewed pertaining to their sociodemographic details, clinical features, phenotypes, behavioral problems, and pharmacological treatments. As part of evaluation, these patients had been initially evaluated by a senior resident undergoing training in geriatric psychiatry and the final diagnosis was made after discussion with senior consultant specialized in geriatric psychiatry. The neuropsychiatry evaluation in our clinic involves thorough history collection, physical examination, and reviewing relevant investigations. The assessments done for each patient include Hindi Mental State Examination (HMSE), [14] ADL were assessed with Everyday Abilities Scales India (EASI), [15] Geriatric Depression scale, [16] Neuropsychiatry Inventory (NPI), [17] and Clinical dementia rating scale [18] The methodology of patient evaluation in our clinic was as described in our earlier report by Bharath et al. [19] The age of most patients was confirmed through the valid Government-issued identity certificate/cards with information on date of birth. In minority of cases where there was discrepancy, age estimation was done through collateral information, using personal life events and historical dates. This was a valid and simple way of age estimation in India. 
[20] The exclusion criteria included patients with cognitive deficits primarily due to traumatic brain injury and referred cases with cognitive impairment secondary for psychiatric evaluation. Data collected were analyzed using descriptive statistics such as mean, standard deviation (SD), median, frequency, and percentages to describe the data distribution. RESULTS Of the total 320 patients with cognitive concerns seen in the past 18 months (2017-2018) in the facility, 108 (33.7%) patients had a diagnosis of EOD. The mean age of the patients at presentation was 58.54 (SD = 6.54) years (range: 36-68). The mean age of onset of dementia in our sample was 55.38 (SD = 6.53). There were 58 men and 50 women. The average number of years of education was 9.87 (SD = 5.42) years. Among these patients, 100 were married and 8 were either single/separated/widowed. The mean duration of illness prior to consultation was 3.18 (SD = 2.35) years. The mean HMSE score of the patients was 14.99 (SD = 8.77) (median: 8.77, range: 0-30) and the mean EASI score was 7.13 (SD = 3.34) (median: 7.0, range: 0-12). There was a positive family history of dementing illness in 23.1% of the sample. Vascular risk factors such as hypertension and diabetes mellitus were present in 35.2%. Vitamin B12 deficiency and hypothyroidism, which are potentially known to cause cognitive deficits, were found in 9.3% and 10.2%, respectively. However, these patients were subsequently treated and none reverted back to normal. These patients were finally diagnosed to have degenerative dementing conditions [Table 1]. We found that 69 (63.9%) of these EOD patients were brought for the first comprehensive evaluation (no prior consultation with any doctor for cognitive symptoms). Among the remaining EOD patients, 39 (36.1%) were either referred or came for a second opinion. There were a few predominant reasons for consultation found in the study. Thirteen EOD patients were brought predominantly for cognitive symptoms, on enquiring about the reason for consultation/what precipitated the consultation. In the sample, 16 (14.8%) EOD patients required either supervision or prompting for ADL and 43 (40.9%) required complete assistance in ADL. DISCUSSION In our study, EOD formed nearly 34% of the persons attending with complaints of cognitive impairment in the 18-month period, which is a substantive number. The current geriatric psychiatry unit takes referrals of any patient above 60 years of age with mental health problems or cognitive decline, and of patients below 60 years of age if the complaints are predominantly related to cognitive decline. Literature on dementia in people below 65 years of age is sparse compared to late-onset dementia. Population-based studies conducted in the US, Japan, and Italy estimated EOD prevalence as 42.3-98.1/100,000 persons. [21][22][23] The proportion of EOD in specialized memory clinics ranged from 7.3% in Japan to 44% in Greece. [24,25] A study done from the eastern part of India by Nandi et al. reported that 24.5% of dementia patients in their clinic were below 65 years of age. [10] The proportion of EOD in our study sample is 33.7%, which falls within the range of previous studies, as shown in Table 5. [10,[24][25][26][27][28][29] The mean age of patients at presentation in our study was 58.54 years, which is a little higher than previous study reports (51.5-56.5 years) [24,26,28] but comparable to the study by Nandi et al. (56.5 years). 
[10] This could be due to the setting of our study, which is a geriatric psychiatry clinic predominantly catering to patients older than 60 years. As shown in Table 1, the prevalence of EOD increases with increasing age, in keeping with previous studies. [11,13,14] Male preponderance was seen in our study (M:F = 1.16), similar to reports with gender ratios ranging from 1.12 to 1.9. [11,22] The mean years of education in our sample was lower (8.87 years) than that reported by Nandi et al. [10] (11 years), indicating lower cognitive reserve in our sample. It also leads to higher impairment and more challenges during cognitive assessment. The most common phenotype in our study was AD followed by FTD. This was similar to the study by Alladi et al., with AD as the most common phenotype, but the second most common phenotype was FTD. [11] The mean duration of illness before presentation was comparable to previous studies. [10] The mean HMSE score of 14.99 indicates that the majority performed below the 10th percentile cutoff of 19 at presentation. A mean EASI score of 7 out of a possible 12 indicates significant disability at presentation, reflecting the degree of care required for these patients. Nearly a quarter of our patients had a positive family history, which was higher than the previous report of 17% from the Nandi et al. study. [10] Preventable and modifiable risk factors were present in 54.7%, indicating the need for more aggressive screening for vascular, endocrine, and dietary risk factors. The investigative work-up for people with early-onset cognitive impairment includes blood investigation (complete blood counts, electrolytes, thyroid function tests, Vitamin B12 levels, ESR, liver function, and renal function tests), STD profile, CSF studies, autoimmune profile, brain imaging, and electroencephalogram. [30] This was evident from a larger study from Sweden, which found that the majority of patients with EOD underwent extensive diagnostic evaluation and multiple specialist consultations compared to late-onset dementia. [31] In our clinic, we recommend most of these investigations for people with EOD, but depending on their affordability, patient families consent for only important investigations. As clinicians, we help patient families in choosing the most relevant investigations depending on the history and differential diagnosis. Various studies reported on different etiologies in EOD. These range from neurodegenerative, infective, metabolic, and endocrine to traumatic causes. The proportion of AD ranged from 17% to 38.8% in EOD samples. [11,[16][17][18]20] In contrast, we found a much higher AD prevalence, accounting for 53.6% in our sample. FTD accounted for 28.7%, which is similar to 27% in the Nandi et al. study [10] from India and the Papageorgiou et al. study [25] from Greece but much higher than studies from USA, UK, Japan, and France (3%-15.9%). [24,26,28] The variable prevalence of FTD could be explained by study setting, source of sample, and geographical region. FTD patients, owing to behavioral symptoms at onset, are more likely to visit psychiatric clinics. The proportion of VaD (5.6%) and DLBD and PD dementia (2.7%) is remarkably low. This could be because most of the VaD and DLBD patients were either seen by or referred to a neurologist in our facility. Mixed and unspecified etiology accounted for <10% in our study, indicating that in most cases efforts were made to reach a consensus diagnosis. More than half (51.9%) of the patients presented beyond the mild stage, which indicates a lack of awareness and possible lack of access to health care. 
Around two-thirds of the patients (63.9%) presented for the first evaluation to our tertiary care center, indicating the lack of geriatric/cognitive disorder specialists in the peripheral setting. BPSD, the usual reason for a psychiatric consultation, were present in 81.4% of our sample. There are very few data on BPSD in EOD from previous studies. Reports on BPSD from Indian studies on late-onset dementia vary based on the type of dementia studied, setting, and the assessment method. As per the 10/66 study, 70% have BPSD, with depression being the most common disorder. [32] A comparison between late-onset AD and early-onset AD in terms of neuropsychiatric symptoms by Ferreira et al. found no difference. [33] In our study, depression is the second-most common BPSD after sleep disturbances. Apathy, aggression, agitation, and psychotic symptoms account for 16%-18% each, indicating the need for psychiatric evaluation and management. This was also reflected in the psychotropic usage, with nearly half the patients receiving either an antipsychotic or an antidepressant or both. These values might not be comparable to community-based or neurology clinic samples, indicating that a full picture of dementia can only be obtained by including samples from both neurology and psychiatry clinics. However, the prescription practice is similar to our earlier study on dementia patients, with donepezil being the most common AChEI and quetiapine being the most common antipsychotic drug. [34] Dementia leads to significant disability and decreased life span. The mean age at onset of dementia in the sample was 55.38 years. Assuming the average retirement age to be 60 years (most states) in India, there is a mean loss of 4.62 years of productive life in terms of employment and income. The average reported life span of EOD was 6.08 years. [35,36] Studies on AD reported that early-onset AD has heritability ranging from 96% to 100% compared to 69.8% in late-onset AD. The heritability is largely due to autosomal recessive genes and, to a minor extent, due to autosomal dominant, X-linked and mitochondrial genes. [37] Studies on the ApoE allele in AD reported that APOE2 was found more frequently among early-onset AD (18%) compared to late-onset AD (10%), whereas the frequency of ApoE4 was higher among late-onset AD. [38] In contrast to the above study, a previous study from our center did not find any difference in the frequency of ApoE4 with regard to age of onset in AD. [19] Studies have reported the usefulness of genetic testing as a part of the investigative work-up in EOD with an unclear phenotype. [39] The lifetime risk of AD is 10%-12% and it doubles if there is a positive history in a first-degree relative. It is likely that families with EOD and those with a positive history of dementia will be more apprehensive of developing dementia. There will also be requests for screening and genetic testing for unaffected relatives from these families. One-third of people with cognitive decline had EOD and, of these, 23.1% had a positive family history. In this scenario, there is a need to develop services including genetic counseling and genetic testing in selected patients. There are guidelines that are recommended for genetic counseling for these families/patient caregivers. [40] To summarize, we found that nearly one-third of patients with cognitive disorders presenting to geriatric psychiatric services had early onset. Till now, there have been only two epidemiological studies from India, the last in 2011. These studies were done in specialised memory clinics. 
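The productive-life estimate quoted above follows from a single subtraction; the snippet below simply restates it, with the retirement age of 60 years taken as the assumption already stated in the text.

mean_age_at_onset = 55.38      # years, mean age at onset reported in the Results
assumed_retirement_age = 60.0  # average retirement age assumed for most Indian states
print(f"Mean productive years lost: {assumed_retirement_age - mean_age_at_onset:.2f}")  # 4.62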
From an epidemiological point of view, our study is probably the first coming from a geriatric psychiatric setup. In terms of clinical significance, our study has shown that there was a significant delay in seeking treatment even among EOD patients. Another important finding from a psychiatrist's point of view, which was not investigated in previous studies, was BPSD in EOD patients. Nearly 80% of patients in our study had BPSD, stressing the need for specialized geriatric psychiatric clinics for the management of these symptoms [Table 6]. Strengths of the study Diagnosis of dementia was given as per the existing system wherein all patients are seen by psychiatrists specialized in managing dementia. DSM 5 criteria and standardized scales were used for diagnosis and staging the severity of dementia. Patients who did not complete the minimum standard evaluation were excluded from the study. Limitations of the study Patients who were primarily seen by a neurologist and only briefly seen in the Geriatric Clinic were not included in the study. Patients with suspected infectious, autoimmune, tumor-related, or traumatic brain injury etiologies were usually referred to neurology/neurosurgery by the general medical officer who did the initial screening. This limits the number of patients with these etiologies, and thus these phenotypes, presenting to our clinic. Implications This study highlights the need for dedicated memory clinics catering to all age groups where both psychiatric and neurology services can be availed. This review also sheds light on the challenges ahead in terms of service delivery and long-term care in people with EOD in India. Preventable and modifiable risk factors were present in 54.7%, showing that early detection and intervention at the general physician/primary health center level will either prevent or delay the onset of dementia. Two-thirds of the sample presented in the moderate or severe stage. There are misconceptions among the general public about dementia as a disorder of late life, which need to be addressed. There is a need to increase awareness among general physicians to pick up the early signs and symptoms of dementia in young and middle-aged patients. There is also a need to increase the liaison between cognitive disorders clinics, neurologic clinics, geriatric clinics, and psychiatric facilities to provide comprehensive care for these patients. Models of care from other nations for people with EOD need to be studied in our setting. [41] CONCLUSION Dementia should not be considered a disease of late life; various studies have reported dementing illness starting well in middle age. EOD constitutes a significant proportion among people with dementia. EOD cases presenting to geriatric psychiatry clinics are slightly different from those in neurology and memory clinics with respect to their subtypes, with a higher degenerative etiology and higher BPSD. There was a delay of 3.18 years in seeking consultation, and two-thirds of cases came beyond the mild stage of dementia. There is a need to increase awareness among the public as well as among general physicians for early identification and appropriate referral. There is a need for setting up specialised memory clinics to screen and assess persons with cognitive decline, especially in the working-age group.
2020-02-20T09:02:50.644Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "9f69bd17e2383a3e1b3ebc51aae76e6d4807da0b", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jgmh.jgmh_16_19", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "fea14930871dc260e0521a17d1e8a08fb35a9bf3", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
231740514
pes2o/s2orc
v3-fos-license
Bandgap optimization in combinatorial graphs with tailored ground states: Application in Quantum annealing A mixed-integer linear programming (MILP) formulation is presented for parameter estimation of the Potts model. Two algorithms are developed; the first method estimates the parameters such that the set of ground states replicate the user-prescribed data set; the second method allows the user to prescribe the ground states multiplicity. In both instances, the optimization process ensures that the bandgap is maximized. Consequently, the model parameter efficiently describes the user data for a broad range of temperatures. This is useful in the development of energy-based graph models to be simulated on Quantum annealing hardware where the exact simulation temperature is unknown. Computationally, the memory requirement in this method grows exponentially with the graph size. Therefore, this method can only be practically applied to small graphs. Such applications include learning of small generative classifiers and spin-lattice model with energy described by Ising hamiltonian. Learning large data sets poses no extra cost to this method; however, applications involving the learning of high dimensional data are out of scope. distribution [8]. This development has significantly eased the approximation of the required gradients. However, these methods have a critical drawback. These techniques only work for finite temperature probability distribution. Consequently, the model trained using these techniques is often temperature-dependent and shows disagreement with the data as the temperature is lowered [9]. As an example, the negative log-likelihood of a model trained using this technique is presented in Fig1. It can be seen that the minimum is close to the training β (inverse temperature), which was chosen as β = 1. A possible reason for this problem is that the training results in a locally optimal solution. Using quantum annealers adds another layer of complication because the simulation temperature is not known and depends on the graph size [9]. In contrast, this work is based on the band gap's maximization, while the ground states are chosen as the data states. This approach guarantees that the states' probability distribution gets closer to that of the data set as the temperature is reduced. Moreover, it ensures that the model adequately represents the data set for a broad range of temperatures. However, the downside of this approach is that there is no guarantee of the existence of parameters for every data set. This fact can be easily motivated by noticing that the number of ground states can be more than the number of model parameters and may result in an over-constrained optimization problem. Such problems do not exist at a non-zero temperature as all the states appear with non-zero probability. In this paper, a Mixed Integer Linear Programming (MILP) formulation is presented to estimate Potts model parameters. Two variations of the algorithm are presented. The first algorithm assigns a prescribed data set as the model's ground states while maximizing the bandgap. The second algorithm identifies a set of ground states with a prescribed multiplicity while maximizing bandgap. It should be noted that the computational complexity of both the algorithms grows exponentially with the size of the problem. Therefore, these methods are only suited for small graph structures. These problems arise in designing energies of smaller motifs in a lattice structure. 
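To make the quantities discussed above concrete (state energies, ground states, bandgap, and the negative log-likelihood at a given β), the following is a minimal brute-force sketch for a small Ising graph. The K-3 graph, the parameter values, the data states, and the sign convention E(S) = Σ_i H_i s_i + Σ_k J_k s_i s_j are illustrative assumptions made for this example, not values taken from the paper's experiments.

```python
import itertools
import numpy as np

# Illustrative 3-node Ising graph (K-3); H, J and the data states are arbitrary choices.
edges = [(0, 1), (1, 2), (0, 2)]
H = np.array([0.0, 0.0, 0.0])
J = np.array([-1.0, -1.0, -1.0])

def energy(state):
    # Assumed sign convention: E(S) = sum_i H_i s_i + sum_k J_k s_i s_j
    s = np.array(state, dtype=float)
    return H @ s + sum(Jk * s[i] * s[j] for Jk, (i, j) in zip(J, edges))

states = list(itertools.product([-1, 1], repeat=3))
E = np.array([energy(s) for s in states])
E0 = E.min()                                   # ground state energy
ground = [s for s, e in zip(states, E) if np.isclose(e, E0)]
E1 = E[~np.isclose(E, E0)].min()               # first excited energy
gap = E1 - E0                                  # bandgap

def nll(data, beta):
    # Negative log-likelihood of the data under the Boltzmann distribution at inverse temperature beta
    logZ = np.log(np.exp(-beta * E).sum())
    return sum(beta * energy(s) + logZ for s in data)

data = [(1, 1, 1), (-1, -1, -1)]
print("ground states:", ground, "bandgap:", gap)
for beta in (0.5, 1.0, 5.0):
    print("beta =", beta, "NLL =", round(nll(data, beta), 4))
```

Exhaustive enumeration of all N_L^N_V states is exactly what restricts this kind of computation, and the algorithms developed below, to small graphs.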
The paper is organized as follows: The formulation for the Potts energy is reviewed in section 2. Concepts like the ground state, bandgap, and probability of a state are also reviewed. A theorem is presented to estimate the efficiency of the developed algorithms quantifiably. The problem statement is summarized in section 3. The developed algorithms are presented in section 4. A case study for the Ising model is presented in 5. Few details on the computational complexity are also outlined. Section 6 provides a summary of the paper. Mathematical Formulation Potts model is a type of a discrete pairwise energy model on an undirected simple graph. In lieu of introducing some useful terms, following definition for graph is used: Graph: A graph, G, is a pair of sets (V, C), where V is the set of vertices and C is the set of edges/connections. For each element e ∈ C there is a corresponding ordered pair (x, y); x, y ∈ V i.e. C ⊆ V × V. A Graph, G = (V, C) is undirected if an edge does not have any directionality i.e (x, y) ≡ (y, x). A graph is simple if (x, x) ∈ C for all x ∈ V. Also, this work requires the graph to be finite, i.e., the number of vertices is finite. Next, the definition of Potts energy is introduced. Potts model Consider a finite undirected simple graph G(V, C). The number of vertices are denoted by N V = |V| and the number of edges are denoted by N C = |C|. The indices of connections and vertices are related using the maps, π 1 and π 2 such that for a connection with index, k ∈ {1, .., N C }, the index of the corresponding vertices are π 1 (k) and π 2 (k) where, U (s) is the energy of labeling a vertex with label s, and V (s i , s j ) is the energy of labeling two connected vertices as s i and s j . The parameters H i and J k are referred to as the Field strength and Interaction strength, respectively. Since the graph is undirected, following symmetry is imposed: The parameter set is represented as a vector, θ = [θ 1 , . . . , θ Nv+N C ] T . In this work, it is specialized to following form: Ground states and band gap For a given set of parameters, θ, the set of ground states (S G (θ) ⊆ S) is the set of states with minimum energy, E 0 (θ)), i.e. S G (θ) = argmin S∈S E(S|θ), E 0 (θ) = min S∈S E(S|θ) In contrast, all the non-minimal states are referred to as exited states. The set of all excited states, denoted by S E (θ), can be evaluated as: The cardinalities of the set of ground states (S G ) and excited states (S E ) are denoted by N GS and N ES , respectively. All excited states may or may not have the same energy. However, the minimum excited energy referred to as the 'first excited energy' is used in defining the band gap and is evaluated as: It should be noted that no assumption is made on the multiplicity of states with energy E 1 (θ). The band gap(a positive quantity) defines the energy gap between S G and S E . It is estimated as: Probability distribution At any given temperature, T , the probability of occurrence of a state, S is described by the Boltzmann distribution as: where β = 1/k B T is the inverse thermodynamic temperature, k B is the Boltzmann constant and Z denotes the partition function which is estimated as Parameter estimation Given a data set, S D ⊆ S, the parameters set, θ, is optimized such that the states in S D have higher probability of occurrence at a prescribed β value. Mathematically, this procedure entails minimization of negative log-likelihood as defined below: It can be observed that at high temperatures i.e. 
β → 0, all states occur with equal likelihood and therefore On the other hand, at low temperatures i.e. β → ∞, only ground states occur with equal probability and occurrence of any other state has probability 0. Consequently, the value of η in this limit is finite only when S D ⊆ S G . It is evaluated as: It is desirable to estimate parameters such that the ground state replicates the data set, and the bandgap is maximized. The reason will be apparent after the next theorem (proof in Appendix A). Theorem: For a given set of parameters, θ D , such that (i) S G (θ D ) = S D (ii) ∆E > 0, following statements hold true: (a) η(θ D , β) monotonically decreases with β and the low temperature limit (c) For any > 0, there exists a β * such that for all β > β * , η(θ D , β) − η ∞ (θ D , β) < where β * is estimated as: The consequence of this theorem is that it guarantees that if the parameters are chosen appropriately, η will approach to its global minimum in the low temperature (high β) limit. Moreover, at a finite β, η is bounded from above by a decreasing function. It can be seen in Fig2(a), that the bound gets tighter for higher values of ∆E. It is also shown that the trained model is efficient in the range of β determined by [β * , ∞). Fig2(a) shows that a higher bandgap allows a broader range of temperatures. Problem Statement Given a finite undirected simple graph G(V, C), find parameters, θ that maximizes the band gap in following two situations: Case 2: Ground state multiplicity, N GS , is prescribed. To make this optimization problem well posed, it is additionally imposed that H min Moreover, the functions U (s) and V (s i , s j ) are predetermined and not calibrated in the optimization process. of MILP problem is given in Eq (7) where x is the decision variable of size N , I is the set of indices of x which are integers and the matrices A eq , b eq , A and B are used to define linear constraints. Optimize: min The MILP formulation for the two cases is presented next. In both cases, the decision variables include the parameters, θ, and some auxiliary variables. These variables are introduced along with the algorithm description. Moreover, the algorithms do not enforce that ∆E > 0. Therefore, the results are accepted only if this condition is met. Algorithm 1: Parameter Estimation for Potts model with DAta Set (PEPDAS) The energies of individual states can be evaluated as a matrix product operation (shown in Section 2 ) which works well with linear programming framework. However, the calculation of band gap requires calculation of a minimum of energy over S E . This operation introduces a non-linearity. Thus, following auxiliary variables are introduced to pose this optimization as a linear programming problem: The decision variable in this formulation are given as: Consider a data set, S D = {S 1 , ..., S N DS }. The optimization cost (−∆E) is estimated by substituting the E(S 1 ) as that of ground state and E 1 for the 1st excited state energy. Thus the cost is evaluated as: The energy of all data states are explicitly equated as follows: The 1 st excited energy, E 1 is estimated by bounding it from above by energies of all the excited states. It is bounded from below by the energy of state corresponding to the index at which m i = 1. The upper bound on E 1 insures that if m i = 1, then E 1 (θ) = E(S i ). These conditions can be imposed using following set of equations and inequality: Most computing software only allows integer valued variables. 
In such a case, the binary value of variable m can be explicitly enforced by setting following bounds on integer valued m: This formulation is presented in Box 1 in the matrix format. Optimization cost: Equality constraints: In this formulation only the variable N GS is provided by the user in stead of S D ata. This condition adds the complexity of locating the ground states and evaluating the ground state energy, E 0 (θ). This problem is resolved by including following auxiliary variables: • E 0 (real valued scalar): It represents the ground state energy. The decision variable in this formulation are given as: The optimization cost is given as: The estimation of E 0 is done using the same idea of bounding E 0 from above and below. The bound is tight only for indices where l i = 1. For the estimation of E 1 , the upper bound is lifted on indices corresponding to ground states. This allows to estimate minimum over non-optimal states. Moreover, index of 1 st excited state cannot coincide with ground state i.e. l i = 1 and m i = 1 cannot occur simultaneously. These conditions are imposed using following inequalities and equations: The condition of binary valued variables is imposed on integer variables as follows: This formulation is presented in Box 2 in the matrix format. Results and discussions In this section, an example is presented to show the efficiency of both the algorithms. It is shown by example that the predicted η decays and is bounded. Moreover, the PEPGSM method can predict ground states that provide higher bandgap compared to randomly picked ground states. Next, the computational cost of this method is discussed. Examples The parametric estimation of Ising model is presented as an application of this method. In this model, the states take a binary form i.e. N L = 2. Traditionally the labels are denoted as {+1, −1} and the corresponding energy functions are defined as: Therefore, the energy can be effectively written as: This model is applied on a 10-noded Peterson graph with |H| ≤ 1 and |J| ≤ 1. First, the graph is trained by prescribing up to 4 data states using the PEPDAS method. Next, the graph is trained by prescribing the number of states from 1 to Optimization cost: Equality constraints: Bounds: Computation size One of the limiting features of these algorithms is that it grows exponentially with the graph size. An exact number of variables and equations is provided in Table2. It should be noted that the number of states, N T S = N N V L and is the reason for the large size of the decision variable. The system of equations and inequalities in both algorithms have large sparse blocks which provide some computational easing. It should also be noted that the sparsity of graph, G, does not give considerable advantage in the algorithm as the size of the problem is mainly dictated by the number of labels, N T , and the number of vertices, N V . Conclusion Two algorithms were developed and analyzed for estimating parameters of Potts model. The functionality of each method is as follows: 1. PEPDAS method estimates the parameters to exactly replicate the ground states as the prescribed data set. 2. PEPGSM method estimates the parameters to identify ground states based on their prescribed quantity. Both algorithms maximize the band gap between the ground and excited states of the model. It was shown that models optimized in this manner have a higher probability of being in the ground state for a broader range of temperatures. 
The upper bounds on the optimized model's performance are also estimated. This efficiency is measured in terms of the range of temperature for which ground states' likelihood remains in the desired range. The examples included in the paper show promising practical results on small graphs. As suggested in the main body of the paper, these methods do not scale well with the graph size, and their usage should be restricted to small problems. Supplementary Data The codes are available at https://github.com/sidsriva/PEP A Proof of theorem (a) Since S G (θ, β) = S D , the Negative Log Likelhood, η(θ D , β), is estimated as: The derivative is estimated as:: where Since ∆E > 0, the expected energy is strictly bounded below as E(E) > E 0 . Consequently: In the low temperature limit, Eq(2) estimates that the probability of all excited states approaches 0 while all ground states are equally likely with probability (N GS ) −1 . Therefore, the value of η in this limit is estimated as Eq(4). (b) Let S G ∈ S G and P = p(S G |θ D , β) so that η(θ D , β) = −N GS log P . The probability of occurrence of a ground state is given by N GS P and occurrence of a excited state is given as (1 − N GS P ). Moreover, for any finite value of β both of these probabilities are finite. Therefore, the expectation of energy, E, can be bounded as Substituting in Eq (10), Substituting P = e −η/N GS gives the following differential inequality Consider the differential equation for β ∈ [0, ∞), with initial condition ξ(θ D , 0) = η(θ D , 0) = N GS log N T S . Noting that N GS e −ξ/N GS − 1 = N GS P − 1 > 0, this ODE is integrated to give the following solution: Using Comparison Lemma [10], for all 0 < β < ∞, This proves the upper bound. The lower bound is a direct consequence from monotonicity proved in part 1. B.1 K-3 graph A fully connected 3-noded graph is optimized for 4 data states. The energy of the graph is modeled using Ising model Eq(9) with |H| ≤ 1 and |J| ≤ 1. The optimized parameters using the (1) Minimization of Negative Log-likelihood, and (2) PEPDAS method are presented in Fig.4 (a) (b) (c) Figure 4: (a) Training data set of states with green representing a '+1' state and red representing a '-1' state. (b) Optimized graph using minimization of Negative Log-likelhood at β = 1 (c) Optimized graph using PEPDAS method. The field terms are mentioned in blue color and interaction terms are mentioned in red color B.2 Peterson graph A Peterson graph is first optimized for upto 3 user prescribed data states using PEPDAS method. Then it is optimized for 3 ground states using PEPGSM method. The energy of the graph is modeled using Ising model Eq(9) with |H| < 1 and |J| < 1. The optimized graphs are presented in Fig5 and their respective Negative log likelhood is presented in Fig.6.
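As a rough illustration of how the gap maximization in PEPDAS could be set up with an off-the-shelf solver, the sketch below solves a simplified variant: because the objective already pushes E_1 up against the smallest excited energy, the auxiliary binary variables m_i of the paper's MILP can be dropped and the problem reduces to a plain LP. This is a simplification of, not a substitute for, the authors' formulation; the K-3 graph, the prescribed data states, and the Ising sign convention are assumptions for the example.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (0, 2)]                  # K-3 graph
n_v, n_theta = 3, 3 + len(edges)                  # theta = (H_0..H_2, J_0..J_2)

def feature(state):
    # f(S) such that E(S | theta) = f(S) . theta (assumed Ising sign convention)
    s = np.array(state, dtype=float)
    return np.concatenate([s, [s[i] * s[j] for i, j in edges]])

states = list(itertools.product([-1, 1], repeat=n_v))
data = [(1, 1, 1), (-1, -1, -1)]                  # prescribed ground states
excited = [s for s in states if s not in data]

# Decision variables x = [theta, t]; maximize the gap t  <=>  minimize -t
c = np.zeros(n_theta + 1)
c[-1] = -1.0

# For every excited S_e and every data S_d:  E(S_e | theta) - E(S_d | theta) >= t
A_ub, b_ub = [], []
for se in excited:
    for sd in data:
        A_ub.append(np.append(feature(sd) - feature(se), 1.0))
        b_ub.append(0.0)

# All data states must share the same energy
A_eq = [np.append(feature(sd) - feature(data[0]), 0.0) for sd in data[1:]]
b_eq = [0.0] * len(A_eq)

bounds = [(-1, 1)] * n_theta + [(None, None)]     # |H| <= 1, |J| <= 1
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
theta, gap = res.x[:-1], res.x[-1]
print("theta =", np.round(theta, 3), "bandgap =", round(gap, 3))
```

On this instance the solver should recover, up to tolerances, H = 0 and J = -1 on every edge, giving the two aligned states as the ground states and a bandgap of 4.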
2021-02-02T17:46:51.382Z
2021-01-31T00:00:00.000
{ "year": 2021, "sha1": "8b09192f1c21a27f0788ba5615d3eda0a2df018e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8b09192f1c21a27f0788ba5615d3eda0a2df018e", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics", "Physics" ] }
257353530
pes2o/s2orc
v3-fos-license
Nonlocality under Computational Assumptions Nonlocality and its connections to entanglement are fundamental features of quantum mechanics that have found numerous applications in quantum information science. A set of correlations is said to be nonlocal if it cannot be reproduced by spacelike-separated parties sharing randomness and performing local operations. An important practical consideration is that the runtime of the parties has to be shorter than the time it takes light to travel between them. One way to model this restriction is to assume that the parties are computationally bounded. We therefore initiate the study of nonlocality under computational assumptions and derive the following results: (a) We define the set $\mathsf{NeL}$ (not-efficiently-local) as consisting of all bipartite states whose correlations arising from local measurements cannot be reproduced with shared randomness and \emph{polynomial-time} local operations. (b) Under the assumption that the Learning With Errors problem cannot be solved in \emph{quantum} polynomial-time, we show that $\mathsf{NeL}=\mathsf{ENT}$, where $\mathsf{ENT}$ is the set of \emph{all} bipartite entangled states (pure and mixed). This is in contrast to the standard notion of nonlocality where it is known that some entangled states, e.g. Werner states, are local. In essence, we show that there exist (efficient) local measurements producing correlations that cannot be reproduced through shared randomness and quantum polynomial-time computation. (c) We prove that if $\mathsf{NeL}=\mathsf{ENT}$ unconditionally, then $\mathsf{BQP}\neq\mathsf{PP}$. In other words, the ability to certify all bipartite entangled states against computationally bounded adversaries gives a non-trivial separation of complexity classes. (d) Using (c), we show that a certain natural class of 1-round delegated quantum computation protocols that are sound against $\mathsf{PP}$ provers cannot exist. Intro Axioms of quantum theory define states of a quantum system to be operators on a Hilbert space. Consider a state ρ_AB that is shared between two parties A and B. Such a state can fall into one of the following categories. Either it is a separable state, i.e., a state that can be written as σ_AB = Σ_k p_k σ_A^(k) ⊗ σ_B^(k). If not, it is called entangled. This separation relies purely on a mathematical formalism. The question of Entanglement = Nonlocality (ENT ?= NL) asks whether there is an operational (theory-independent) distinction between separable and entangled states. More formally we ask if it is true that for every entangled state ρ_AB there exists a game/experiment between A, B and a verifier V that distinguishes between two cases: (i) A and B share ρ_AB versus (ii) A and B share only a separable state σ_AB. The game is akin to the multi-prover interactive proof setup. More concretely, A and B are spatially separated so that they cannot communicate, then V exchanges messages with both of them and at the end decides if A and B held only a separable state. The verifier V wins the game if her answer is correct with high probability. Whether such a game exists for every entangled state is the ENT ?= NL question. The seminal paper Bell [1964] initiated the study of quantum nonlocality, that is, of those correlations obtained when performing local measurements on entangled states that do not have a classical analogue. It showed that there exists an entangled state (an EPR pair) for which there exists an experiment distinguishing cases (i) and (ii).
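To make the Bell separation quoted above concrete, here is a minimal numerical sketch computing the CHSH value of the EPR pair for the standard optimal measurement settings; the variable names and the choice of settings are mine, not taken from the paper. The winning probability 1/2 + S/8 ≈ 85% for the CHSH game then follows, compared with the local bound of 75%.

```python
import numpy as np

# Pauli operators and the EPR pair |phi+> = (|00> + |11>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())

def corr(A, B):
    # Correlator <A (x) B> on the shared state
    return np.real(np.trace(np.kron(A, B) @ rho))

# Standard settings reaching the Tsirelson bound 2*sqrt(2)
A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print("CHSH value:", round(S, 4))                      # ~2.8284 > local bound of 2
print("winning probability:", round(0.5 + S / 8, 4))   # ~0.8536 vs 0.75 classically
```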
Initially, it was believed that ENT = NL. This is in fact true for pure states [Gisin, 1991]. However, in 1989, Werner introduced (Werner [1989]) entangled states for which a local model can be constructed. What this means is that all correlations obtainable with these entangled states can be reproduced in a setting where A and B hold only a separable state. The first simulation held only for projective measurements, but it was later generalized to all POVMs [Barrett, 2002]. This implies that no experiment distinguishes the two cases for these states and that the relation between ENT and NL is more nuanced. These surprising results suggest that no operational definition exists behind the mathematical concept of entanglement, at least in this model. As a consequence, new models were introduced and, in some cases, equality was proven. One can tweak the model in many ways, e.g., allowing quantum communication between V and A, B, considering multi-round games instead of the one-question, one-answer setting of the Bell scenario, or allowing more than one copy of ρ_AB to be accessible at a time. It is probably fair to say that a satisfying answer to the ENT ?= NL question has not been found to date. For a more detailed history of the problem see Section 2. Contribution. We define a new notion of non-locality. We call a state ρ not-efficiently-local (NeL for short) if there exists a probability distribution arising from local measurements of ρ such that no efficient, non-communicating parties (sharing a separable state) can simulate this distribution. In the spirit of the extended quantum Church-Turing thesis, by efficient we mean implementable by BQP circuits. We ask the following question: "How hard is it to fake entanglement?" We prove that a particular way of "faking entanglement", i.e., the Hirsch local model ([Hirsch et al., 2013]) for 1-round protocols, can be implemented in PP. More concretely, for some entangled state ρ_AB, we show how to simulate any strategy of A, B ∈ BQP having access to ρ_AB by A′, B′ ∈ PP with access only to some separable state σ_AB. This means that local simulation in the Bell scenario is at most as hard as PP. This implies that ENT = NeL =⇒ BQP ≠ PP. To the best of our knowledge, the computational complexity of local models has not been studied before. Delegation of Quantum Computation. Surprisingly, the fact that Hirsch's local model can be implemented in PostBQP has implications for the delegation of quantum computation (DQC). DQC is a problem where a classical client delegates and verifies computation done by a more powerful quantum server. A breakthrough result in the field, Mahadev [2018], gave the first fully classical protocol for DQC with a relaxed requirement, i.e. guaranteeing soundness only against provers that are quantum polynomial time. The soundness of this DQC was proven assuming the quantum hardness of the learning with errors (LWE) problem. This result thus establishes, under the QLWE assumption, BQP ⊆ IP[BQP, BQP], i.e. a DQC with efficient (BQP) honest provers that is sound against BQP adversaries. We can view this result as establishing sufficient complexity-theoretic assumptions needed for the existence of a DQC sound against BQP. In this paper, we ask what the necessary assumptions are. We show that if there exists a single-round extended DQC (Definition 3) which is sound against BQP then BQP ≠ PP. This is the promised necessary assumption. Interestingly the same assumption, i.e.
BQP = PP, was proved to be necessary (Kretschmer [2021]) for an existence of a cryptographic primitive called pseudorandom state generation. Proof techniques. To prove our result about DQC we crucially use the first result, i.e ENT = NeL =⇒ BQP = PP. More concretely, we assume towards contradiction that a single round extended DQC sound against BQP exists and that BQP = PP. Next, we build a protocol certifying ENT = NeL by using two copies of this DQC to control the behavior of the provers. This contradicts the first result. We believe that this is a novel technique for proving lower bounds for DQC. The two proofs together give a framework for showing stronger results about limits of DQC. Finding a local hidden variable model implementable in a lower complexity class or holding for more rounds of interaction, would imply stronger lower bounds. Entanglement vs Non-locality -A Brief History In this section, we introduce the definitions of entanglement and non-locality and give a brief history of the ENT ? = NL problem. We recommend Augusiak et al. [2014] for a more detailed survey about non-locality. We start by defining a notion of entanglement. Entanglement. For ρ AB ∈ S(H A ⊗ H B ) we say it is separable if it can be expressed as Otherwise, we call it entangled. The most famous example of an entangled state is the Bell state, also known as an EPR pair, |φ + = 1 √ 2 (|00 + |11 ). The notion of entanglement is a purely mathematical notion and a priori doesn't carry any operational meaning. Next, we introduce the notion of non-locality in a way that is often presented in the physics literature. This definition will change slightly once we consider computational aspects (see Definition 2). Non-Locality. For a probability distribution P(a, b|x, y) we say P is local if there exist probability distributions P 1 (a|x, λ) and P 2 (b|y, λ) and λ such that where λ is understood as a local hidden variable. Operationally this means that there exist A and B, sharing randomness λ, such that their joint distribution replicates P. If a probability distribution is not local, we call it non-local. The Bell experiment (Bell [1964]), also known as the CHSH game, is one of the first examples that show non-locality of quantum correlations. In this game, two non-communicating parties 3 however all POVMs can be simulated locally as long as p < 5 12 . are given single-bit inputs, x and y respectively, that are uniformly distributed and are expected to reply with single bit answers, a and b, such that x ∧ y It can be shown that for any local strategy (satisfying (1)), the probability of ( * ) is upper-bounded by 75%. It can also be shown that if the parties share a |φ + , they can satisfy ( * ) with ≈ 85%. Hence, this shows an example of a non-local probability distribution, i.e. |φ + is non-local. This it the strongest model, in which entanglement can be certified. The term "certified" usually means that we assume quantum theory (QT) but the Bell experiment proves more, i.e. that a theory governing the behaviour of A, B must contain some notion of entanglement. Later the result from Bell [1964] was improved ( [Gisin, 1991]) to show that all pure entangled states are non-local. As we mentioned in the introduction, for a long time it was believed that ENT = NL. A surprising result was presented by Werner (Werner [1989]), in which it was shown that for a class of entangled states, every distribution obtained by projective measurements can be simulated locally. Let us give more details. 
For p ∈ [0, 1] let ρ(p) ∈ S(C 2 ⊗ C 2 ) be defined as where |ψ − = 1 √ 2 (|01 − |10 ). These states are known as the Werner states. It was shown in Werner [1989] that ρ AB (p) is entangled if and only if p > 1 3 . However, Werner showed that for any p ≤ 1 2 , all projective measurements on ρ(p) can be simulated locally, i.e. expressed as in (1). This result was further generalized to all POVMs when p < 5/12 (Barrett [2002]). Figure 1 summarizes the guarantees given for ρ(p) for different values of p. Does this necessarily mean that all probability distributions arising from such states are local? In this model yes, in others not necessarily. As we mentioned, after the results of Werner [1989] and Barrett [2002] other models were considered. In Buscemi [2012] equality of ENT and NL was shown in the Bell scenario, in a model where V can send quantum states to A and B. A follow-up result (Bowles et al. [2018]) showed equivalence in a model with two additional parties: Charlie and Daisy. To certify entanglement, EPR pairs are shared between A and Charlie, and B and Daisy. This allows V to self-test the measurements made by A, B on ρ AB . In this model, all the parties communicate with V classically. There are also some improvements on local simulation, e.g. in Hirsch et al. [2016] it was shown that there exist states with local models for all protocols with local filtering. Local filtering refers to a situation where A, B first apply local operations, send one bit each to V, receive a challenge from V and reply. Thus it is a special case of 3 message protocols, instead of 2 messages in the Bell scenario. This case is akin to Σ protocols in cryptography. The question remains open for the sequential scenario with more messages. Modeling Now we are ready to introduce our model and our definition of non-locality. As our model is different from the standard setup we define the game and describe allowed actions very carefully. Definition 1. (Game) For k, n ∈ N, ρ AB ∈ S(H A ⊗ H B ) we define G(ρ AB , k, n) to be a game between A, B and V. G is played in one of two modes. A, B will have access to either (i) ρ AB or (ii) a separable ("classical") state σ AB . First, the hyperparameters k, n are distributed to all parties and one of the modes is chosen. The mode is not known to V. G proceeds in k rounds. In each round A and B are given their respective share of ρ AB in mode (i) or of σ AB in mode (ii) and are forbidden to communicate. Then 1. V sends a message a to A and b to B, where a, b ∈ {0, 1} p(n) , 1 2. A, B compute their answers x a , x b ∈ {0, 1} q(n) . 2 They can operate on their respective shares of ρ AB in mode (i) or access their share of σ AB in mode (ii). The computational modelling of the behaviour of A, B is discussed below. 3. Answers x a , x b are sent to V, which stores them. Then, the next round is started in a fully "iid fashion", i.e. A, B no longer have access to the previous ρ AB , nor σ AB and the internal state of A, B is reset. After the k-th round, V outputs, based on x a , x b 's, either YES or NO. 3 Next, we define the new notion of non-locality. Definition 2. (not-C-local) For a complexity class C and a state ρ AB ∈ S(H A ⊗ H B ) we say that ρ AB is not-C-local if for every sufficiently small δ there exists k ∈ N, a game G(ρ AB , k, ·), A BQP (·), B BQP (·) ∈ BQP(·) and a polynomial p such that for every n ∈ N 1. (Completeness) If G(ρ AB , k, n) was run in mode (i) with A BQP (n), B BQP (n) then 1 p is a fixed polynomial. 2 q is also a fixed polynomial. 
3 V's actions might be adaptive from round to round. Computational modelling. As we are interested in the computational power of A, B we need to model their behaviour. We assume the quantum Church-Turing hypothesis that a quantum Turing machine can simulate any realistic model of computation. Moreover, we assume (completeness in Definition 2) that honest A, B run in polynomial time. More formally, we assume that for every n ∈ N honest A BQP (n)'s answer is the result of applying some unitary on ρ A ⊗ |a ⊗ 0 q(n)−p(n)−1 for some polynomial q(n) and measuring all 4 qubits in the computational basis. This models the property that A BQP has access to only polynomially many qubits. Next, by the Solovay-Kitaev theorem, we say that the number of gates t from a universal gate set needed to approximate this unitary to within grows like polylog(1/ ). Finally, we use a result from Aharonov [2003] to argue that we can assume that the gate set used is equal to {Toffoli, Hadamard} with only polylogarithmic in (n, t, 1/ ) blowup in the number of gates. Note that the gate set {Toffoli, H} is not universal in the standard sense as both matrices contain only real entries. However, it is enough for our purposes as we are interested in computational universality (see Aharonov [2003]). To summarize, there exists a family of circuits {C A n } n≥1 acting on q(n) qubits with t(n) gates (for some polynomials q, t) such that for every n ∈ N we have that A BQP (n)'s answer is the result of applying C A n on ρ A ⊗|a ⊗|0 q(n)−p(n)−1 and measuring all qubits in the computational basis. For B BQP the situation is analogous, i.e. there is a family of circuits {C B n } n≥1 such that the answer of B BQP (n) is the result of applying C B n on ρ B ⊗ |b ⊗ |0 q(n)−p(n)−1 and measuring all qubits in the computational basis. 5 For the complexity theoretic upper bound on the local-hidden-variable simulation we consider A, B in PostBQP. By this we mean that A and B are modeled as quantum circuits also but with the ability to perform post-selection. This is the ability to post-select on a particular qubit being |1 . More concretely, for a state x∈{0,1} n α x |x if the post-selection is applied to the first qubit then the resulting state is We can think of post-selection as a new 1-qubit gate that can be applied to a chosen qubit. In the seminal result Aaronson [2005] it was shown that PostBQP = PP. The class PP consists of problems solvable by an NP machine such that (i) if the answer is "yes" then at least 1 2 of computation paths accept, (ii) if the answer is "no" then less than 1 2 of computation paths accept. Access to σ AB . In mode (ii) of the game (Definition 1) A, B have access to a separable state B . We assume that A and B have special registers, where, at the beginning of each round, k is sampled according to p k and a state σ (k) A is placed in A's register and σ (k) B is placed in B's register. Note that when we assume that A, B ∈ PostBQP(n) it implies that σ AB is a state on at most poly(n) qubits. When there are no restrictions on computational capabilities of A, B then σ AB can be thought of as public randomness. This is the case because any distribution {p k } can be approximated to high precision with access to an infinite string of common randomness and every separable state on finite-dimensional space can be approximated by local operations. But note that once we limit the computational power of A, B the equivalence is not clear. Thus, we state all our results in terms of separable states, not public randomness. 
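Since post-selection is the one non-standard resource granted to the PostBQP simulators above, here is a minimal statevector sketch of the renormalization step just described. The big-endian qubit ordering and the function name are assumptions made for the example.

```python
import numpy as np

def postselect_first_qubit(psi, outcome=1):
    # Project an n-qubit statevector onto the first qubit being |outcome> and renormalize.
    # The first qubit is taken as the most significant index (big-endian layout).
    n = int(np.log2(len(psi)))
    psi = np.asarray(psi, dtype=complex).reshape(2, 2 ** (n - 1))
    kept = psi[outcome]
    norm = np.linalg.norm(kept)
    if norm == 0:
        raise ValueError("post-selected outcome has zero probability")
    out = np.zeros_like(psi)
    out[outcome] = kept / norm
    return out.reshape(-1)

# Example: post-select the first qubit of (|00> + |01> + |10>)/sqrt(3) on |1>
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)
print(postselect_first_qubit(psi, outcome=1))      # -> the state |10>
```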
Main result With the story of the entanglement vs nonlocality question behind us and our model of computation introduced we can state the computational version of the ENT = NL question. We note that the other implication is trivial as any separable state is simulatable by itself. The main theorem of the paper can be stated as The proof is deferred to Appendix B. As we discussed before the proof designs an algorithm in PostBQP that implements some local model. Implications of Theorem 1 for Delegation of Quantum Computation As we discussed, Theorem 1 is informal in its own right but surprisingly it also implies necessary complexity theoretic separations needed for existence of delegation of quantum computation protocols (DQC). A DQC is an interactive protocol (V, * ) between an efficient classical verifier V ∈ BPP and a prover. For a complexity class C we say a DQC protocol is BQP-complete and C-sound if, on input (C, x), where C is a quantum circuit and x is a classical bit string, the following two conditions hold: 1. There exists an honest prover P ∈ BQP such that the interaction between V and P is accepted with overwhelming probability. 2. For any malicious prover P ∈ C, if the interaction of P and V is accepted with overwhelming probability, then the distribution of the output of V is close to the distribution of measuring the output qubit of C ran on |x . 6 Naturally the bigger the C the harder it is to design such a protocol. The extreme is to allow C = ALL, which translates to a question whether BQP ⊆ IP[BQP, ALL], i.e. do all languages in BQP have interactive proofs with efficient honest provers sound against all adversaries. The smallest complexity class for which the question makes sense is C = BQP. This is the setting we focus on here. The first construction of a BQP-sound DQC was presented by Mahadev [2018], where the authors showed how to achieve it under the QLWE assumption, i.e. that no quantum polynomial time prover can solve the learning with errors problem. Not long after, several new variants of this protocol appeared in the literature. In Gheorghiu and Vidick [2019] it was showed how to build a blind version of this protocol by forcing the prover to prepare a quantum state blindly. In Alagic et al. [2020], using a Fiat-Shamir-like argument and parallel repetition of the Mahadev protocol, the authors showed how to implement this protocol in a single round of communication in the quantum-random-oracle-model (QROM). This shows that the quantum hardness of LWE + QROM gives sufficient complexity-theoretic assumptions required to achieve a single-round DQC. Our result. We show what are the necessary complexity theoretic assumptions for a BQP-sound DQC. More concretely we prove that the existence of an extended, single-round delegation of quantum computation (EDQC, see Definition 3) implies BQP = PP 7 . It is important to note that it was already known that existence of a constant round DQC sound against all adversaries implies BQP ⊆ AM ( [Aaronson, 2010]). We want to emphasize that our conclusion of BQP = PP is incomparable with BQP ⊆ AM. Consult Figure 2 for known relationships between complexity classes of interest. We elaborate on why the conclusion BQP = PP is interesting. To do that we give an example of a cryptographic primitive that has recently attracted a lot of attention and whose existence was shown to require BQP = PP also. Pseudorandom states, introduced in Ji et al. 
[2018], are efficiently-computable quantum states that are computationally indistinguishable from Haar-random states. One-way functions imply the existence of pseudorandom states, but Kretschmer et al. [2022] constructed an oracle relative to which there are no one-way functions but pseudorandom states still exist. This influenced Ananth et al. [2021] to design cryptographic primitives based on PRS. In another work (Morimae and Yamakawa [2022]) it has been shown that PRSs are enough to construct quantum bit commitments and signatures. Interestingly Kretschmer [2021] proved that BQP = PP is a necessary condition for the existence of pseudorandom quantum state generators (PRSG). This is exactly the same complexity theoretic separation implied by our result. Extended Delegation of Quantum Computation We are ready to give a definition of an extended delegation of quantum computation (EDQC). Informally speaking, for a circuit C our definition guarantees that: (i) there exists an efficient verifier and an efficient prover such that the prover can choose any state ρ QE ∈ S(C 2 ⊗ H E ) such that the bit collected by the verifier will be distributed equally to measuring the output qubit of U C |x ρ Q , moreover the post-measurement state of ρ E is exactly as if C was applied, (ii) for every BQP prover accepted with high probability there exists a 1-qubit state ρ such that the output collected by the verifier is, for every classical x, close to the distribution of measuring U C |x ρ. Definition 3 (EDQC). Let C be a classical description of a quantum circuit on k qubits that operates on two registers: V of k − 1 qubits and Q of one qubit 8 . We say that P is a 1-round protocol for extended delegation of quantum computation (EDQC) if the following holds. For a security parameter ∈ N, P(C) expects questions q ∈ {0, 1} n( ) and answers a ∈ {0, 1} n( ) for some polynomial n. V is expected to accept or reject and upon acceptance return b ∈ {0, 1}. Moreover, there exists V ∈ PPT( ) such that • (Completeness) There exists P ∈ BQP( ) such that for every x ∈ {0, 1} k−1 and every ρ QE ∈ S(C 2 ⊗ H E ) if ρ Q is given to P then the following hold -V accepts the interaction with probability 1, over the randomness of V and P the distribution of b is equal to the distribution of measuring the last qubit (Q register) of ρ E is equal to the post-measurement state after measuring the Q register conditioned on obtaining b. More formally it is the normalization of • (Soundness) There exists c such that for every > 0, sufficiently large , for every Now we state the main theorem of this section. Proof of Theorem 2 is deferred to Appendix B.2. On a high levelwe design a transformation that converts the semi-quantum protocol from Buscemi [2012] (see Appendix B.1) to a 1-round protocol that certifies entanglement of all entangled states against BQP adversaries, i.e. exitance of EDQC implies ENT = NeL. Now invoking Theorem 1 we deduce that existence of such an experiment would imply BQP = PostBQP. Why this definition of EDQC? In this section we briefly discuss how the EDQC introduced in Definition 3 relates to other notions of DQC from the literature. We argue that all non-standard requirements of EDQC are similar to properties of DQC already considered in the constructions based on LWE. There are 3 crucial differences between EDQC and more standard versions of DQC. We 1. require single round of communication, 2. allow the prover to select a part of the input to the circuit, 3. 
don't allow the prover to select this input adaptively. Let's elaborate on each point. The first difference is that we require the protocol to be single round. Although the original delegation protocol from [Mahadev, 2018] requires several rounds of interaction, Alagic et al. [Alagic et al., 2020] described how this protocol can be transformed into a single round protocol in the QROM. This is done by a technique similar to the Fiat-Shamir transform and parallel repetitions of the measurement protocol from Mahadev [2018]. As we discussed in the intro our proof technique could in principle be generalized beyond single round setting. We focused on a single-round local simulation but in Hirsch et al. [2016] the authors describe a local hidden variable model (LHVM) for a more general setting. They consider filtering experiments (3 message experiments resembling Σ-protocols). If one proves that this LHVMs can be realized in PostBQP then Theorem 2 automatically extends to 3message EDQCs. Similarly if one introduces a LHVM for sequential measurement games the theorem would naturally extend to EDQCs with multiple rounds. The second non-standard feature is that the prover plugs in a part of the input not controlled by the verifier (the state ρ Q ). We argue that this should be possible to realize in DQC protocols that rely on Kitaev's local-hamiltonian reduction. This is similar to how the Mahadev protocol can be extended to realize verification of QMA, for instance in Bartusek et al. [2022]. To realize this functionality one can remove terms from the Hamiltonian that correspond to the portion of input that the verifier does not control (removing a part of H in ). An in-depth description of the guarantees, when the penalty terms corresponding to the input are partially removed, can also be found in [Barooti et al., 2021, Lemma 4]. The final difference from the standard setup is that, ρ from the soundness property of Definition 3 is assumed to have no dependence on the input x. Intuitively this means that the adversary is not allowed to choose the circuit input states adaptively, i.e., dependent on x. This property is reminiscent of blindness -second after verifiability property of interest for DQC. It is clear that our property does not directly imply blindness, at least in the case of multi-round protocols, as, for instance, if the prover was to commit to a state and receive the input in the clear our property would still hold but the protocol would not be blind. Although it seems that standard definitions of blindness, i.e. indistinguishably of the views for different values of x, would imply this property, a formal proof seems to be non-trivial. The main issue is that the state ρ (Definition 3) is an abstract state, that the prover does not necessarily need to have access to. Hence, we can not argue that ρ is only dependent on the randomness of the verifier and the question q. If the definition is stronger and implies that the prover holds the state, similarly to guarantees in Vidick and Zhang [2021], it might be possible to show that blindness implies our property. . For our applications we can, without loss of generality, assume that all A i are rank one. It is because any measurement with operators of rank larger than one can always be realized as a measurement with rank-one operators. More concretely for every To reproduce the statistics of the original POVM one simply applies the finer POVM and forgets the result j. 
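The rank-one refinement described above is easy to check numerically: split each POVM element by its eigendecomposition into rank-one pieces and verify that coarse-graining (forgetting the index j) reproduces the original outcome statistics. The helper name and the test state below are illustrative choices.

```python
import numpy as np

def refine_povm(povm, tol=1e-12):
    # Split each POVM element A_i into rank-one pieces lam_ij |v_ij><v_ij|,
    # remembering the index i so that "forgetting j" recovers the original statistics.
    fine = []
    for i, A in enumerate(povm):
        vals, vecs = np.linalg.eigh(A)
        for lam, v in zip(vals, vecs.T):
            if lam > tol:
                fine.append((i, lam * np.outer(v, v.conj())))
    return fine

# Example: the trivial rank-two POVM {I/2, I/2} measured on rho = |+><+|
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
povm = [0.5 * np.eye(2, dtype=complex), 0.5 * np.eye(2, dtype=complex)]

p_coarse = [np.real(np.trace(A @ rho)) for A in povm]
p_fine = [0.0, 0.0]
for i, F in refine_povm(povm):
    p_fine[i] += np.real(np.trace(F @ rho))
print(p_coarse, p_fine)                            # both give [0.5, 0.5]
```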
Thus for every i ∈ [k] we can write A i = η i P i , where η i ∈ (0, 1] and P i is a rank-one projection. B Proof of Theorem 1 Before we prove Theorem 1 we show some helpful lemmas. The first lemma shows that there exists a PostBQP algorithm that given access to poly copies of a pair of states can compute the square of the dot product of some corresponding pair of states up to exponential precision. The core of the proof is a combination of a binary-search-like procedure with a technique from Aaronson [2005]. This technique allows, for a state a |0 + b |1 , where a, b > 0, to design a PostBQP algorithm that, when given access to polynomially many copies of the state, computes a, b up to exponential precision. Proof. The full algorithm will consist of a series of subroutines. Subroutine 2. For a state x |0 + y |1 take its two copies, apply the CNOT gate and postselect on the second qubit being |0 . The result is x 2 |0 + y 2 |1 x 4 + y 4 . Second case is when c < d then |φ 2 i never lies in the first or third quadrants and thus | +|φ 2 i | < 1 √ 2 < 0.985. The two facts together imply that repeating the procedure poly(m) times we can distinguish c > d + 2 −O(m) from c < d with probability 1 − 2 −m . The two signs together with (5) are enough to compute an approximation to (6). It is because the signs give us information about the relative sign (+/−) in each of the summand in (6), Moreover, (5) allows us to compute all quantities of interest in (6) (up to a sign), because of the normalization of |ψ and |ψ . For example, we can compute a, b up to a relative sign using the fact that a 2 + b 2 = 1 and that |a 2 − β 0 b 2 | ≤ 2 −2n . We conclude, by the union bound, that with probability 1 − 2 −Ω(n) we compute v satisfying the statement of the lemma. The next series of lemmas explain how to simulate some entangled states using only separable states. That is, for some entangled state ρ AB and measurements applied on it by A, B we show how to reproduce the statistics of outcomes by A , B having access only to a separable state. Lemma 2 (Werner's model). For every q ≤ 1 2 the Werner state ρ(q), defined in (2), is local for all projective measurements: {P, I − P } for A and {Q, I − Q} for B. Proof. The first step is to show that the statement holds for the maximal value of q, i.e. q = 1 2 . The strategy is as follows: A , B share |λ ∼ ω(C 2 ), A returns 1 if λ| P |λ < λ| (I − P ) |λ and 0 otherwise. B returns 1 with probability λ| Q |λ . The proof that this gives rise to a distribution equal to arising from measuring the projections on ρ( 1 2 ) can be found in [Augusiak et al., 2014, section 3.1]. Now, for any ρ(q) with q < 1 2 we do the following. We write ρ(q) as a mixture of ρ( 1 2 ) and white noise I, i.e. ρ(q) = 2qρ( 1 2 ) + 1−2q 4 I. As q ≤ 1 2 , 2q < 1, so we can consider the following strategy for A , B : With probability 2q (that is coordinated with shared randomness) they perform the strategy as described for ρ( 1 2 ) and with probability 1−2q 4 they return a uniformly random bit. Direct derivation gives us P(a, b|P, Q) = Tr[(P a ⊗ Q b )ρ(q)] = 2q · Tr[(P a ⊗ Q b )ρ( 1 2 )] + 1−2q 4 . This matches the distribution of the outputs produced by A and B , as with probability 2q they reproduce the distribution associated with measuring ρ( 1 2 ) and return uniformly random bits with probability 1 − 2q. The following family of entangled states will play an important role. Proof. The proof follows a similar strategy to the one of Lemma 2. 
We write ρ 0 (q) as a mixture of the Werner state ρ( 1 2 ) and a separable state. Now B does the following: with probability 2q, he acts as described in Lemma 2 for the case of q = 1 2 , and with probability 1 − 2q outputs a random bit. Alice acts as follows: with probability 2q she acts as in Lemma 2, with probability 1 − 3q measures P on |0 0| and with probability q measures P on |− −|. Proof. The proof is given in Hirsch et al. [2013]. Interestingly the proof strategy is to give a 2round protocol for certifying non-locality of ρ * . The statement follows as 2-round non-locality implies entanglement. Lemma 5 (Hirsh's model). For q ≤ 1 2 define a state where σ A,B are arbitrary 2-dimensional states and ρ A,B = Tr B,A (ρ 0 (q)). Then ρ * is local for all POVMs. Moreover, it is local via Algorithm 1 and 2. Proof. Direct computation gives us that We consider cases depending on whether A returned in (i) line 8 or (ii) line 11 of Algorithm 1 and whether B returned in (i) line 7 or (ii) line 10 of Algorithm 2. Algorithm 1 Alice's Simulation If both return in (i) then, by Lemma 3, the simulation recovers the statistics of ρ 0 , the probability is equal to ηxξy 4 Tr[(P x ⊗ Q y )ρ 0 ], which is equal to the first term of (10). If A return in (i) and B returns in (ii) then the probability is 1 The last case is when both return in (ii) and the probability is then 1 Summing all the terms we arrive at (10). Having established some helpful lemmas about dot product computation in PostBQP and ways to simulate some entangled states locally we are ready to prove Theorem 1. We restate it here for convenience. Proof. We will prove the contraposition of the statement, i.e. BQP = PostBQP ⇒ ENT = NL. We will give an example of H A , H B , ρ AB ∈ S(H A ⊗ H B ) and show that for sufficiently small δ, for all k ∈ N and all G(ρ AB , k, ·) there exists n ∈ N such that one of the two requirements from Definition 2 does not hold. Local model. Define a state Note that it is a special case of the state from Lemma 4 for q = 1 3 . By Lemma 5 we know that ρ * is local via Algorithm 1 and 2. Assume towards contradiction that ρ * is nont-PostBQP-local. This means that for all sufficiently small δ 0 there exists k ∈ N, a game G(ρ * , δ 0 , k, ·) and A BQP , B BQP that satisfy conditions of Definition 1. Fix a polynomial p that certifies that and sufficiently small δ 0 . By our modelling we can assume that A BQP has a corresponding family of circuits {C A n } and so does B BQP with {C B n }. Algorithm 2 Bob's Simulation . 2: Sample y according to ξ y /2 3: Sample b 1 ∼ Ber( λ| Q y |λ ) 4: Sample b 2 ∼ Ber( 1 2 ) 5: Pick b from {b 1 , b 2 } with corresponding probabilities 2q and 1 − 2q 6: if b = 1 then 7: Return y 8: else 9: Sample y according to Tr[B y σ B ] 10: Return y 11: end if Circuits and POVMs. Circuits C A n and C B n implicitly define POVMs A, B. We will express these POVMs in terms of the unitaries defining the circuits. For the sake of simplicity of notation, we assume that a, b = 0 p(n) -the proof for other a's and b's is analogous. Let U A n , U B n be the unitaries corresponding to C A n and C B n respectively. We have that for every With slight abuse of notation we can write: The POVM element A x can be thus seen as, Let |φ x be an un-normalized vector where we treat |φ x as an element of Ω(C 2 ). Then we have that This is a direct consequence of the observation that: plugging in A x from equation (11) we get to (13). Computation in PostBQP. 
Now we want to design A , B that simulate answers of A BQP , B BQP to high precision. We will focus on A, as the proof for B is analogous. Note that to implement Algorithm 1 (the situation is analogous for Algorithm 2) it is enough to be able to 1. sample x according to η x /2, 2. compute λ| P x |λ = |(α x 0| + β x 1|) |λ | 2 , 0| P x |0 , −| P x |− to high precision. We will analyze the precision needed and it's influence on the final result later in the proof, For |λ ∈ Ω(C 2 ) represented as λ 0 |0 + λ 1 |1 , where λ 0 , λ 1 ∈ C, define the representation of |λ by a state with only real amplitudes as λ real := Re(λ 0 ) |00 + Im(λ 0 ) |01 + Re(λ 1 ) |10 + , where |λ ∼ ω(C 2 ). To implement: 1. A runs the circuit on |0 |0 q(n)−1 with probability 1 2 and on |1 |0 q(n)−1 with probability 1 2 , and then measures all the qubits to obtain x. The result is equivalent to running the circuit on a qubit with a density matrix ρ = I 2 . By definition, x is distributed according to η x /2. This step is thus in BQP. (12) and (13) the eigenvector of P x is defined by the amplitudes in front of |0 |0 q(n)−1 and |1 |0 q(n)−1 after the circuit is run backwards on |x , i.e. after applying (U A n ) † on |x . Let the eigenvector of P x be expressed as According to Observe that we can compute the state α x |0 + β x |1 in PostBQP. To do that run the circuit backwards on |x . Now we want to postselect on all qubits but the first being |0 . We apply a negation gate on all these qubits, then compute an AND and write the result to a fresh ancilla. At the end, we postselect on this ancilla qubit being |1 . The result is a state α x |0 + β x |1 . This step crucially uses the ability to postselect and is the main bottleneck to implementing this simulation in a lower complexity class. 3. A runs C A n on |0 |0 q(n)−1 and measures all the qubits to obtain x . By definition x is distributed according to Tr[A x |0 0|]. An analogous algorithm is implemented by B . Approximation errors. Now we can bound the difference in answer statistics between A , B and A BQP , B BQP that arises from errors coming from (17), (18) and failure events. By assumption |λ ∼ ω(C 2 ), so with probability 1 − 2 −9q(n) Then if (17) holds then the bit b 1 computed by A is correct. By the union bound over failure events (17), (18) and (19) we have that A answer differs from Algorithm 1 with probability at most 3 · 2 −9q(n) . A similar argument holds for B . By the union bound over A and B we obtain P[A BQP returns x and B BQP returns y]−P[A returns x and B returns y] ≤ 6·2 −9q(n) ≤ 2 −8q(n) . Final step. Now we arrive at a contradiction. The game is played k rounds, which means that V collects k samples. By (20) and the properties of the TV-distance we know that these k samples are from a distribution that is k · 2 −8q(n) away in TV-distance from a distribution corresponding to A BQP , B BQP . As there are 2 2q(n) different pairs of answers for A, B then by the properties of the TV-distance the probability that V will distinguish the two distributions is at most which implies, as per completeness in Definition 2 (where it is required that V accepts A BQP , B BQP with high probability) that the interaction is accepted with probability at least This is a contradiction as V should, as per soundness in Definition 2, accept the interaction with A , B with probability at most 1 − δ 0 − 1 p(n) but it does it with at least (21). 
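As a numerical sanity check on the kind of local model invoked in the proof (Werner's projective model of Lemma 2 at q = 1/2), the following Monte Carlo sketch compares the quantum probability Tr[(P ⊗ Q)ρ(1/2)] of the outcome pair (1, 1) with the shared-randomness simulation in which Alice outputs 1 iff ⟨λ|P|λ⟩ < 1/2 and Bob outputs 1 with probability ⟨λ|Q|λ⟩. The measurement directions and the sample count are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def proj(n):
    # Projector onto the +1 eigenspace of n . sigma
    return 0.5 * (np.eye(2, dtype=complex) + sum(c * s for c, s in zip(n, pauli)))

def random_direction():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

# Werner state rho(q) with q = 1/2
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = 0.5 * np.outer(psi_minus, psi_minus.conj()) + 0.5 * np.eye(4) / 4

a_dir, b_dir = random_direction(), random_direction()
P, Q = proj(a_dir), proj(b_dir)
p_quantum = np.real(np.trace(np.kron(P, Q) @ rho))   # quantum probability of outcome (1, 1)

# Local model: shared lambda uniform on the sphere
N, hits = 100_000, 0
for _ in range(N):
    lam = random_direction()
    alice_one = np.dot(a_dir, lam) < 0.0             # <lam|P|lam> = (1 + a.lam)/2 < 1/2
    bob_prob = 0.5 * (1.0 + np.dot(b_dir, lam))      # <lam|Q|lam>
    if alice_one and rng.random() < bob_prob:
        hits += 1
print(p_quantum, hits / N)                            # the two numbers agree up to sampling error
```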
B.1 The Buscemi Experiment and Entanglement Witnesses In this section we introduce the semi-quantum game from Buscemi [2012] that certifies entanglement of all entangled states in a model, where quantum communication is allowed. To describe the protocol we first define what an entanglement witness is. Definition 5 (Entanglement witness). For an entangled state ρ ∈ S(H A ⊗ H B ), an entanglement witness W ρ with a parameter 10 η > 0 is a Hermitian operator acting on H A ⊗ H B such that, Entanglement witnesses were considered in Horodecki et al. [1996] as one of possible criteria to distinguish entangled and separable states. As the set of separable states is convex, all entangled states have entanglement witnesses. Any witness W ρ can be rewritten as where τ s , ω t are the projectors onto the corresponding state in {|0 , |1 , |+ , |− , |i , |−i }. Let's describe the Buscemi game. The input to each one of A and B is a single qubit quantum state, chosen uniformly at random from the set {|0 , |1 , In the game the verifier estimates the value of a score function I, After several runs of the experiment, the verifier computes an estimate of the score functionÎ and claims that the parties held an entangled state ifÎ was negative. Honest provers, sharing ρ, can project their input together with their share of ρ on a |φ + . This would yield I = Tr[W ρ ρ]/4 < −η/4. Moreover, it can be proven that for any separable state and any two provers having access to it, E[Î] = I > η. Hence, by repeating the process, the verifier can distinguish the two cases with a probability arbitrarily close to 1. B.2 Proof of Theorem 2 Proof. Let P be a single round EDQC, guaranteeing soundness against all provers in BQP. We show that this implies that every entangled state is NeL. More formally let ρ AB ∈ S(C 2 ⊗ C 2 ) be an entangled state. For every sufficiently small δ we will show that there exists k ∈ N and a game G that satisfies Definition 1 for C = BQP. Let C be a quantum circuit acting on 4 qubits, with registers: A of 3 qubits and Q of 1 qubit. To every x ∈ {0, 1} 3 we associate τ x ∈ {|0 , |1 , |+ , |− , |i , |−i } such that the association is surjective. For x ∈ {0, 1} 3 and ρ Q ∈ S(C 2 ) the circuit works as follows, U C |x A ρ Q first creates τ x out of x and then returns the result of the Bell measurement on τ x ⊗ ρ Q . Equivalently its action can be expressed as a projection onto |φ + = 1 √ 2 (|00 + |11 ), i.e. probability of returning 1 is equal Game. Let δ ∈ (0, 1) and let W be an entanglement witness of ρ AB with parameter η . As discussed in Section B.1 one can express the witness as: for some real coefficients β x,y . 11 We define k = O log(1/δ) η minx,y |βx,y| . The game proceeds as follows: in each repetition V samples x, y ∼ U ({0, 1} 3 ) and then proceeds with running two independent copies of P(C, x), P(C, y) with A and B respectively, collects the answers, i.e. bits b A , b B . At the end V computes statistics P(b A = 1, b B = 1 | τ x , ω y ) and a score function corresponding to Ŵ Finally, V declares that A, B held an entangled state if and only ifÎ < 0. This can be seen as compiling the semi-quantum game from Buscemi [2012] into a bell-like game with the help of the EDQC P. Completeness. Assume the parties have access to ρ AB , i.e. mode (i). By definition of P there exist A, B ∈ BQP( ) satisfying the completeness property of Definition 3. We claim that if they run their protocol on their respective shares of ρ AB then they certify completeness of Definition 2. 
This strategy yields the following idealized (assuming perfect statistics) value: where in the crucial second equality we used the following. To compute P[b A = 1, b B = 1 | x, y] quantum mechanics allows as to think that first A performs the measurement and then B performs his measurement on the post measurement state after actions of A. By the second property of completeness we have that for every s, b A , i.e. the bit collected by V from A is distributed according to Tr ]. Next, the third property of completeness guarantees that the postmeasurement state of B's share of ρ AB is equal to the post measurement state of performing the Bell measurement on τ x ⊗ ρ A conditioned on obtaining outcome b A = 1. This means that B's state is . 11 Observe that in expression (24) there are different pairs (x, y) that map to the same pair of states. It is because |{0, 1} 3 | = 8 but |{|0 , |1 , |+ , |− , |i , |−i }| = 6. This is inconsequential. Thus the overall probability is exactly where we used the properties of partial trace in the last equality. From (25), setting of k = O log(1/δ) η mins,t |βs,t| and a standard application of the Chernoff bound we get that with probability 1 − δ we have |Î − I| < η/4, henceÎ ≤ 0 and thus the interaction is accepted. Soundness. Let Enc be the V's deterministic algorithm for generating q that takes as input x ∈ {0, 1} 3 and randomness r ∈ {0, 1} poly( ) , i.e. q = Enc(x, r) and similarly Dec be the V's deterministic algorithm for generating b, i.e. b = Dec(x, r, a). Let A, B ∈ BQP( ) be run in mode (ii), i.e. access to a separable state σ AB = ∞ k=1 p k σ (k) . 12 Assume towards contradiction that there exists a negligible function negl such that V accepts with probability at least 1 − δ − negl( ). Let be large enough so that negl( ) ≤ δ. Then V accepts with probability 1 − 2δ. For simplicity of notation denote the size of the questions and answers in the protocol as n = n( ). For every q ∈ {0, 1} n let {A a (q)} a∈{0,1} n be the effective POVM acting on H A that defines A's actions. Similarly for every q ∈ {0, 1} n we define {B a (q )} a ∈{0,1} n as the effective POVM acting on H B for B. Also let {C b } b∈{0,1} be the effective POVM of circuit C acting on k qubits. Denote by R the length of the randomness r used in ENC. We express the probability of b A = 1, b B = 1. Tr B a (ENC(y, r ))σ Now, if V accepts the whole interaction with probability 1 − 2δ then in particular in a single round V accepts the delegation part (with A and B) of the protocol with probability at least 1 − 2δ. Thus by the Markov inequality there exists a subset S ⊆ N such that k∈S p k ≥ 1 − 2δ and for every k ∈ S, A's circuit, when given σ (k) A succeeds in P with probability 1 − 4δ for 12 We allow σA to be a convex combination of infinitely many product states. every x. Thus if δ is sufficiently small then for every k ∈ S soundness of P holds, which implies that there exists a 1-qubit density matrix ρ k Q ∈ S(C 2 ) such that 2 −R r a:DEC(x,r,a)=1 Tr A a (ENC(x, r))σ and crucially the same ρ k Q can be taken for all x. Similar argument holds for B. Now we can use (27) in (26). We split N into two groups, S and N \ S. For k ∈ S we use (27) and for k ∈ N \ S we bound 2 −R r a:DEC(x,r,a)=1 Tr A a (ENC(x, r))σ (k) A − Tr C 1 (|x ⊗ ρ k Q ) by 1. The same operation is performed for B. 
The result is C Generalization of Theorem 1 In our proof of Theorem 1 we effectively take any two circuits C A and C B and some state ρ AB and simulate the statistics generated by them in mode (i) with two PostBQP circuits of size poly(|C A |) and poly(|C B |), where | · | denotes the size of the circuit, i.e. number of gates plus the number of qubits on which the circuit operates. This means that we can show a more general result. What does more general mean? Even when one agrees with the quantum Church-Turing hypothesis, the choice of the BQP class in the completeness part of Definition 2 is somewhat arbitrary. In the following, we define nonlocality slightly differently. Honest A, B still apply some quantum circuit on their input and their share of ρ AB but their circuits are no longer limited to be of polynomial size with respect to the question size, i.e. polynomial in n. Next, we say that a game is sound if no cheating provers with polynomially bigger circuits can fool the verifier. Where the parameter is now the size of the honest circuit. To summarize, we still assume the quantum Church-Turing hypothesis but we don't impose any restrictions on the sizes of honest circuits. More formally Definition 6. (not-efficiently-local) For a state ρ AB ∈ S(H A ⊗ H B ) we say that ρ AB is not-efficiently-local if for every sufficiently small δ there exists k ∈ N, game G(ρ AB , k, ·) and provers A honest (·), B honest (·) such that for every polynomial p(·) there exists a polynomial q(·) such that for every n ∈ N 1. (Completeness) If G(ρ AB , k, n) was run in mode (i) with A honest (n), B honest (n) then P[V accepts] ≥ 1 − δ. 2. (Soundness) For every A, B such that |A| ≤ p(|A honest (n)|), |B| ≤ p(|B honest (n)|) if G(ρ AB , k, n) was run in mode (ii) with A, B then With this new definition, our proof directly implies that where the NeL is understood as in Definition 6.
2023-03-06T06:42:05.130Z
2023-03-03T00:00:00.000
{ "year": 2023, "sha1": "397a2bc56bad1a4451a50d1f6a112ed86a9e3c8b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6fc02fb22878994a7a07e3a237751c02dbdcfdee", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
216355705
pes2o/s2orc
v3-fos-license
Temporal and Spatial Distribution Characteristics of Air Pollution and the Influence of Energy Structure in the Guangdong-Hong Kong-Macao Greater Bay Area
Based on panel data for 11 cities in the GBA from 2010 to 2016, this paper uses spatial data analysis to study the temporal and spatial distribution characteristics of air pollution and the impact of energy structure on air pollution in the GBA. The results show that the more serious air pollution is mainly concentrated in Zhaoqing, Foshan, Jiangmen, and similar cities, and that air pollution is spatially correlated, with obvious "path dependence" characteristics in its spatial distribution. Secondly, the energy structure coefficient is positive, meaning that when coal consumption increases by 1%, the air pollution index rises by 0.1381%. At the same time, a "U"-shaped relationship exists between per capita GDP and air pollution.
Introduction
The Opinions about a More Effective Regional Coordination and Development Mechanism issued by the Central Committee of the Communist Party of China and the State Council clearly point out that Hong Kong, Macao, Guangzhou, and Shenzhen are the central cities that should lead the construction of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) and promote the green development of the Pearl River Delta region. However, with economic and social development, the problem of air pollution has become more prominent; in particular, PM10 pollution, as its main representative, has grown increasingly serious. To address the air pollution problem, spatial effects and the impact of energy structure on air pollution must be considered. Therefore, this paper studies the temporal and spatial distribution characteristics of air pollution and the impact of energy structure on air pollution in the GBA.
The relationship between energy consumption and air pollution has long been a research focus, and the existing literature has examined it from several angles. Auffhammer and Carson (2008) [1] use regional exhaust emissions as a proxy for carbon dioxide emissions and verify the existence of a CO2 Environmental Kuznets Curve (EKC). Subsequently, based on six Central American countries, Apergis and Payne (2009) [2] find a one-way causal relationship from energy consumption to environmental pollution in the short term. Acaravci and Ozturk (2010) [3] also construct a distributed lag model for energy consumption, carbon dioxide, and economic growth, confirming the existence of an EKC in Germany and Italy. Katircioglu (2014) [4] analyzes the impact of international tourism and energy consumption on local environmental pollution in Turkey and finds a long-term equilibrium relationship between tourism, energy consumption, and carbon dioxide emissions. Bastola and Sapkota (2015) [5] also use a time series model to analyze the causal relationship between energy consumption and environmental pollution in Nepal, showing a long-term two-way causal relationship between energy consumption and carbon emissions, with energy consumption leading to an increase in carbon emissions. In addition, the literature mainly uses CO2 and SO2 to measure air pollution, while PM10 is less often used as an air pollution indicator. The spatial impact of energy structure on air pollution in the GBA therefore remains worth investigating.
Meanwhile, the GBA is a highly urbanized city cluster, with an average inter-city distance shorter than 10 km, which makes the prevention and control of transboundary air pollution a severe challenge [6].
Variables and data
Because panel data provide a larger sample size, which helps control for heteroscedasticity between regions and for bias caused by omitted variables, this study uses panel data for the 11 GBA cities from 2010 to 2016 as the research subject. Air pollution is measured by the annual average PM10 concentration, and the energy structure indicator is measured by the proportion of coal in total energy consumption. Meanwhile, based on the EKC, this paper selects real per capita GDP and its square as explanatory variables representing the level of economic development. The data are derived from the relevant statistical yearbooks and the Guangdong-Hong Kong-Macao Pearl River Delta Regional Air Quality Monitoring Network.
Spatial models construction
Based on the EKC hypothesis proposed by Grossman and Krueger (1991) [7], which posits an inverted U-shaped relationship between economic development and environmental pollution, we construct spatial panel models. Following the EKC theory and the general equilibrium model advanced by Antweiler et al. (2001) [8], the basic model is:

lnAP_it = α_0 + α_1·lnPGDP_it + α_2·(lnPGDP_it)^2 + α_3·ES_it + ε_it    (1)

According to Anselin (1995) [9], we construct two kinds of spatial panel models, the Spatial Lag Model (SLM) and the Spatial Error Model (SEM), to explore the effect of energy structure on air pollution in the GBA from 2010 to 2016:

SLM:  lnAP_it = α_0 + ρ·Σ_{j=1}^{n} W_ij·lnAP_jt + α_1·lnPGDP_it + α_2·(lnPGDP_it)^2 + α_3·ES_it + ε_it    (2)

SEM:  lnAP_it = α_0 + α_1·lnPGDP_it + α_2·(lnPGDP_it)^2 + α_3·ES_it + μ_it,  with  μ_it = λ·Σ_{j=1}^{n} W_ij·μ_jt + ε_it    (3)

where i and t denote city and year, respectively; AP is the annual average PM10 concentration; PGDP is per capita GDP; ES is the energy structure; Σ_{j=1}^{n} W_ij·lnAP_jt is the spatial lag variable; ρ is the spatial lag coefficient, capturing spatial spillover effects from neighboring cities to the local city; W is the spatial weight matrix; ε is the random error vector; λ is the spatial error coefficient; and α_0 is the individual fixed effect.
Table 2 lists the air pollution levels of the 11 GBA cities over the period 2010 to 2016. It shows a downward trend in air pollution from 2010 to 2016. Air pollution in Foshan is the most serious, reaching a maximum of 93 ug/m3 in 2010, with values above 53 ug/m3 in all years. It is followed by Zhaoqing, where PM10 values are above 57 ug/m3 in most years and reach a maximum of 77 ug/m3 in 2010 and 2011. Air pollution in Zhuhai, Shenzhen, and Hong Kong is less serious than in Foshan, Zhaoqing, Dongguan, Jiangmen, and similar cities. Therefore, cross-regional environmental governance is an urgent task.

Table 2. Annual average PM10 concentrations (ug/m3) of the 11 GBA cities, 2010-2016.
City        2010  2011  2012  2013  2014  2015  2016
Dongguan      65    69    59    62    60    52    51
Foshan        93    86    76    76    67    56    53
Guangzhou     66    62    62    62    57    52    50
Huizhou       60    67    54    61    55    44    42
Jiangmen      58    56    58    75    64    55    53
Shenzhen      58    56    48    57    52    46    39
Zhaoqing      77    77    54    76    74    58    57
Zhongshan     65    66    58    67    51    47    42
Zhuhai        60    52    49    60    49    43    40
Hong Kong     48    53    45    50    45    42    34
Macao         56    59    53    54    53    52    46

Spatial estimation test of energy structure and air pollution in GBA
Before performing the spatial econometric analysis, an appropriate spatial model needs to be selected, following the model selection decision rules proposed by Anselin (2005) [10]; an illustrative construction of the spatial weight matrix and the spatial lag term appearing in models (2)-(3) is sketched below.
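As a small illustration of the objects entering models (2)-(3), the following sketch builds a row-standardized weight matrix W from a hypothetical contiguity structure for four GBA cities (the paper does not report its actual W, so the links here are assumptions) and computes the spatial lag term Σ_j W_ij·lnAP_jt using the 2016 PM10 values from Table 2.

import numpy as np

# Hypothetical contiguity links between four GBA cities, for illustration only;
# the paper does not report its spatial weight matrix.
cities = ["Guangzhou", "Foshan", "Zhaoqing", "Shenzhen"]
adjacency = np.array([
    [0, 1, 0, 1],   # Guangzhou: assumed links to Foshan and Shenzhen
    [1, 0, 1, 0],   # Foshan: assumed links to Guangzhou and Zhaoqing
    [0, 1, 0, 0],   # Zhaoqing: assumed link to Foshan
    [1, 0, 0, 0],   # Shenzhen: assumed link to Guangzhou
], dtype=float)

# Row-standardize so each row of W sums to one, as is customary for spatial weights.
W = adjacency / adjacency.sum(axis=1, keepdims=True)

# 2016 annual average PM10 (ug/m3) for these cities, taken from Table 2.
AP_2016 = np.array([50.0, 53.0, 57.0, 39.0])
ln_AP = np.log(AP_2016)

# Spatial lag term sum_j W_ij * lnAP_jt that enters the SLM in equation (2).
spatial_lag = W @ ln_AP
for city, value in zip(cities, spatial_lag):
    print(f"{city:<10s} spatially lagged lnAP = {value:.3f}")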
Firstly, the spatial correlation of the dependent variable is assessed with Moran's I; then the LM-Lag and LM-Error statistics are examined to determine which model is more suitable. If the LM-Lag and LM-Error statistics are similar and both significant, the Robust LM-Lag and Robust LM-Error statistics are examined further. Table 4 shows that Moran's I is 0.4596 and significant at the 1% level, indicating that air pollution in the GBA is spatially correlated; OLS estimates that ignore spatial effects are therefore biased and inconsistent. At the same time, the LM-Lag and LM-Error statistics are 30.2499 and 20.4127, respectively, both significant at the 1% level. The Robust LM-Lag statistic is 16.6663 and passes the 1% significance test, and the Robust LM-Error statistic also passes at the 1% level. Therefore, the Spatial Durbin Model is more suitable. To overcome the bias of traditional OLS regression, which does not address this endogeneity, this paper uses the Spatial Durbin Model, estimated in Matlab, to examine the effect of energy structure on air pollution in the GBA; the estimation results are shown in Table 5. Table 5 shows that the coefficient of energy structure is positive and significant at the 1% level, meaning that when the coal consumption ratio increases by 1%, the air pollution index increases by 0.1381%. On the one hand, the energy structure index directly reflects the industrial energy consumption structure; on the other hand, it indirectly reflects the overall energy consumption structure, indicating that both are positively and strongly correlated with air pollution. The coefficient of per capita GDP is -1.2066 and the coefficient of its square is 0.0578, which indicates a "U"-shaped relationship between per capita GDP and air pollution: as per capita GDP increases, the air pollution index first falls and, after reaching a certain threshold, begins to rise. To interpret the spatial lag term of the Spatial Durbin Model more specifically, this paper decomposes the total spatial spillover effect into a direct effect and an indirect effect using the partial differential method; the direct effect captures the influence of an explanatory variable on air pollution within a city, while the indirect effect captures its influence on air pollution in other cities. In terms of the direct effect, energy structure has an impact coefficient of 0.1396 on air pollution, significant at the 1% level, indicating that energy structure has a significant effect on air pollution. For the indirect effect, the spatial spillover of energy structure is negative, indicating that the energy structure of one city has an inhibitory effect on air pollution in other cities. Meanwhile, the direct effect of per capita GDP presents a "U"-shaped relationship, and the indirect effect presents an inverted "U"-shaped relationship. (A numerical illustration of Moran's I and of the turning point implied by the estimated coefficients is sketched below.)
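The model-selection statistic and the "U"-shaped relationship reported above can be illustrated numerically. The sketch below computes a global Moran's I with a permutation p-value for the 2016 PM10 values of Table 2, using a hypothetical row-standardized weight matrix (the paper's actual W is not reported, so the result will not reproduce the reported 0.4596), and then computes the turning point implied by the reported coefficients of lnPGDP (-1.2066) and its square (0.0578).

import numpy as np

rng = np.random.default_rng(1)

def morans_i(values, W):
    # Global Moran's I: I = (n / S0) * (z' W z) / (z' z), with z the centred values.
    z = values - values.mean()
    s0 = W.sum()
    n = len(values)
    return (n / s0) * (z @ W @ z) / (z @ z)

# 2016 annual average PM10 (ug/m3) from Table 2, in this city order.
cities = ["Dongguan", "Foshan", "Guangzhou", "Huizhou", "Jiangmen", "Shenzhen",
          "Zhaoqing", "Zhongshan", "Zhuhai", "Hong Kong", "Macao"]
pm10_2016 = np.array([51, 53, 50, 42, 53, 39, 57, 42, 40, 34, 46], dtype=float)

# Hypothetical symmetric contiguity links (illustration only, not the paper's W).
links = [(0, 2), (0, 3), (0, 5), (1, 2), (1, 4), (1, 6), (1, 7), (2, 3), (2, 6),
         (4, 7), (4, 8), (5, 9), (7, 8), (8, 10)]
A = np.zeros((11, 11))
for i, j in links:
    A[i, j] = A[j, i] = 1.0
W = A / A.sum(axis=1, keepdims=True)      # row-standardized weight matrix

I_obs = morans_i(pm10_2016, W)

# Permutation test: reshuffle city labels to approximate the null distribution of I.
perm = np.array([morans_i(rng.permutation(pm10_2016), W) for _ in range(9999)])
p_value = (np.sum(perm >= I_obs) + 1) / (len(perm) + 1)
print(f"Moran's I = {I_obs:.4f}, one-sided permutation p = {p_value:.4f}")

# Turning point of the "U"-shaped relationship implied by the reported coefficients:
# d lnAP / d lnPGDP = a1 + 2 * a2 * lnPGDP = 0.
a1, a2 = -1.2066, 0.0578
ln_pgdp_star = -a1 / (2 * a2)
print(f"turning point: lnPGDP* = {ln_pgdp_star:.2f}, "
      f"PGDP* = {np.exp(ln_pgdp_star):.0f} (in the units used for per capita GDP)")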
Conclusions
Based on panel data for 11 cities in the GBA from 2010 to 2016, this paper uses spatial data analysis to study the temporal and spatial distribution characteristics of air pollution. The results show that the more serious air pollution is mainly concentrated in Zhaoqing, Foshan, Jiangmen, and similar cities, and that air pollution is spatially correlated, with obvious "path dependence" characteristics in its spatial distribution. Secondly, this paper further tests the spatial impact of energy structure on air pollution in the GBA and finds that the energy structure coefficient is positive, which means that when coal consumption increases by 1%, the air pollution index rises by 0.1381%. At the same time, the coefficient of per capita GDP is negative and the coefficient of its square is positive, which indicates a "U"-shaped relationship between per capita GDP and air pollution. Based on the above conclusions, the following suggestions are proposed. From a short-term perspective, air pollution reduction and effective
2020-04-02T09:38:14.785Z
2020-03-21T00:00:00.000
{ "year": 2020, "sha1": "63452935a2e675a0090e94902b45f49c5cf0ac3b", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/446/3/032029", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bc79f0aab87ae71d57638bf8f1cd33e7fbab54cf", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
157057689
pes2o/s2orc
v3-fos-license
Effect of Ground Transportation on Adrenocortical Activity in Prepuberal Female Mice from Five Different Genetic Backgrounds Simple Summary For research purposes, mice are often transported between institutions, which may elicit stress, thereby influencing results. We determined adrenocortical activity by measuring fecal corticosterone metabolites (FCMs), as a stress marker, in prepuberal mice from five genetic backgrounds, namely C57BL/6J, C57BL/6NCrl, FVB/NCrl, Crl:CD1(ICR), and BALB/cAnCrl. Only C57BL/6N showed significantly higher FCM levels the day after transport, but baseline levels were attained within four days. Abstract Specific experimental protocols necessitate transportation, a potentially stressful event that could confound results. We determined adrenocortical activity by measuring fecal corticosterone metabolites (FCMs), as a stress marker, in prepuberal (three-week old) female C57BL/6J, C57BL/6NCrl, FVB/NCrl, Crl:CD1(ICR), and BALB/cAnCrl mice. On each transport day, five female cage mates per genetic background were weaned and transported in stable groups via truck from the breeding to the research facility. Fecal pellets were collected on Days 0, 1, and 4. Mice were superovulated for embryo production to determine if repeated fecal collection impacts this procedure. The average duration of transportation over 600 km and from packing to unpacking of mice was 7.24 and 22.62 h, respectively. FCM levels increased from Day 0 to Day 1 and decreased on Day 4 in all genetic backgrounds except in FVB/NCrl, but only B6N showed significantly higher FCM levels on Day 1. Furthermore, embryo production was not affected by repeated feces collection. The results show that weaning and immediate transport of prepuberal mice from the breeding to the research facility led to temporal and genetic background-dependent increases of adrenocortical activity in four of the five genetic backgrounds investigated, which returned to baseline levels within four days. Introduction Concomitant with the high demand for genetically engineered mice for biomedical research, transportation of mice or their genetic material in the form of oocytes, embryos, or spermatozoa is indispensable. Transportation of live mice is executed either by air and/or via ground transportation. Moreover, specific experimental protocols such as superovulation of oocyte or embryo donors necessitate transportation of prepuberal mice immediately after weaning. When weaned by commercial suppliers, mice are separated from their cage mates and dam and may even be re-grouped with animals from other cages. During transportation and upon receipt, they are exposed further to a different environment, which may include vibrations, sounds, disruption of the dark/light regime, temperature, humidity, feed, bedding, water, odors, unfamiliar animal caretakers, and a new social structure. These factors may induce stress as a cumulative burden, possibly affecting animal welfare and confounding research results. Therefore, an adaptation period to the new housing conditions is recommended [1]. To evaluate stress in animals, a number of parameters can be used. One of these is the measurement of fecal corticosterone metabolites (FCMs) since corticosterone is the major glucocorticoid in mice and its metabolites are excreted primarily via feces [2,3]. Thus, this approach is non-invasive and does not elicit a stress response, in contrast to invasive methods such as blood sampling [4]. 
In the few reports available on transportation of mice, body weight and immune responses decreased while plasma corticosterone levels increased up to at least 48 h after 18-48 h of transport [5][6][7][8]. For assisted reproductive technological work at the Center for Molecular Medicine, University of Cologne (CMMC), prepuberal female mice are imported from the commercial supplier and transported overnight over a distance of approximately 600 km. The hormone administration for superovulation begins three days later. Since the degree and time of disturbance of homeostasis after transportation of prepuberal mice is unknown, we non-invasively determined induced stress to the animals by measuring adrenocortical activity over a five-day period in mice of five different genetic backgrounds that were transported from the breeding to the research facility. Furthermore, mice were superovulated and two-cell embryos were collected to determine if additional handling of mice due to repeated collection of fecal samples affects embryo production. Mice Female inbred C57BL/6J (B6J), C57BL/6NCrl (B6N), BALB/cAnCrl (BALB/cN), and outbred Crl:CD1(ICR) (CD-1) mice were weaned at 21 days of age. Outbred FVB/NCrl (FVB/N) mice were weaned at 21 to 24 days. On each day of transport, five female cage mates from each genetic background were weaned together and then kept in stable groups throughout the study. At both facilities, mice were kept under standard husbandry conditions according to the Directive 2010/63/EU and were free from the Federation of European Laboratory Animal Science Associations (FELASA)-listed infectious agents. Breeding and Husbandry at Charles River Laboratories At Charles River Laboratories (CRL), mice were born and raised in three different barrier facilities (B6J: Barrier 1; B6N, BALB/cN, and FVB/N: Barrier 2, and CD-1: Barrier 3). They were kept in Type III open-top cages with autoclaved bedding (Ecopure7D, Datesand group, Manchester, UK) and paper tissue as nesting material at a temperature of 21 • C ± 0.5 • C, humidity of 50 ± 20%, an average of 16 air changes per hour in the room, and a 12/12-hour light/dark cycle (lights on at 6:00 a.m.). Mice were fed a standardized autoclaved pelleted mouse diet (VRF1 (P), SDS, Witham, Essex, UK) and given sterilized drinking water ad libitum. In mice from all genetic backgrounds, harem matings were practiced with cross-fostering of 20 to 50 pups per three to five dams. In the present study, for each genetic background, five prepuberal female mice from the same breeding cage were randomly selected and separated from their dams and kept in Type III cages for up to 1 h during which voided fecal pellets were collected (Day 0). The work done in Barriers 1, 2, and 3 at CRL was performed by three, one, and three different caretakers, respectively. They were aware of the genetic background of the mice they were handling. Transport of Mice After fecal pellet collection, each group of mice was packed in a filtered transportation crate (R/M-Karton, 630 × 420 × 170 mm, Krug, Bad Königshofen, Figure 1A), which contained bedding (Ecopure7D, Datesand group, Manchester, UK), nesting material (Zellstoffwatte, LMS Consult, (R/M-Karton, 630 × 420 × 170 mm, Krug, Bad Königshofen, Figure 1A), which contained bedding (Ecopure7D, Datesand group, Manchester, UK), nesting material (Zellstoffwatte, LMS Consult, Brigachtal, Germany), feed (VRF1 (P), SDS, Witham, Essex, UK), and a source of water (Hydrogel, Clear H2O, Portland, OR, USA). 
A data logger (EBI 20-TH1 Xylem Analytics, Germany Sales GmbH & Co. KG, Ingolstadt, Germany) measuring temperature and relative humidity from packing to unpacking of mice was inserted in each crate (Figure 1B). Immediately thereafter, mice were transported from CRL, Sulzfeld to the CMMC research facility in air-conditioned vehicles (Figure 1C) but were not exposed to light during transport. The different steps involved in ground transportation are shown in Table 1. Upon arrival at the CMMC building (Day 1), mice were kept in their crates in a heating cabinet (Allentown, NJ, USA) at 20 to 24 °C because the technicians work in the laboratories in the CMMC building first. Afterward, the technicians transported the crates with the mice manually (held in the hands during a 2-min walk) to the quarantine of the CMMC, which is situated in a neighboring building. Both the heating cabinet and the quarantine are on ground level. Mice from each genetic background were transported on six different days in the period from February to April 2018. A total of 30 female mice from each of the genetic backgrounds (B6J, B6N, FVB/N, CD-1, and BALB/cN) were used in the present study.
Mouse Husbandry in the CMMC Quarantine
In the CMMC quarantine, each group of five mice was kept in Type II long individually ventilated cages (IVCs, Greenlines, Tecniplast, Buggugiate, Italy) with bedding (FS14, J. Rettenmaier and Söhne, Rosenberg, Germany), houses (Mouse Smart Homes, Datesand group, Manchester, UK), wooden gnawing sticks (J. Rettenmaier and Söhne), and nestlets (Arbocell, Rettenmaier and Söhne) (Figure 1D). They were kept at 20 to 24 °C, 50 to 70% relative humidity, 75 air changes per hour in the cages, and a 12/12-hour light/dark cycle (lights on at 6:00 a.m.). Mice were fed an autoclaved standardized mouse diet (1314P, Altromin, Lage, Germany) and given autoclaved drinking water ad libitum. No animal care activities were performed after mice were placed in their home cage on Day 1 to exclude confounding factors. The same two technicians performed all steps throughout the study. They were aware of the genetic background of the mice they were handling.
Fecal Pellet Collection
Voided fecal pellets were collected from the groups of mice temporarily kept on paper towels without bedding between 1:00 p.m. and 2:15 p.m. on Day 0 (at CRL) to obtain baseline values of stress hormones secreted 8 to 10 h before [3]. Feces were also collected at the same time on Day 1 and Day 4 (day of first hormone treatment described later) at the CMMC quarantine facility 24 h and 96 h later, respectively, to eliminate any influences of the circadian rhythm on hormone secretion. Samples were collected in microcentrifuge tubes and stored at −20 °C until analysis. To simulate routine conditions whereby mice are not handled after arrival at the CMMC quarantine until Day 4, fecal pellets were not collected on Day 2 and Day 3.
Analysis of Fecal Corticosterone Metabolites
Fecal samples were dried in an oven at 70 °C for 24 h, homogenized using a mortar and pestle to form a powder, and weighed. An amount of 50 mg was shaken with 1 mL of 80% methanol in an Eppendorf tube for 30 min on a vortex and centrifuged at 2500× g for 15 min. If this amount of fecal pellets was not available after 1 h of collection, the available amount was shaken with the corresponding amount of methanol (e.g., 40 mg of feces with 0.8 mL of methanol). The supernatant was transferred into a new Eppendorf tube and stored at −20 °C until analysis. FCMs were determined using a well-established 5α-pregnane-3β,11β,21-triol-20-one enzyme immunoassay (EIA) [2,3]. In total, 90 samples were analyzed: six groups of mice from each of the five genetic backgrounds, with fecal samples collected on three different days (Day 0, Day 1, and Day 4).
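The extraction step fixes a 50 mg : 1 mL ratio of dried feces to 80% methanol and scales the solvent volume when less material is available. A minimal helper sketch of that proportional adjustment is shown below; the function and variable names are ours, for illustration only, and do not come from the paper.

# Proportional solvent adjustment for FCM extraction (50 mg feces : 1 mL 80% methanol).
RATIO_ML_PER_MG = 1.0 / 50.0

def methanol_volume_ml(feces_mg: float) -> float:
    """Return the 80% methanol volume (mL) matching the available feces mass (mg)."""
    if feces_mg <= 0:
        raise ValueError("feces mass must be positive")
    return feces_mg * RATIO_ML_PER_MG

for mass in (50, 40, 35):   # sample masses mentioned in the study
    print(f"{mass} mg feces -> {methanol_volume_ml(mass):.2f} mL 80% methanol")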
Superovulation and in vivo Production of Two-Cell Embryos On Day 4, immediately after fecal pellet collection, females were induced to superovulate with an intraperitoneal injection of 5 IU equine chorionic gonadotropin (eCG, PMSG, Cat-Nr.: OPPA01037, Hölzel Diagnostika Handels GmbH, Cologne, Germany) and an intraperitoneal injection of 5 IU human chorionic gonadotropin (hCG, Ovogest, Cat-Nr: 707184, Intervet, Unterschleissheim, Germany) given 48 h apart between 2:00 p.m. and 3:00 p.m. Immediately after the hCG injection (Day 6), females were paired with males, which were at least 12 weeks old. After cervical dislocation of the females on Day 8, Day 1.5 embryos (two-cell) were flushed from the excised oviducts of all females using M2 medium (Sigma), as previously described [9]. Morphologically intact two-cell embryos with two blastomeres of approximately the same size, homogeneous cytoplasm, intact zonae pellucida, and neither blebbing nor fragmenting were selected using a stereo microscope (Zeiss, Jena, Germany) under 400× magnification. For each group of five mice, embryos were collected in the same petri dish. Animal Welfare B6J, B6N, FVB/N, CD-1, and BALB/cN were used in the present study because most genetically modified lines in the CMMC facility are kept on these genetic backgrounds. Mice from these genetic backgrounds which are bred in-house at the CMMC are re-stocked every two years with embryo donors from approved sources to prevent genetic drift. Only animals used in this context were evaluated. As such, the work described here did not warrant a special license, as it was in compliance with the German Animal Welfare Act, and met the standards of the Animal Research: Reporting In Vivo Experiments (ARRIVE) guidelines [10]. Furthermore, the same technicians performed all steps at the CMMC quarantine throughout the study. Non-invasive sample collection, as well as group-housing, was implemented throughout the study to reduce stress and to enhance animal welfare. Statistical Analysis FCM data are presented as means ± standard error of the mean (SEM)/50 mg of feces. Differences in FCM were analyzed using a two-way ANOVA (for day and genetic background) and post hoc pair-wise t-tests. The numbers of embryos per group of five females from the five different genetic backgrounds were compared employing the Kruskal-Wallis Test, which is a robust non-parametric one-way ANOVA. On that basis, p-values ≤ 0.05 were considered statistically significant. The statistical programming software package SAS 9.4 was employed (SAS Institute Inc: SAS/STAT User's Guide, Version 9.4, SAS Institute Inc.: Cary, NC, USA, 2014). Table 1 shows the different stages of the transportation process from the breeding to the quarantine facility in the chronological order in which they were performed. The average duration of transport was 7.24 ± 0.19 h, while the average time from packing of mice at the breeding facility to unpacking of mice in the quarantine facility was 22.62 ± 0.07 h. Transport to the CMMC research facility is not direct and includes the mandatory overnight rest for the driver and at least one other stop prior to delivery at the CMMC, which lengthens the transport duration. In the crates, an average temperature of 20 • C and 40 to 50% relative humidity were measured during transport. All mice were in good physical condition upon arrival and throughout the husbandry period. 
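The analyses described in the Statistical Analysis subsection above were run in SAS 9.4. Purely as an illustration of the same workflow, the sketch below uses synthetic stand-in data (none of the study's raw values are reproduced here) to fit a two-way ANOVA for day and genetic background, run a post hoc pairwise t-test, and compare embryo counts with a Kruskal-Wallis test in Python's statsmodels and SciPy.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Synthetic FCM values: one value per group of five mice, 6 groups x 5 backgrounds x 3 days.
backgrounds = ["B6J", "B6N", "FVB/N", "CD-1", "BALB/cN"]
days = [0, 1, 4]
rows = []
for bg in backgrounds:
    for day in days:
        for group in range(6):
            fcm = 100 + 20 * (day == 1) * (bg == "B6N") + rng.normal(0, 10)
            rows.append({"background": bg, "day": day, "group": group, "fcm": fcm})
df = pd.DataFrame(rows)

# Two-way ANOVA for the effects of day and genetic background on FCM levels.
model = ols("fcm ~ C(day) * C(background)", data=df).fit()
print(anova_lm(model, typ=2))

# Post hoc pairwise t-test, e.g. Day 0 vs Day 1 within B6N.
b6n = df[df["background"] == "B6N"]
t, p = stats.ttest_ind(b6n.loc[b6n["day"] == 0, "fcm"], b6n.loc[b6n["day"] == 1, "fcm"])
print(f"B6N Day 0 vs Day 1: t = {t:.2f}, p = {p:.4f}")

# Kruskal-Wallis comparison of embryo numbers per group across backgrounds
# (synthetic counts; the observed counts are in Table 3 of the paper).
embryos = {bg: rng.poisson(lam=100, size=6) for bg in backgrounds}
h, p = stats.kruskal(*embryos.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")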
Feces Excretion
The amount of feces (dry weight in mg) excreted per group within a maximum of 1 h of collection according to genetic background is shown in Table 2. At least 20 fecal pellets were collected within this period from each group of mice, which were generally sufficient for FCM determination. There were only three samples where less than 50 mg of ground fecal sample was available (35, 40, 40 mg), but we adjusted the amount of methanol used for extraction accordingly.
Embryo Production
The prepuberal mice were superovulated and two-cell embryos were collected to determine whether additional handling due to fecal collection had an effect on embryo production. Also, the efficacy in terms of the number of embryos produced was compared to that normally obtained in our laboratory [11]. The number of two-cell embryos collected is shown in Table 3. There were no significant differences between genetic backgrounds with respect to the number of embryos collected (p < 0.2749).
Discussion
This is the first study to evaluate the effect of ground transportation on adrenocortical activity in prepuberal female mice from five different genetic backgrounds that are commonly employed in biomedical research. The present results show that transportation of prepuberal mice from the breeding to the research facility led to increased FCM levels on Day 1 in four out of five genetic backgrounds, and baseline levels were attained within four days. However, only B6N showed significantly higher FCM levels on Day 1. On any of the three days investigated, FCM levels differed according to genetic background. Furthermore, embryo production was not affected by handling due to repeated fecal collection. There are indeed sparse reports pertaining to the effects of transportation on mice [1]. The transportation mode can be in-house, domestic, or international, with the latter often being longer. As such, the type and duration of transport may affect corticosterone metabolite levels, which may, in turn, influence the time required to return to baseline levels. Drozdowicz et al. [7] reported that a 12-min in-house transfer of 8-12-week-old male BALB/cAnNCrl(BR) mice led to increased plasma corticosterone over the following 24 h. The report by Tuli et al. [12] showed that a 12-min in-house transport (10-min walk and 2 min in an elevator) of inbred 4.5-5-month-old male BALB/c/Ola mice led to a significant elevation in serum corticosterone levels immediately after transportation, but baseline levels were attained within one day. However, they reported that behavioral activity was still altered after four days. Serum corticosterone concentrations were elevated in 4-5-week-old C57BL/6 and ICR mice after 3-4 h of transportation by truck [13]. Plasma corticosterone levels increased and immune function decreased up to at least 48 h after 36-48 h of transport by truck or a 24-36 h transport by plane in eight-week-old Crl:COBS,CD-1(ICR)BR mice, as reported by Landi et al. [5].
Aguila reported that transport of six-week-old female C57BL/6J mice by air-conditioned trucks (36 to 42 h) or by air (18 to 20 h) led to decreases in natural killer cell activity and increases in corticosterone levels, but mice were acclimatized after 24 h [6]. The abovementioned studies measured corticosterone levels using radioimmunoassay. Sex was reported to influence corticosterone metabolism and FCM levels since higher levels were measured in females compared with their male counterparts [2,3,14]. As such, it is expected that FCM levels in prepuberal male mice from the genetic backgrounds used in this study will also be lower than those observed for the female mice used. Furthermore, the genetic background and the age of mice also have a significant effect on the level of stress hormone metabolites, as demonstrated for female mice from an inbred (C57BL/6NCrl) and an outbred stock (Crl:CD1), which were born and bred in-house up to the age of 26 months [15]. The latter researchers showed that corticosterone levels increased with age in Crl:CD1 mice, whereas levels were relatively constant in C57BL/6NCrl mice, which may reflect differences in adrenocortical activity or different corticosterone metabolism [15]. Higher basal serum corticosterone levels were observed in non-transported C57BL/6 compared to ICR mice [13]. The latter observation is in contrast to our results where FCM levels for non-transported prepuberal B6N mice were lower than those for the other four genetic backgrounds. In transported mice, higher increases in serum corticosterone levels were observed for ICR than for C57BL/6 mice compared to their non-transported counterparts [13]. However, this result was not confirmed in our study. Unexpectedly, in the present study, the FCM levels in FVB/N mice decreased on Day 1, in contrast to observations for the other four genetic backgrounds investigated, but differences in FVB/N mice were not significant. The reason for this decrease is unknown, but this may reflect a higher resilience of the FVB/N strain to stress. The present study was performed with prepuberal wild-type mice. Since an increasing number of mice used for biomedical research are genetically engineered and are of different ages, responses to transportation may deviate from those observed in the present study. During transportation, it is necessary to protect animals from inclement weather and adverse changes in climatic conditions. Mice are kept at 20 to 24 • C in the breeding and research facilities. Therefore, the temperature during transportation should ideally not deviate significantly from these at least for longer periods. In the present study, the temperature in the delivery truck enabled an average of 20 • C in the transportation crate. The time for ground transitions from the breeding facility to the truck and from the truck or holding site after arrival at the research facility, that is, for loading and unloading, should be kept to a minimum since these phases are at greatest risks for temperature deviations. Temperature variations were observed during air shipments of mice [16]. In the present study, the coldest temperature (10.7 to 12.7 • C) experienced from the breeding facility to the truck was for 10 min with two shipments. Between the heating cabinet and the quarantine facility, the temperature was 20 to 22.3 • C for 10 min. 
Therefore, all temperatures were within the range of 5 to 34 • C for safe transportation [17] and, moreover, in conformity with standard husbandry conditions after loading until unpacking. Group size did not significantly affect FCM levels [14]. Notably, in the present study, collection of fecal samples from group-housed instead of singly kept mice mimicked routine procedures, did not elicit social stress due to single housing, and allowed determination of FCMs. Therefore, collection of fecal samples, in contrast to blood, is non-invasive and contributes to improved animal welfare [4,18]. Furthermore, this task can be performed easily without any special training. In the present study, embryo production by the prepuberal female mice was in conformity with previous reports from routine work in our laboratory [11] for which mice are also bought from the same supplier. This implies that feces collection after transfer of mice to clean cages for 1 h on the days reported in this study did not affect embryo production. Whereas mice were kept in open-top cages at the commercial breeder, they were kept in IVCs in the CMMC quarantine. Previous reports showed that exploratory behavior was not significantly different when group-housed C57BL/6JRj male mice (3-4 mice per cage) were kept in IVCs after being housed for 15 days in conventional caging [19]. Furthermore, FCM concentrations were not significantly different in mice housed in open cages or IVCs [20]. Whether a change in caging from open-top at the Charles River Laboratories to IVC at the CMMC quarantine affected the FCM levels in the present study is unknown, but it should be noted that the defecation rate was comparable for each genetic background, as shown in Table 2. Previous reports showed that diets with a high fat content may lead to reduced defecation rate, whereby the FCM concentrations might get inflated [21]. In the present study, there was an immediate change of diet upon arrival at the CMMC quarantine. However, both diets showed similar nutrient composition, and defecation rate was not altered. Thus, we do not expect that FCM concentrations were affected by this diet change. FCM concentrations show a circadian rhythm [2]. Mouse husbandry involves the light and the dark phase, whereby mice are more active in the nocturnal phase. Previous reports showed that FCM concentrations peak 10 h and 4 h after the stress stimuli in the light or the dark phase, respectively [3]. In the present study, it is difficult to pinpoint just one specific time point of stress arousal since the stress stimuli span the period from the time of packing to the time of unpacking the mice. Even though locomotor activity may be influenced by eating behavior, in the present study, it did not influence the defecation rate at the time point of feces collection and, ultimately, the FCM concentration. Notably, we collected feces at the same time point on each day of handling to avoid confounding the results [4]. As a general guide, at least seven days of acclimatization is recommended following transport between sites and at least three days between buildings on the same site [22]. However, depending on the severity and duration of the stress experienced and the parameter to be investigated, the period of acclimatization may take several weeks to normalize, as observed for blood pressure (three weeks) [23], as well as corticosterone and immune parameters (three weeks) [24] in BALB/c mice. 
When transporting mice prior to experimental procedures, researchers should evaluate the effect on the parameters to be investigated to determine an adequate acclimatization period. Since changes in corticosterone levels often indicate pain and distress, independent of the specific parameters in an investigation, altered FCM levels will give an indication as to whether mice are stressed. Transportation involves a myriad of steps, as shown in Table 1. Also, weaning itself may be a stressful event. Therefore, the corticosterone levels in the mice investigated in the present study may include these confounding factors. Deciphering the contribution of each of these steps would have meant having controls at each of these steps, thereby increasing the number of mice used. Furthermore, the number of littermates would have been a limiting factor. Therefore, we used baseline values from the same mice to circumvent this challenge, thereby contributing to animal welfare. Under routine experimental conditions, mice may be re-grouped after arrival at the research facility and/or prior to experimentation, which may lead to higher levels of FCMs than those reported in our study. Collecting group samples may also have masked significant effects, because individual differences in baseline and stress-induced FCM levels are present [2][3][4]. However, separating mice for fecal collection, as opposed to keeping them in stable social groups, as practiced in our investigation, may also be stressful. In the present study, notably, while using the same groups of mice for our routine procedures, we were also able to obtain information about the fecal corticosterone metabolite levels of the prepuberal mice that were transported. Further work could include males, mice of different ages, FCM levels in mice that are weaned in the commercial breeding facility but which are not transported, and behavioral welfare indicators, and it could also consider the influence of different periods of the year. Conclusions We showed in the present study that FCM levels increased in four out of five genetic backgrounds, but baseline levels were reached within four days. Notably, only B6N showed significantly higher FCM levels on Day 1. FCM levels also varied according to genetic background, and embryo production was not affected by handling due to repeated fecal collection. Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
2019-05-18T13:03:49.767Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "30132e72e989ce42f56804135e3dade4ce0e551d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/9/5/239/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30132e72e989ce42f56804135e3dade4ce0e551d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225076402
pes2o/s2orc
v3-fos-license
Primary Generalized Glucocorticoid Hypersensitivity Treated with Mifepristone: A Case Report Abstract Here, we report a case of a patient with symptoms of Cushing syndrome, who is diagnosed with primary generalized glucocorticoid hypersensitivity in the end. The patient’s relevant laboratory tests and imaging examinations are described. Mifepristone, a glucocorticoid receptor antagonist, was prescribed and its therapeutic effect on the patient’s electrolyte level, lipid metabolism, and bone metabolism was observed during the treatment. The endocrine assessment indicated normal pituitary-adrenal axis regulation function but reduced cortisol secretion. Quantitative reverse transcription-polymerase chain reaction indicated reduced mRNA level of mineralocorticoid receptor gene. Pituitary magnetic resonance imaging showed normal pituitary anatomy, while adrenal computed tomography scan showed bilateral adrenal atrophy and increased content of visceral and abdominal subcutaneous fat. Moreover, chromosome examination revealed a normal 46, XY chromosome. In this case, mifepristone was administered to treat primary generalized glucocorticoid hypersensitivity. To the best of our knowledge, there are a few reports on mifepristone-treated primary generalized glucocorticoid hypersensitivity. In the one-year follow-up visits, the evaluated results of electrolyte level, lipid metabolism, and bone metabolism indicated that the patient’s symptoms resulting from cortisol hypersensitivity were relieved progressively. Introduction To date, primary generalized glucocorticoid hypersensitivity (PGGH) has been reported in a few cases. In all these cases, the patients harbored cushingoid manifestations but with hypo-or normocortisolemia. However, the mechanisms leading to this phenomenon were remained to be elucidated. A notable review recently demonstrated that glucocorticoid receptor (GR) gene mutations as well as N363S and Bcll polymorphisms played a pivotal role in this setting. 1 However, only one case has identified a GR gene mutation. 2 Other cases postulated that the root causes may be found at transcription levels. 3,4 In terms of the approaches to treat PGGH, GR antagonists, specifically mifepristone, were assumed to be the optimum treatment for impeding GR activity and relieving cushingoid symptoms. [5][6][7] Here, we reported a patient with PGGH treated with mifepristone. We delineated the complete diagnosis and treatment process as well as the therapeutic effect on the patient's electrolyte level, lipid metabolism, and bone metabolism. Case Presentation A 27-year-old male was admitted to our hospital with edema of lower extremities and fatigue. Before admission, he had received the hydrochlorothiazide treatment for one week (25mg, qd) at a local hospital. After hydrochlorothiazide administration, he still felt weak, although the swelling was slightly relieved. Consequently, he discontinued the treatment because of its poor therapeutic effect. The patient acknowledged that he had never taken any exogenous hormone. The patient's blood pressure and body mass index were 127/88 mmHg and 35kg/m 2 , respectively. Moreover, his weight had increased by about 25kg during the previous 6 years. The patient's secondary sex characteristics appeared when he was 15 years old, and although, he had normal external genital organs, but he had never experienced erection and ejaculation. Physical examinations revealed that he presented centripetal obesity, moon face, buffalo hump, and purple striae ( Figure 1A). 
Mild swelling was found in his lower extremities. Family History The patient's father had been abandoned as a child. He received inguinal herniorrhaphy at the ages of 10 and 53. He suffered from hypertension at age 45 and took nifedipine sustained release tablets (20 mg/d) and indapamide (1.5 mg/d) to control his blood pressure. His body mass index was 21.5 kg/m 2 and he had a few purple striae on the abdomen. His ACTH was also less than 0.22 pmol/L and 24h urinary-free cortisol was 23.62 nmol. The patient's mother was healthy and had normal serum ACTH and cortisol levels. Insulin Tolerance Test The patient and his father were recommended the insulin tolerance test by intravenous injection of insulin (0.15 U/ kg). Serum samples were taken at 0, 15, 30, 60, 90, and 120 minutes. During the test, the patient experienced dizziness, sweating, and weakness 3min after the insulin injection and the blood glucose measured by glucose meter was 1.6 mmol/L. Subsequently, he was intravenously administered 50% glucose (60mL). After 15 min, the blood glucose increased to 12.2 mmol/L. Furthermore, his cortisol level was increased by 49% during the test. The plasma cortisol of the patient's father increased by 105.9% during the test ( Table 2). Renin-Angiotensin-Aldosterone System Function The function of renin-angiotensin-aldosterone system was also assessed. The blood samples were drawn by venipuncture after 30 minutes of rest in a supine position in the morning, and 2 hours after staying in an upright position with 20mg furosemide intramuscular injection. The aldosterone-to-renin ratio was calculated. The patient's plasma renin activity was above normal and serum aldosterone was close to the lower limit ( Table 1). The patient appeared weak and his potassium decreased from 3.5 to 2.9 mmol/L after the injection of furosemide. Quantitative Reverse Transcription-Polymerase Chain Reaction (RT-PCR) RT-PCR was performed to evaluate the mRNA levels of GRα, GRβ, mineralocorticoid receptor (MR), 11β-1-hydroxysteroid dehydrogenase (11β-1-HSD), 11β-2-HSD, hexose-6-phosphate dehydrogenase (H6PD), and 3β-2-HSD in the peripheral blood lymphocytes of the patient and two normal, age-matched men at three different time points. The mRNA levels of MR were found to be lower in our patient. However, the difference of the mRNA levels of other indices between the patient and the normal, age-matched males was not significant (Figure 3). Mifepristone Treatment The patient was treated with mifepristone 200mg/d for one month, and then 400mg/d for the next eleven months, during which time the patient was instructed to maintain a similar level of physical activity and receive the same calorie intake (25 kcal/kg/day). After one year of treatment with mifepristone (Table 3, Figure 1B), the patient's body weight decreased from 95 kg to 93 kg. His total body fat decreased from 48.7% to 39.9%. The body mineral density at the levels of lumbar 1-lumbar 4 was increased from 0.685 to 0.702. During the treatment, the patient's potassium level was always maintained within the normal range. The ACTH and cortisol level remained lower than normal. Discussion In this case, the diagnosis of Cushing's syndrome was suspected initially when the patient was admitted to our hospital owing to his typical cushingoid symptoms. However, the endocrine profile abolished the possibility DovePress of Cushing syndrome due to his low to normal cortisol and ACTH levels. In addition, his adrenal computed tomography scan showed bilateral adrenal atrophy. 
Therefore, the diagnosis of adrenal cortical insufficiency was assumed. After detailed consultation, we were informed that the patient had never suffered hypotension, hyponatremia, and hypoglycemia which are symptoms precipitated in adrenal cortical insufficiency. Moreover, the patient had increased content of body fat and osteoporosis. We evaluated the patient's adrenal function. Because ACTH-(1-24) was not available in our hospital at the time, the patient and his father underwent insulin tolerance test to assess their pituitary and adrenal function instead. The DovePress International Journal of General Medicine 2020:13 patient's plasma cortisol level elevated by 49% during the test. His father's cortisol level also increased significantly after the insulin injection, which was followed by hypoglycemia. This seems to indicate that insulin-induced hypoglycemia might restore the secretion of pituitary ACTH and adrenal cortisol. One can postulate that the regulating function of pituitaryadrenal axis was normal in our patient despite his low to normal ACTH and cortisol levels. All of these results negated diagnosis of adrenal cortical insufficiency unlikely. Detailed interrogation revealed no history of exogenous glucocorticoids administration. The patient was mentally healthy and cooperative during hospitalization. Therefore, the possibility of exogenous Cushing's syndrome was ruled out. Due to the combination of subnormal cortisol, normal pituitary-adrenal axis function, and typical Cushing symptoms, the possibility of PGGH was considered. The same holds true for the near lower-limit aldosterone level and significantly low aldosterone to renin ratio accompanied by normal adrenal function, which seems to reflect the aldosterone hypersensitivity. However, further investigation is warranted to prove this speculation. It is well established that patients with PGGH harbored more GR than normal, age-matched individuals. 3,8 Furthermore, these patients tended to have abnormal GR-binding capacity; however, they also exhibit normal nuclear translocation, thermal stability, and heat activation. 9 It is also worth noting that GR pertaining to 829 the steroid hormone receptor family of nuclear receptor superfamily of transcription factors, consists of two isoforms, GRα and GRβ. 1 Furthermore, it is recognized that 11β-1-HSD and 11β-2-HSD plays an important role in the inter-conversion of cortisol and cortisone. 10 Moreover, the role of H6PDH in adjusting the activity of 11β-1-HSD oxidoreductase activity has also been reported. 11 Therefore, assessing the levels of GRα, GRβ, 11β-1-HSD, 11β-2-HSD, and H6PD would indirectly indicate the glucocorticoid action pathway. On the other hand, MR is featured in response to steroid binding, 12 especially of mineralocorticoids. 3β-2-HSD gene plays a pivotal role in encoding the 3β-2-HSD, which was the key enzyme for progesterone synthesis. Mineralocorticoids are the downstream products of progesterone. Therefore, MR and 3β-2-HSD can provide us with useful information on the mineralocorticoid action pathway. Thus, we conducted the quantitative RT-PCR in the peripheral blood lymphocytes of the patients and two normal, age-matched males to evaluate the mRNA levels of GRα, GRβ, MR, 11β-1-HSD, 11β-2-HSD, H6PD, and 3β-2-HSD. The results disclosed that the mRNA level of MR in our patient was found to be lower than the normal, age-matched males. 
On the contrary, the levels of GRα, GRβ, 11β-1-HSD, 11β-2-HSD, H6PD, and 3β-2-HSD were similar to those of the normal, age-matched males. The normal levels of 11β-1-HSD, 11β-2-HSD, and H6PD were thus substantiated by quantitative RT-PCR, and one can speculate that altered activity of 11β-1-HSD and 11β-2-HSD is not involved in the glucocorticoid hyperreaction. Moreover, the normal GR levels on RT-PCR indicate that our patient did not have the increased GR expression that usually occurs in PGGH. Previous studies have introduced the groundbreaking concept that patients with PGGH may harbor GR mutations. 1,2,13,14 We performed first-generation sequencing, but no positive finding was obtained. Therefore, the mechanisms underlying the glucocorticoid hypersensitivity in our patient remain to be elucidated. Regarding treatment, ketoconazole and cabergoline have previously been used to treat a patient with PGGH; however, the therapeutic effect was not satisfactory. 5 After treatment with these medications, that patient's body weight decreased, but the Cushing symptoms persisted. A recent review recommended that PGGH management should focus on addressing the relevant manifestations, such as dyslipidemia, type 2 diabetes, and hypertension. 15 Considering that our patient's ACTH and cortisol levels were lower than normal, ACTH suppressors and adrenal inhibitors were unlikely treatment options. Mifepristone, a GR antagonist, therefore seemed a suitable and preferred therapy for this patient; Newfield et al reported relief of PGGH manifestations in a patient treated with mifepristone. 3 Thus, mifepristone was recommended as the therapeutic regimen for our patient. Because of the possible adverse effects of mifepristone treatment, including hypokalemia and unstable blood pressure and blood glucose, 16 200 mg/d was prescribed initially, and the patient's blood pressure, blood glucose, plasma ACTH, plasma cortisol, and plasma potassium were monitored closely. During the first month, neither elevated ACTH and cortisol levels nor a reduced serum potassium level occurred, and the patient had no nausea. The dosage of mifepristone was then increased to 400 mg/d. During the next 11 months, the patient's ACTH and cortisol levels were not significantly altered, although 17-ketosteroid and 17-hydroxycorticosteroid were increased, and his blood pressure and blood glucose did not appear to be affected. After one year of treatment, the patient's asthenia and edema were relieved and his body fat decreased markedly, from 48.7% to 39.9%, whereas his body weight decreased by only 2 kg, implying that the treatment substantially improved fat metabolism. In addition, the patient's lumbar bone mineral density did not decrease but instead increased slightly after mifepristone treatment, further suggesting that mifepristone may help to impede osteoporosis.

Conclusion
In conclusion, we have reported a patient with PGGH. The relevant examinations did not provide a reliable clue to the etiology of PGGH, which remains to be investigated. Mifepristone was administered to the patient with satisfactory effects in terms of improved electrolyte levels, lipid metabolism, and bone metabolism, providing useful information for the future treatment of PGGH patients.

Data Sharing Statement
The data used to support this study are available on reasonable request from the corresponding author, Yunfeng Liu.
2020-10-28T05:07:50.750Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "e084dd8043c5eac73895b7b423609666af8f92e6", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=62490", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e084dd8043c5eac73895b7b423609666af8f92e6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216220083
pes2o/s2orc
v3-fos-license
THE DEVELOPMENT OF POLAND'S AIR DEFENSE SYSTEM

This article discusses the operational context for the development of Poland's air defense system. This assessment focuses on air defense operations in high intensity conflict. Recommendations include setting a realistic level of ambition in the field of air defense and increasing operational capabilities through the modernization of its combat assets. The priority proposed for Poland's air defense system is to introduce a new generation of short range surface to air missile systems and then develop medium range air and missile defense capabilities.

Introduction
This article attempts to provide a brief assessment of the operational context of Poland's air defense system development. Due to the unclassified nature of the article, the discussion is limited to public documents, media and academic work. The discussion of doctrinal fundamentals serves as an introduction to further considerations related to air defense and the air defense system. Then, the article offers an assessment of the current capabilities of Poland's air defense system and confronts them with prospective air threats to propose improvements. The assessments and opinions contained in this article reflect the author's personal views only. Due to the nature and limited scope of the article, the discussion is limited to the problems of air defense operations during a high intensity conflict.

Doctrinal fundamentals of air defense
The study of the operational aspects related to the development of Poland's air defense system needs direct references to the relatively universal doctrinal assumptions and decisions adopted in relation to this system in long-term strategic concepts. Air defense, which aims at the protection of friendly forces from enemy air and missile attacks, is seen in military doctrine through the prism of active and passive defense. Active air defense activities include the use of airborne and surface based air defense assets to destroy missile and air threats or reduce the effectiveness of their employment. After some simplification, it can be assumed that active air defense is focused on direct kinetic and non-kinetic actions against air and missile threats. On the other hand, passive air defense includes all the measures that reduce the effectiveness of an enemy air attack by increasing the survivability of defended assets through early warning, camouflage, concealment, deception, hardening, dispersion and reconstitution (NSA, 2010). An assessment of any air defense system requires a detailed examination of its capability to perform essential functions. That is especially relevant for those functions enabling active air defense. Integrated detection, identification, assessment, interception and engagement of air and missile threats is essential to active air defense operations. The implementation of those functions requires an air defense system to have specialized components that are usually referred to as subsystems. The essential components of active air defense are: airborne and surface based combat assets, surveillance assets, and command and control elements. NATO doctrine identifies weapon systems and the surface environment, which is subsequently divided into control and reporting agencies together with sensors, communications systems and data processing facilities, as well as contributing systems supporting air defense operations (NSA, 2010).
In general, relatively universal rules for the establishment, operations and development of air defense systems focus on the requirements related to their optimization in relation to the essential air threats, as well as the complementarities of the means used to eliminate the limitations of individual weapon systems. In an assessment of Poland's air defense system, reference to its interrelationships within the NATO Integrated Air and Missile Defense System is necessary. This applies above all to the degree of the "independence" and "self-sufficiency" of the national air defense system, but, as a consequence, also to the scope of international cooperation in the field of air and missile defense and the scope of operational capabilities, which will be developed only in the alliance dimension (JAPCC, 2017). From the point of view of Poland's development of the national air defense system, it seems important to refer to two basic options for the employment of air defense assets. The allied doctrine and a number of references in related scientific work point at an option of minimizing the damage sustained by own forces and facilities and an option of inflicting maximum attrition on the enemy (NSO, 2016). Depending on national priorities, the air defense system may be developed to achieve one of these aims or a compromise of them. In the context of military threats from the Russian Federation, both of these options should be considered in the light of the cultural approach of the potential adversary to their own losses (Cieślak, 2018). Taking into account the above theoretical assumptions and doctrine, an attempt can be made to assess the current state of Poland's air defense system. Assessment of the current state of Poland's air defense system The current state of Poland's air defense system has been, similarly to the situation in many member states of the North Atlantic Alliance, the result of previous organizational solutions as well as trade-offs between the desired operational capabilities and the technological and financial capabilities of the state. From the perspective of two decades of Poland's membership in NATO, one can notice the lack of a long-term, realistic concept for the development of the national air defense system that would be consistently implemented (Cieślak, 2018). Due to the specific modernization needs of respective services of the armed forces and the involvement of the Polish Armed Forces in out of area combat and stability operations the development of individual components of the Polish air defense system was not fully harmonized. Some of the components of the national air defense system were modernized by purchasing new weapon systems, or modernizing legacy weapon systems. Some of the modernization decisions after 2000 were postponed because of political, economic or operational reasons. The coherence of efforts related to the development of the national air defense system were negatively affected by changes in the priorities and programs of technical modernization of the armed forces. Changes in priorities were also the result of subsequent strategic defense reviews, including the review completed in 2017, main conclusions of which were included in the "Defense Concept of the Republic of Poland". The lack of coherence and continuity of efforts aimed at the development of the national air defense system has resulted in generation and capability gaps between the components of the air defense system and within the components. 
The generation gap in airborne air defense assets means that MiG-29 fighters more than thirty years old operate alongside advanced F-16 aircraft. The ground based air defense forces, both in the Air Force and the Polish Army, are equipped with surface to air missile systems which are by and large obsolete and will need retirement in the coming years. These surface to air missile systems are capable of engaging only a single aerial target at a time, and their origins date back to the 1970s. The exceptions are the relatively modern man portable air defense system Grom and its mobile version, Poprad (Dobija, 2019); however, both are very short-range, infrared guided air defense systems. The legacy of Warsaw Pact doctrine and organizational solutions still results in the Polish Air Force and the Polish Army operating surface to air missile systems with similar tactical capabilities. Poland's air defense system has been optimized for combating aircraft and, to a lesser extent, helicopters. One must also note the relatively good mobility of the ground based air defense systems. The air defense surveillance and target acquisition assets of the Polish air defense system are predominantly radars, some of which are mobile. Official assessments of Poland's armed forces suggest that the air defense system lacks sufficient electronic warfare capability as well as camouflage and deception capability (Rosłan, 2018). This may adversely affect both active air defense operations and passive air defense measures.

Priorities for air defense system's development
Delays in implementing the technical modernization programs for the Polish Armed Forces over the last two decades have led to a cumulative need for new weapon systems across all services of the armed forces. Realistically assessing Poland's economic potential, it should be noted that even with the current priorities for technical modernization of the armed forces until 2035, it will be impossible to acquire all planned weapon systems and achieve the declared operational capabilities (MON, 2019b). In the perspective of the next decade, and probably also in the longer term, it will be necessary to make choices regarding the types of capability acquired and the capacity that will be achieved. Also in the case of Poland's air defense system, it will be necessary to answer the question about the types of operational capabilities and the capacity for those capabilities. When considering the operational aspects of developing a national air defense system, one must be aware of the need to compromise between the level of ambition in air defense and the technological and financial capabilities of the state. From the perspective of the last few years, Poland's ambitions in the field of missile defense are a good example. The cost of purchasing two PAC-3 MSE Patriot batteries in March 2018 was USD 4.74 billion (MON, 2019c). The missile defense capacity provided by the two batteries will probably not be sufficient to meet the needs. Acquisition of another six Patriot system batteries will surely expand capacity (MON, 2019b), but at the same time it will limit the availability of financing for the remaining components of the national air defense system. Despite the urgent needs related to the modernization of ground based air defenses optimized to engage only air threats, which might be much cheaper than missile defense systems, medium-range missile systems have become the political priority of purchases (MON, 2019c).
Assessing the changes in the security environment that have taken place in recent years in Poland's environment it can be predicted that under the conditions of the Article V collective defense operations, the national air defense system will have to combat a significant number of technologically advanced air and missile threats. Tactical ballistic missiles will be the most serious threat to facilities, such as airports and naval bases, command posts at the strategic and operational level as well as communication nodes (Fabian et al, 2019). Given the significance of that infrastructure for deployment of NATO forces to the alliance's eastern flank, it is necessary to look for effective ways for missile defense. One may consider, at least during the transition period, the implementation of this specific task by allied forces. It is also worth considering ways to degrade a threat of enemy's tactical ballistic missiles through offensive operations, including the use of unmanned aerial combat systems. In a potential armed conflict, the main effort of the national air defense system should focus on protecting military forces such as tactical combat teams. It can be expected that the air threat to tactical combat teams will result from enemy tactical air forces, both aircraft and assault helicopters using non-guided and guided weapons. As operations in the Syrian conflict suggest, Russian airpower has been increasing the percentage of air strikes from medium altitudes and use of guided air munitions (Lavrov, 2018). Therefore, it may be concluded that in a high intensity conflict scenario, at least some part of the air strikes may be conducted from outside the effective range of very short range air defenses such as Grom or Poprad. This should be taken into account in the scenarios of future air defense operations. As a consequence, the choice of weapons will be crucial for the national air defense system to engage enemy combat aircraft performing strikes from medium altitudes. Fighter aviation alone may not be sufficient for this mission, especially taking into account a threat from tactical ballistic missiles such as SS-26 Iskander to the fighter air bases in Poland (Dobija, 2019). Poland's plans to acquire F-35 fighters will probably not change that calculus in a significant way (MON, 2019a). Future high intensity conflict at the eastern flank of NATO may see a widespread employment of enemy unmanned aerial vehicles primarily performing reconnaissance tasks in support of their own rocket and long range artillery forces. Such use of unmanned aerial vehicles significantly increases the precision of rocket and artillery fire, which was reflected, among others, in the conflict in eastern Ukraine. It can be assumed that the threat to one's own troops from the tandem of unmanned aerial vehicles, rocket and artillery troops may be greater than from the SS-26 Iskander tactical ballistic missiles. While the latter will be effective against infrastructure targets, reconnaissance data provided in almost real time by unmanned aerial vehicles may allow enemy rocket and artillery troops to attack tactical troops in an accurate and flexible way. This requires a more detailed examination of the operational aspects of countering threat of unmanned aerial systems by Poland's air defense system. For smaller and cheaper unmanned aerial vehicles, the challenge for Poland's air defense system will be to select a weapon system which is not only operationally but also economically justified. 
The above dilemma became apparent in all its sharpness in the 2006 Lebanon War, when the Israeli armed forces were forced to defend against Hezbollah's massive barrages of unguided rockets. The answer, after a few years, was the Iron Dome system, in which the cost of the interceptor missiles was reduced to an acceptable level while maintaining combat effectiveness (Lambeth, 2012). It is difficult to judge which final solutions the national air defense system will adopt for combating unmanned aerial vehicles over the next decade or later. The scale and unorthodoxy of ongoing experiments with the use of unmanned aerial vehicles, as well as with countering this threat, does not allow potential solutions to be determined with a sufficient degree of certainty. Lasers, electronic interference, kamikaze drones, anti-aircraft artillery with programmable rounds, or the use of unguided rockets: the search areas are varied and it is difficult to determine which is most promising. Given the relatively small costs of developing the technologies needed to address the threat of unmanned systems, it is worth, following the example of the armed forces of other countries, experimenting, gathering experience and developing specific technological solutions on a national scale. The assessment of the military threat posed by the Russian Federation requires doctrinal assumptions to be made regarding the philosophy of employment of Poland's air defense system. An attempt should be made to answer honestly the question of what is expected from the national air defense system in the event of potential aggression. In the author's opinion, it seems unrealistic to ensure full security for defended assets, for technological and financial reasons. On the other hand, it seems advisable to consider options for increasing the capability of Poland's air defense system to inflict maximum attrition on an air opponent. Russian airpower employment in the Syrian conflict showed massive use of unguided air munitions, which requires attacking aircraft or helicopters to penetrate air defense engagement zones, thus making them vulnerable to active air defense operations. If the priority for the employment of the national air defense system were inflicting maximum attrition on the air opponent, the capability to engage aerial threats should be increased not only at low but also at medium altitudes, i.e. above three thousand meters. Such an approach requires the introduction of new anti-aircraft missile systems to replace the currently operated SA-3 Newa SC and SA-6 Kub systems. It will also be necessary to take measures to increase the survivability of ground based air defense forces through the development of radio-electronic defense and masking systems (Rosłan, 2018). The development of ground based air defense forces seems justified when one takes into account the lessons learned during recent conflicts in which parties with very different air potential clashed. One telling example is the performance of Serbia's air defense system during Operation Allied Force. The use of NATO airpower against Serbia in 1999 indicated that, in air defense operations against an enemy with a technological and quantitative advantage, the most vital element of the air defense system was mobile surface to air missile systems.
Air defense fighters were relatively quickly blocked at their airbases, and fixed elements of the surveillance and command and control system were successfully attacked. The ability of the Serbian surface to air missile forces to pose a threat to NATO airpower throughout Operation Allied Force significantly limited the freedom of use of the alliance's airpower and forced unfavorable apportionment decisions. One should also note the low effectiveness of NATO airpower attacks on Serbian forces in Kosovo, which was caused, among other factors, by the presence of a credible surface to air missile threat. It may be concluded that, from an operational point of view, it would be desirable for Poland to operate mobile surface to air missile systems capable of operating in a decentralized and autonomous manner. Lessons learned from the armed conflicts of others cannot predetermine specific solutions for the development of Poland's air defense system. In the case of an extremely unfavorable turn in the international situation, Poland may become a victim of aggression not only by air but also by land. For this reason, the duration of defensive operations and the ability to hold key areas and facilities within the country may determine the options for allied assistance. Therefore, the national air defense system's ability to inflict maximum attrition on an air opponent may be a factor in military deterrence that strengthens alliance guarantees (Rosłan, 2018). Taking into account the possible operational scenarios of a potential high intensity conflict, it can be hypothetically assumed that the priority of air defense should be the protection of troops and facilities crucial for the Polish Armed Forces, and then for allied forces' defensive operations. Considering the above assumptions, an attempt can be made to articulate recommendations regarding the composition and size of the individual components of the national air defense system that are desirable from the point of view of operational requirements. The last decade has not seen any significant improvement in the state of Poland's air defense system. The situation is particularly acute in terms of ground based air defenses, in particular short and medium range anti-aircraft missile systems (Cieślak et al, 2011). Due to the fundamental changes in the security environment that took place after 2014, the priority in the development of the national air defense system should be urgent measures leading to the modernization of, above all, ground based air defenses. Currently, the only modern class of anti-aircraft weapon system in Poland's air defense system is the class of very short-range anti-aircraft missile systems (VSHORAD), which are effective against air threats at distances of about five kilometers and altitudes of just over three kilometers. Poland's air defense system lacks advanced short-range surface to air missile systems capable of engaging air threats at distances of around thirty kilometers and altitudes of up to several kilometers, as well as medium-range systems. Legacy surface to air missile systems such as the SA-3 Newa SC, SA-6 Kub and SA-8 Osa do not guarantee effective engagement of air threats, not only because of their archaic technological solutions but also because the potential enemy has extensive knowledge of their weaknesses and limitations (Dobija, 2019). In the long term, the operationally desirable solution for Poland's air defense system would be to have very short, short and medium range surface to air missile systems.
However, in the short term, mainly for economic reasons, it will be necessary to make the inevitable choice of which capability to introduce or modernize first. In the next few years, it will be necessary to start replacing the currently used legacy radar guided surface to air missile systems with a new generation of air defense systems capable of autonomous or distributed operations against advanced air threats. Despite the obvious needs in this regard, which have been articulated by the military for over a decade, there has been no satisfactory action at the political level that would translate into tangible results soon. The doctrinal patterns of potential opponent forces, including the role of airpower in a hypothetical armed aggression against Poland, make it rational to acquire short range surface to air missile systems. The technology offered by the short-range surface to air missile systems available on the market seems adequate in relation to the nature of prospective air threats. At the same time, the lower cost of short-range surface to air missile systems may allow the purchase of more fire modules than would be possible with medium range systems. Short-range surface to air missile systems may offer a rapid and significant increase in the operational capacity of Poland's air defense system to inflict maximum attrition on a potential aggressor's airpower. Giving priority to the acquisition of medium range surface to air missiles under the first stage of the WISLA program may delay the procurement of short range systems and create a generation gap after the retirement of the SA-3, SA-6 and SA-8 missile systems. A new generation of short-range surface to air missile systems might allow Poland's ground based air defenses to continue utilizing older generation systems through operations in air defense clusters. However, when choosing new short range surface to air missile systems, their ability to operate in a network-centric environment should be taken into account. The capability of simultaneous engagement of several air threats by a single anti-aircraft combat vehicle within a radius of up to thirty kilometers and at altitudes of up to several kilometers seems achievable in both technological and economic terms. High mobility, obtaining information about the air situation from external sources and having passive detection systems in addition to radar should ensure the survivability of short range surface to air missile systems on the modern battlefield. The acquisition of more medium-range surface to air missile systems capable of defending against tactical ballistic missiles will require more detailed consideration. In professional discussions among air defense specialists, the argument of the proportion of costs incurred in relation to the expected operational effects is raised. It is difficult to say with full conviction that Poland will be able (at least in economic terms) to create a missile defense system that will be fully effective against a barrage of tactical ballistic missiles from a potential aggressor. If the effectiveness is not high enough, does it justify maintaining such a high political priority for missile defense among the activities for the development of the national air defense system? Full use of surface to air missile systems capable of ballistic missile defense also requires satellite reconnaissance assets that provide sufficient early warning.
Medium range surface to air missiles may be used to engage air threats, but they cannot perform missile defense and air defense tasks simultaneously. Because of their acquisition costs, medium-range surface to air missile systems will place a heavy burden on the state budget, which creates potential problems in the event of disruptions in the financing of the technical modernization program of the armed forces. The purchase of medium-range surface to air missile systems before short-range ones may leave insufficient financial resources for the latter, or at least delay their procurement. Such a scenario may affect the capability of Poland's air defense system to inflict maximum attrition on an air opponent, and consequently reduce its ability to deter a potential aggressor effectively. If, over the next decade, the national air defense system does not receive a new generation of short range surface to air missile systems, then two batteries of medium-range surface to air missile systems will not constitute a credible potential for military deterrence against hypothetical armed aggression directed against our state. The national air defense system would then be able neither to provide effective protection for key forces and infrastructure nor to inflict maximum attrition on an air opponent. Therefore Poland's air defense system should, in the long term, have both short- and medium-range surface to air missile systems. Nevertheless, given the state's ability to finance the technical modernization program over the next decade, it seems rational to shift the focus to short-range surface to air missile systems. Subsequently, a second phase of acquiring medium-range surface to air missile systems might increase the capacity for ballistic missile defense. Such an approach will ensure a faster increase in the operational capabilities of the national air defense system in engaging air threats and reduce the risk of delays associated with the technological immaturity of missile systems. The concentration of efforts on the modernization of short and medium-range surface to air missile systems should not stop the development of the other components of Poland's air defense system. The fighter component of the national air defense system should be assessed as satisfactory. However, over the next few years, a midlife upgrade will be needed for the F-16 aircraft and replacement of the MiG-29 seems inevitable. Some of these activities are already being undertaken, e.g. the purchase of new AMRAAM air-to-air missiles, but in the coming decade there will be a requirement to replace the on-board radar with an active electronically scanned array (AESA) one. Such an approach, combined with increasing the survivability of fighter bases, appears to be a desirable course of action with regard to the airborne assets of the national air defense system. The development of the surveillance and command and control subsystem of Poland's air defense system should take into account making it more passive and non-cooperative. Plans to introduce the Integrated Air and Missile Defense Battle Command System may result in a networked architecture that will enable an any-sensor, best-shooter philosophy in active air defense operations. Aside from improvements to ground radar systems, some improvements to optoelectronic and electronic surveillance and target acquisition should be made. Poland's air defense system needs substantial improvements in the field of passive air defense.
It needs electronic warfare systems that allow effective disruption of enemy aircraft onboard weapons control systems, communications and navigation systems. It is also necessary to increase the use of multispectral deception and camouflage systems and to harden selected infrastructure of the air defense system. In parallel with the acquisition of new generations of surface to air missile systems, Poland's air defense system needs electronic warfare systems for electronic defense and electronic attack, to be able to nullify the effectiveness of enemy anti-radiation missiles and to disrupt the enemy communications and navigation needed to support air operations.

Summary
Decisions related to the development of Poland's air defense system require an honest assessment of the operational context, as well as of the financial and economic conditions necessary for the fulfillment of the planned changes. Such a comprehensive approach is needed to propose well-reasoned operational, organizational and technical solutions. The development of Poland's air defense system should not be seen as a one-off undertaking that will result in a state of the art system, but rather as a long-term process that needs both stability and flexibility of approach. Due to the changes in the security environment resulting from the aggressive policy of the Russian Federation in recent years, the urgency of increasing the operational capabilities of Poland's air defense system seems warranted. By doing so, Poland will be able to increase its military security and make its military deterrence more credible. The development of Poland's air defense system needs a realistic definition of the level of national ambition and the consistent implementation of approved concepts and plans through the long-term process of technical modernization. In the short term, priority should be given to the modernization of ground-based surface to air missile systems, primarily the short-range ones. In the longer term, the harmonization of modernization efforts will be needed to prevent generation gaps within the national air defense system. The modernization requirements of Poland's military must be carefully confronted with the financial capabilities of the state, which calls for a reassessment of the priorities for specific operational capabilities. This applies primarily to the modernization of the missile defense capability. While in the long term it is desirable, in the shorter timeframe priority should be given to capabilities related to defense against air threats, and consequently short-range surface to air missile systems should be given due attention. Poland's air defense system also needs substantial efforts to improve capabilities related to passive air defense, to complement active air defense operations.
2020-04-09T09:04:28.508Z
2020-02-21T00:00:00.000
{ "year": 2020, "sha1": "133e5dd3ee00a91fea51314bb4791e9b052b7fd9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.37105/sd.44", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a41524fbb83d3b5a85fd2d18b31b9c872b8b0a03", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Engineering" ] }
55945737
pes2o/s2orc
v3-fos-license
Effect of BA Treatments on Morphology and Physiology of Proliferated Shoots of Bambusa vulgaris Schrad. ex Wendl in Temporary Immersion

Axillary buds, collected from greenhouse-grown plants of Bambusa vulgaris Schrad. ex Wendl (B. vulgaris), were incubated on a static liquid culture medium, Murashige and Skoog (MS) medium with 2% (w/v) sucrose, supplemented with 12.0 μM 6-benzyladenine (BA). They were then transferred to a temporary immersion system (TIS) using liquid MS medium supplemented with 0 (CK-free medium), 6.0, 12.0 or 18.0 μM BA, and morphological and anatomical indicators were measured. BA influenced the in vitro multiplication of B. vulgaris. The best results were achieved in the TIS with 6.0 μM BA, which increased the number of shoots (5.1 shoots/explant) in the absence of hyperhydric shoots. The results demonstrated that the water content of the shoots increased with 12.0 and 18.0 μM BA every four hours. Furthermore, these high levels of BA led to a lower accumulation of phenolic compounds and a lower lignin content. Total chlorophyll significantly increased with 6.0 μM BA but decreased with the other treatments. These results favor increasing the number of shoots per explant during in vitro multiplication. They will also help to optimize the in vitro culture conditions, leading to an improvement of in vitro propagation methods for this species.

Introduction
B. vulgaris (Bambusa vulgaris Schrad. ex Wendl) is considered the most important species of the genus Bambusa globally. The establishment of new industrial forest plantations to meet the high demand for reforestation, soil recovery, environmental protection, house building, furniture making, agricultural implements, handicrafts and paper production requires lengthy periods of time. Different methods of propagation could be used to assist in the development of plantations of this species. Tissue culture is the most commercially feasible method to produce bamboo plants that are as uniform as possible on a large scale and within a short space of time [1].
Various propagation protocols using semi-solid media systems have been described [2][3][4][5]. However, it is well known that mass propagation of plants by tissue culture in conventional semi-solid media is labor intensive and costly. Gelling agents contribute significantly to in vitro production costs and limit the possibility of automation for commercial mass propagation. Consequently, new studies on in vitro propagation using different culture conditions can contribute to further optimization of the process and to reducing production costs [6]. Using liquid media in micropropagation processes is considered the ideal solution for reducing plantlet production costs and enabling automation [7]. Nevertheless, the advantages of in vitro culture in a liquid medium are often counterbalanced by technical problems such as asphyxia and hyperhydricity, a typically stress-induced change manifested in morphological, anatomical and physiological disorders [8,9]. Various procedures have been developed to avoid these problems [9]. These include the use of temporary immersion systems (TIS) to improve in vitro growth and plant quality in different species [10]. However, only a few bamboo species, such as Bambusa ventricosa [11], Guadua angustifolia Kunth [12], and Dendrocalamus latiflorus [13], have been propagated by TIS. The aim of this report was to determine the optimal concentration of 6-benzyladenine (BA) for Bambusa vulgaris shoot proliferation in a TIS, and to clarify whether different concentrations of BA influence the multiplication. Here we present the first report of the successful propagation of B. vulgaris in this semi-automatic system.

Plant Material and Growing Conditions
Axillary buds were collected from greenhouse-grown plants cloned from culms and branches selected in the field according to the Technical Instructions for vegetative propagation of B. vulgaris [14]. Shoots were surface sterilized with ethanol (70% v/v) for 3 s. After rinsing three times with sterile distilled water, the explants were dipped in an aqueous solution containing 2% sodium hypochlorite and 0.2 ml Tween-80 for 20 min, followed by three rinses in sterile distilled water. The explants were then placed in individual test tubes (25 mm × 150 mm) containing 10 ml of static liquid culture medium [15], supplemented with BA (6.0 µM), myo-inositol (100 mg·L−1) and sucrose (3%; w/v), to induce bud sprouting. The pH of the medium was adjusted to 6.0 before autoclaving. After 20 days, the aseptic shoots were placed into a liquid basal MS proliferation medium [15], supplemented with BA (12.0 µM), myo-inositol (100 mg·L−1), sucrose (3%; w/v) and vitrofural (116 mg·l−1). Shoots were cultured in 70 ml of proliferation medium in Magenta jars (Sigma Aldridge Company Ltd.) and were continuously subcultured at 3-week intervals. Clusters of shoots that developed were divided into smaller clumps of three shoots. Cultures were incubated at 25˚C ± 2˚C with a 16 h photoperiod (fluorescent lamps with a photon flux of 40 µmol·m−2·s−1). After the second subculture, explants were inoculated into the TIS. For the liquid treatment, 30 explants were cultured for 3 weeks under conditions similar to those described above and evaluated at the same time as the TIS. Three replicates were included.
TIS and Experimental Design
The concept and operation of the TIS used in the experiments were based on the RITA® vessel developed by CIRAD [16], made up of two compartments of 250 ml capacity each (Figure 1(a)). Four shoots with two pairs of fully expanded leaves were inoculated per vessel, each containing 225 ml of liquid proliferation culture medium. The system was programmed to transfer the medium and immerse the explants for 1 minute every 6 h. Three concentrations of BA (6.0, 12.0, and 18.0 µM) and a cytokinin-free medium (CK-free medium) were assayed. In order to test the profitability of the TIS culture system relative to conventional culture methods, a control treatment consisting of shoots cultured on a conventional liquid basal MS medium (SS) supplemented with BA (12.0 µM) was included. The number of normal shoots (NS), number of hyperhydric shoots (HS), shoot length (cm) and number of leaves per shoot were recorded after 4 weeks of culture. Three culture vessels were used in each treatment and the experiment was repeated three times.

Measurement of Water Content
Liquid culture medium was removed from the vessel in order to determine the water content. The shoots were collected from the different treatments and then rinsed with distilled water. The fresh weight (FW) of the shoots was recorded immediately after harvesting, and the shoots were then dried for 48 h at 60˚C and their dry weight (DW) determined. The water content (WC) was calculated as: WC (%) = ((FW − DW)/FW) × 100.

Measurement of Total Phenol Content
To quantify phenol content, B. vulgaris shoots from the different treatments described above were sampled. The samples were freeze-dried and finely ground with a pestle and mortar in liquid nitrogen. The dried plant material (0.5 g) was weighed into a centrifuge tube and 10 ml of methanol was added. The suspension was shaken at 1000 rpm at room temperature on an orbital shaker (Thermomixer Compact, Eppendorf) for 1 h and then centrifuged at 8000 g for 10 min. The residue was re-extracted with 10 ml of methanol using the same procedure. The total phenol content was determined using the Folin-Ciocalteu method. Extracted solution (0.5 ml) was mixed with 1.0 ml of 50% Folin-Ciocalteu reagent. Then, 2.0 ml of 20% Na2CO3 was added to the mixture and incubated for 10 min at room temperature. After incubation at room temperature for 2 h, the absorbance of the solution was measured at 750 nm using a UV/VIS spectrophotometer (Beckman Coulter DU1800). The total phenol content was calculated from a standard curve of gallic acid, which was linear within the range 50 - 400 mg·l−1 (R2 = 0.9954). The results were presented as the mean of nine analyses and expressed as milligrams of gallic acid equivalents (GAE) per gram dry weight (mg GAE·g−1 DW) [17].
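As a worked illustration of the two assays just described, the Python sketch below implements the water-content formula and the conversion of a Folin-Ciocalteu absorbance reading into mg GAE per g DW. The standard-curve slope and intercept are hypothetical placeholders (the text reports only the linear range and R²), and a real analysis would substitute the fitted coefficients; the 20 ml extract volume corresponds to the two 10 ml methanol extractions of 0.5 g of dried material.

    def water_content_percent(fresh_weight_g, dry_weight_g):
        """WC (%) = ((FW - DW) / FW) * 100, as in the Methods."""
        return (fresh_weight_g - dry_weight_g) / fresh_weight_g * 100.0

    def gae_mg_per_g_dw(abs_750nm, sample_dw_g, extract_volume_ml,
                        slope=0.0025, intercept=0.01):
        """Convert A750 to mg gallic acid equivalents per g dry weight.
        slope/intercept stand in for the fitted gallic acid standard curve."""
        gae_mg_per_l = (abs_750nm - intercept) / slope      # concentration in extract
        gae_mg = gae_mg_per_l * extract_volume_ml / 1000.0  # total GAE in the extract
        return gae_mg / sample_dw_g

    print(water_content_percent(2.40, 0.30))               # 87.5 % water (hypothetical weights)
    print(round(gae_mg_per_g_dw(0.45, 0.5, 20.0), 2))      # ~7.04 mg GAE per g DW (hypothetical A750)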
Measurement of Total Lignin Content
The first three leaves of five randomly selected plants were collected to determine the constitutive lignin content in the leaves. The samples were processed immediately after collection and homogenized in liquid nitrogen using a precooled mortar and pestle. Subsequently, they were extracted in methanol and dried in a fume hood, a process that was repeated four times. From each sample, 200 mg were hydrolyzed in 4 ml of 72% H2SO4 (v/v) at 30˚C for 1 h. The hydrolyzate was diluted in 112 ml of water and maintained at 121˚C and 1.2 atm for 1 h. The solution was filtered using Whatman No. 41 filter paper and the solid residue was washed with water, dried in the fume hood and weighed on a balance (Sartorius); the percentage of cell wall residue relative to the 200 mg of each sample was then determined. Lignin content was expressed as the percentage of cell wall residue [18].

Chlorophyll Measurement
Chlorophyll content was determined according to [19] with modifications: one hundred mg of fresh leaves were homogenized with 1.7 ml of 80% acetone buffered with 2.5 mM sodium phosphate (pH 7.8), vortexed for 15 minutes and then centrifuged at 4˚C for 15 minutes at 3000 rpm. Absorbance was then measured at 663 and 645 nm, with 80% acetone as the blank. The chlorophyll concentrations were calculated using the formula given by [20].

Statistical Analysis
Data were analyzed using SPSS version 18 for Windows. Normality of the data was tested using the Kolmogorov-Smirnov test. The significance of differences was determined by analysis of variance (ANOVA), and differences among the mean values were assessed by Tukey's test at P < 0.05. All statistical tests were performed in Sigma Stat software. Data are presented as means ± standard error, and different letters in the tables and figures indicate significant differences at P < 0.05.

Effect of BA on Shoot Proliferation and Growth
The TIS improved in vitro B. vulgaris shoot proliferation after 3 weeks of culture. In the CK-free medium, a single shoot was produced per explant (Figure 1(c)). All shoots grown with 6.0 µM BA displayed a normal morphology (Figure 1(d)), with a mean of 7.80 NS per explant (Table 1). Results from the TIS culture showed that shoot multiplication was greater in the immersion system than with conventional propagation in static liquid culture medium. A mean of 4.90 NS per explant was recorded in the TIS plus 12.0 µM BA treatment, whereas a mean of 2.70 NS per explant was recorded in the static liquid culture medium at the same BA concentration (12.0 µM) (Table 1). Although 4.20 NS per explant were recorded with 18 µM BA (Table 1), this BA concentration was harmful because it decreased the number of shoots per explant, the length of the shoots and the number of leaves per explant (Table 1). Shoot height and number of leaves per shoot were significantly increased when 6.0 µM BA was used (Figure 1(c)), but both parameters decreased at the higher concentrations (Table 1). Our system produced the highest number of shoots reported in the literature for B. vulgaris to date, achieving 7.80 NS with 6.0 µM BA within 3 weeks of culture in the TIS, without the formation of hyperhydric shoots.

Measurement of Water
Fresh and dry matter accumulation were significantly increased when 6.0 µM BA was used (Figures 2(a) and (b)), but both parameters decreased at the higher concentrations.
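Reference [20] is not reproduced here, so the following sketch assumes the classical Arnon (1949) equations for chlorophyll in 80% acetone, together with the 1.7 ml extraction volume and 100 mg of tissue given in the Chlorophyll Measurement section above; the absorbance values are hypothetical and are included only to show the unit conversion to mg per g fresh weight.

    def total_chlorophyll_mg_per_g(a663, a645, extract_volume_ml=1.7, fresh_weight_g=0.1):
        """Assumed Arnon (1949) equations for 80% acetone extracts."""
        chl_a = 12.7 * a663 - 2.69 * a645          # mg per litre of extract
        chl_b = 22.9 * a645 - 4.68 * a663
        total_mg_per_l = chl_a + chl_b             # equivalently 20.2*A645 + 8.02*A663
        return total_mg_per_l * (extract_volume_ml / 1000.0) / fresh_weight_g

    print(round(total_chlorophyll_mg_per_g(0.62, 0.31), 3))  # ~0.191 mg/g FW (hypothetical A663, A645)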
Increasing concentrations of BA in the culture medium increased the total water content (Figure 2(c)). The shoots developing at all concentrations of BA accumulated proportionally more water than those cultured on the CK-free medium (Figure 2(c)). In general, water content is a physiological marker that defines the quality of in vitro shoots. The results show that increasing concentrations of BA in the culture medium increased water content and decreased shoot quality. High concentrations of BA in the culture medium disturb the auxin/cytokinin balance, so that the cells begin to divide and form many large cells with limited organelles and a high water content [21].

Measurement of Total Phenol Content and Lignin
A significant correlation was observed between total phenol content and BA concentration. Total phenol content was lower in the CK-free medium, in shoots cultured with 12 and 18 µM BA and in the static liquid culture medium than in shoots cultured with 6 µM BA (Figure 3). Quantifying the lignin content showed that BA concentration strongly affected lignin content among treatments: total lignin content was higher with 6 µM BA than in shoots cultured with 12 and 18 µM BA or in the static liquid culture medium. Lower total phenol and lignin contents coincided with the higher water content values (Figure 4). The change in the morphology of B. vulgaris shoots, which involved an increase in water content, was associated with a reduced content of phenols and lignin; this suggests an impairment of the metabolism of phenolic compounds, probably those forming lignins or their precursors, which may be a cause of physiological malformation.

Chlorophyll Measurement
A significant correspondence was observed between total chlorophyll content and BA concentration. The total chlorophyll content was four, three, two and one and a half times lower in shoots grown in the CK-free medium, with SSL, and with 12 and 18 µM BA, respectively, than in shoots cultured with 6 µM BA (Figure 5). Moreover, the higher total chlorophyll content coincided with the lower water content values (Figure 2) and with higher total phenol and lignin contents. The shoots cultured in static liquid culture medium showed no symptoms of hyperhydricity; however, their color ranged from dark green to light green, which coincided with the higher water content values and lower total chlorophyll content (Figures 2 and 5).

Discussion
According to the literature, BA is the most commonly used cytokinin in the micropropagation of bamboo, alone or combined with kinetin or auxin [1]. [12] suggested a protocol for the micropropagation of B. vulgaris using a basal MS static liquid culture medium supplemented with BA and TDZ; in this protocol, a mean of 3.8 NS per explant was produced within 10 weeks after subculture onto an elongation medium. [22] reported that placing the explants in MS medium supplemented with BA (20 µM) alone or with NAA (3.0 µM) resulted in the maximum number of shoots (32.39 shoots/explant); these shoots elongated to 2.23 cm within 4 weeks of culture. [23] achieved the highest frequency of shoot proliferation (4.5) on BA (13.3 µM) and IBA (2.0 µM) using nodal explants from mature bamboo shoots.
Several studies have confirmed that temporary immersion stimulates shoot proliferation and growth [24,25]. The high number of shoots obtained in the TIS is a consequence of the efficient gaseous exchange between the plant tissue and the gas phase inside the vessel. Multiple daily air replacements by pneumatic transfer of the medium prevent the accumulation of gases such as ethylene and CO2. Additionally, the uptake of nutrients and hormones over the whole explant surface ensures maximum growth [26]. The most important reason for the efficiency of the TIS is that it combines the advantages of liquid culture (increased nutrient uptake), improving the growth of the plantlets [27]. Shoot proliferation was proportionally enhanced by increasing concentrations of BA, as shown in Table 1, and in vitro shoot morphology was greatly affected by BA concentration. Reductions in the number of shoots per explant were observed at 12.0 µM BA; this BA concentration also harmed B. vulgaris shoot length and the number of leaves per shoot in static liquid culture medium. Similar results have been described in Bambusa tulda, Bambusa atra and Dendrocalamus giganteus [28] and Dendrocalamus hookeri [29]. However, the number of shoots per explant increased with 6.0 µM BA in the TIS. The positive effects of TIS on nutrient assimilation have been demonstrated during the growth and development of bamboo [30]. A significant correlation was observed between total phenol content and BA concentration. The lowest content of this important secondary metabolite was recorded in the CK-free medium. Low phenolic levels have been associated with poor morphological and anatomical functioning [31]. This malformation is associated with poor lignification and excessive hydration of tissues, which result in plantlets that cannot survive ex vitro conditions after transplanting [32]. This suggests that BA, by causing a decrease in the content of phenols and lignins in the shoots, could have a direct effect on the synthesis of proteins involved in the metabolism of phenolic compounds and the polymerization of lignins and their precursors. A significant difference was observed between shoots cultured in the TIS and shoots cultured in static liquid culture medium. In general, the shoots cultured in static liquid culture medium had a higher water content and lower total phenol, lignin and chlorophyll contents. These morphological changes represent an early hyperhydricity response in B. vulgaris shoots. It has been described that, compared with shoots cultured in a TIS, such shoots are characterized by a lower chlorophyll content, which causes their translucent appearance [33] and the photosynthetic inefficiency described for these shoots [34,35]. In conclusion, the use of a TIS improved bamboo micropropagation, enhancing both shoot proliferation and growth. The in vitro B. vulgaris shoots grown in 6.0 µM BA had a lower water content and greater phenol and lignin contents than the other groups. The concentration of 6.0 µM BA was most appropriate for B.
vulgaris shoot proliferation in TIS, since the number of NS was higher than in those cultured in static liquid culture medium. In the future, strategies such as the addition of osmotic agents such as polyethylene glycol, as well as CO2 supply to the vessel and the use of forced ventilation, may play an important role in improving bamboo plant quality without compromising the number of shoots achieved in this BA treatment. The Bambusa vulgaris shoots grown in static liquid culture medium with 12.0 µM BA, and those grown at the highest BA concentration (18 µM) in the TIS, had numerous anatomical defects and physiological disorders. The results of the current study provide, for the first time, information on the rapid and successful propagation of B. vulgaris by TIS.

Figure 1. Shoots grown in TIS with different concentrations of BA after 4 weeks of culture. a) Double-vessel system with B. vulgaris shoots; b) Shoots cultured on static liquid culture medium; c) Shoot from the CK-free medium; d) 6.0 µM BA; e) 12.0 µM BA; and f) 18.0 µM BA (immersion frequency every 6 h, immersion time 1 minute).

Figure 2. Effects of BA on the biomass of the B. vulgaris shoots grown in TIS after 4 weeks of culture. a) Total FW; b) Total DW; c) Percentage water content. * Each value is the mean for 80 shoots ± average range (standard error) of the mean.

Figure 3. Total phenol content in the B. vulgaris shoots grown in TIS with different concentrations of BA after 4 weeks of culture. * Total phenols were calculated as mg of GAE per g DW, and values represent the means ± standard error (n = 80). Different letters indicate significant differences as determined by the Tukey test at P = 0.05.

Figure 4. Total lignin content in the B. vulgaris shoots grown in TIS with different concentrations of BA after 4 weeks of culture. * Total lignin was calculated as % of cell wall residue, and values represent the means ± standard error (n = 80). Different letters indicate significant differences as determined by the Tukey test at P = 0.05.

Figure 5. Total chlorophyll content of B. vulgaris shoots grown in TIS with different concentrations of 6-BAP after four weeks of culture. Means with different letters on the bars indicate statistically significant differences (one-way ANOVA, Tukey, P ≤ 0.05, n = 80).

Table 1. The effect of BA on shoot proliferation of B. vulgaris after 4 weeks of culture in the TIS and liquid system. TIS: immersion frequency every 6 h, immersion time 60 s. SSL: shoots cultured on static liquid culture medium. NS: normal shoots. CK-free: CK-free medium. Values represent the means ± average range (5 shoots per TIS, 20 shoots per treatment, repeated three times, n = 80), and different letters in the same column indicate significant differences as determined by the Tukey test at P < 0.05.
2018-12-10T21:14:40.109Z
2014-01-24T00:00:00.000
{ "year": 2014, "sha1": "18cbb250725186658cf8e031866b03daceeed6e2", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=42433", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "18cbb250725186658cf8e031866b03daceeed6e2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
49322241
pes2o/s2orc
v3-fos-license
Genetics-based methods for agricultural insect pest management

Abstract
The sterile insect technique is an area-wide pest control method that reduces agricultural pest populations by releasing mass-reared sterile insects, which then compete for mates with wild insects. Contemporary genetics-based technologies use insects that are homozygous for a repressible dominant lethal genetic construct rather than being sterilized by irradiation. Engineered strains of agricultural pest species, including moths such as the diamondback moth Plutella xylostella and fruit flies such as the Mediterranean fruit fly Ceratitis capitata, have been developed with lethality that only operates on females. Transgenic crops expressing insecticidal toxins are widely used; the economic benefits of these crops would be lost if toxin resistance spread through the pest population. The primary resistance management method is a high-dose/refuge strategy, requiring toxin-free crops as refuges near the insecticidal crops, as well as toxin doses sufficiently high to kill wild-type insects and insects heterozygous for a resistance allele. Mass-release of toxin-sensitive engineered males (carrying female-lethal genes), as well as suppressing populations, could substantially delay or reverse the spread of resistance. These transgenic insect technologies could form an effective resistance management strategy. We outline some policy considerations for taking genetic insect control systems through to field implementation.

Correspondence: Michael B. Bonsall. Tel.: +44 (0) 1865 278916; e-mail: michael.bonsall@zoo.ox.ac.uk

Introduction
Many insects in agro-ecosystems are considered to be major global pests causing significant economic harm. For example, the pink bollworm Pectinophora gossypiella (Saunders), a specialist pest of cotton, originated in Asia and spread to America, Australasia and Africa in the 20th Century (Naranjo et al., 2002). It is now present in almost all cotton-growing countries, and is a key pest in many of them. The Mediterranean fruit fly ('Medfly') Ceratitis capitata (Wiedemann) is a highly invasive generalist attacking more than 250 host plants, and is one of the world's most economically important pests (CABI, 2016). Diamondback moth Plutella xylostella (L.), a pest of brassicas (including a number of vegetable and oilseed crops), has evolved resistance to all classes of synthetic insecticides, as well as to some biopesticides; it was the first insect observed to evolve field resistance to dichlorodiphenyltrichloroethane and to Bt (a biopesticide derived from the bacterium Bacillus thuringiensis). Diamondback moth costs the global economy an estimated US$4-5 billion per year through a combination of lost yield and costs of management (Zalucki et al., 2012; Furlong et al., 2013). Conventional control methods, particularly chemical insecticides, have often failed to prevent the enormous damage caused by insect pests, and advances in biology (rather than chemistry) have been harnessed to provide novel control options. One alternative against diamondback moth is Bt biopesticide in the form of sprays (Furlong et al., 2013). Genetically modified (GM) insecticidal crops express these Bt toxins to protect the plant from target pests. For example, Bt cotton defends against Lepidoptera, including pink bollworm (Carrière et al., 2003), by expressing Cry1Ac toxins, which are specifically lethal to Lepidoptera (de Maagd et al., 2001).
An area-wide method known as the sterile insect technique (SIT) has been very successful against pink bollworm and Medfly (Dyck et al., 2005) and is being improved using advances in molecular biology. For all three of these example species, new genetic insect control methods are being developed to tackle agriculturally important pest populations. In this review, we set out an overview of genetic insect control methods and, in doing so, we give an indication of how mathematical modelling is useful in providing insights (and exploring limitations) to these technologies in the absence of broad evidence from experimental field trials and observations. This area of research is highly interdisciplinary; our focus is on theoretical analysis, considering ecology and genetics together to help design, understand, test and implement these novel strategies for agricultural insect pest management. Sterile insect methods The idea of releasing sterile insects into wild populations as a pest management intervention was independently conceived in the 1930s and 1940s by geneticist A. S. Serebrowskii in Moscow; tsetse field researcher F. L. Van der Planck in what is now Tanzania; and E. F. Knipling at the U.S. Department of Agriculture (USDA) (Klassen & Curtis, 2005). Van der Planck and Serebrowskii focussed on sterility resulting from hybrid crosses between different species or different genetic strains. Knipling (1955) pursued the use of ionizing radiation to induce dominant lethal mutations causing sterility. In current practice, the SIT involves the mass rearing of the pest species on artificial diet, exposing very large batches of individuals to radiation to cause chromosome damage, followed by their release into a target area. When the released insects mate, the resulting eggs do not hatch because of the damage to genetic material in the parent's germ line. Sustained inundative releases are required. Sufficient sterile insects must be released for a long enough period to achieve a significant reduction in pest numbers, either suppression to a suitably low density or local population elimination. One important measure is a release ratio, or over-flooding ratio, of released sterile insects to wild fertile insects. The SIT is mating-based, relying on biology rather than chemistry to tackle pest populations. It is species-specific and so has no direct off-target effects on other species in the environment, and is best-suited to systems where a single species is the major cause of harm. Area-wide SIT programmes have achieved success on very large scales (Dyck et al., 2005). Decades-long international campaigns have suppressed and eradicated the New World screwworm Cochliomyia hominivorax (Coquerel) from the U.S.A., Mexico, Belize, Guatemala, Honduras, El Salvador, Nicaragua, Costa Rica, Panama and some Caribbean islands (Vargas-Teran et al., 2005). Continuing releases in Panama form a barrier to prevent reinvasion into Central and North America from South America. SIT was also deployed against a screwworm outbreak in Libya. Screwworm is a myiasis pest whose larvae develop in living tissue of vertebrates, notably cattle, but also other livestock, wildlife and occasionally humans (e.g. in wounds). Freedom from infestation in those countries has produced significant economic benefits that vastly outweigh the costs of intervention (Vargas-Teran et al., 2005). SIT programmes typically release both males and females, lacking a practical method to sort the sexes easily in large numbers. 
This is inefficient because the released sterile females and males tend to court and mate with each other rather than seeking out wild mates. Male-only releases are generally more efficient than mixed sex releases; a large-scale study of irradiated Medfly quantified this as being three- to five-fold more efficient per male (Rendón et al., 2004). Early removal of females (eggs or early larval instars) in the generation destined for release also potentially saves on rearing costs as only the males need to be housed and fed. Genetics-based variants of the SIT are being developed (Thomas et al., 2000;Alphey, 2014;Alphey & Alphey, 2014). Various insect species, crop pests and human disease vectors are undergoing trials ranging from laboratory experiments to large-scale open releases (Gong et al., 2005;Ant et al., 2012;Harris et al., 2012;Harvey-Samuel et al., 2015). Insects have been engineered with a self-limiting construct conferring a dominant lethal phenotype. Male insects that are homozygous for that transgene are released to mate with wild females, whose progeny inherit the dominant lethal and so are unable to survive to reproductive maturity (Fig. 1A). The population-level outcome is then identical to SIT: a reduction in population size. To allow such insect strains to be produced and mass reared, the construct is repressible. The laboratory or factory diet contains an antidote that switches off expression of the lethal effector gene. In the simplest version, this works in a similar manner to radiation-based SIT. Compared with sterilizing doses of radiation, the targeted nature of genetic engineering generally mitigates fitness reductions in transgenic insects; although some detriment in performance might occur, there will be little or no effect in the most promising candidate genetic lines (Marrelli et al., 2006;Harvey-Samuel et al., 2014). This enables SIT-like applications in further species for which a sterilizing dose of radiation can cause too much collateral damage to somatic cells and/or tissues. As the phenotypes of these novel constructs are engineered rather than caused randomly by irradiation, molecular biologists can design when and where a lethal gene is expressed. One important outcome is dominant female-specific lethality, alternatively described as male-selecting constructs (Heinrich & Scott, 2000;Thomas et al., 2000;Fu et al., 2007, 2010;Wise de Valdez et al., 2011;Ant et al., 2012;Labbé et al., 2012;Jin et al., 2013;Tan et al., 2013) (Fig. 1B). Daughters of homozygous transgenic males are not viable, except when reared on diet containing the repressor. Their sons survive, possibly with minor fitness costs. This sex-specificity can be exploited to enable male-only release because removing the repressor from the final generation of insects prior to release results in only males surviving, thus achieving sex-separation by 'genetic sexing'. Another novel genetic trait, developed in container-dwelling mosquito species (Phuc et al., 2007), has lethality occurring after the immature life stage that is affected by density-dependent competition mortality and before reaching maturity; adult females are harmful as they bite and transmit disease.
Figure 1 Genetics. Engineered males carry two copies of a genetic construct 'L', which is a repressible dominant lethal. The wild-type allele 'w' represents the absence of the engineered construct. All offspring of released males and wild females inherit one copy of the dominant lethal. In (A), those progeny are not viable. In (B), the engineered lethality is female-specific; thus, female progeny are nonviable and male progeny survive and can pass on the construct to half of their own offspring.
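As a minimal illustration of the crosses summarized in Figure 1, the short Python sketch below enumerates the offspring of a released homozygous 'LL' male mated to a wild 'ww' female under a bisex-lethal versus a female-specific construct. The function and its names are illustrative assumptions rather than code from the cited studies, and the repressor is assumed to be absent in the field.

from itertools import product

def offspring(father="LL", mother="ww", female_specific=True):
    """Enumerate offspring genotype, sex and fate for a single mating.
    Assumes Mendelian inheritance and no repressor in the field diet."""
    results = []
    for paternal, maternal, sex in product(father, mother, ("female", "male")):
        genotype = paternal + maternal
        carries_lethal = "L" in genotype
        # A bisex-lethal construct kills every carrier; a female-specific one
        # kills only carrier daughters, so carrier sons survive and breed.
        dies = carries_lethal and (not female_specific or sex == "female")
        results.append((genotype, sex, "dies" if dies else "survives"))
    return results

for genotype, sex, fate in offspring(female_specific=True):
    print(genotype, sex, fate)   # every offspring is Lw; daughters die, sons survive

The same one-locus logic, run with the repressor 'switched on' in the rearing diet, is what allows a female-lethal strain to be mass reared and then released as a male-only cohort ('genetic sexing').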
Where the juvenile stage of a pest insect causes damage to plants, such a late-acting trait is less attractive. Variations also include tissue-specific expression to render female mosquitoes flightless and therefore unable to feed or mate (Fu et al., 2010;Wise de Valdez et al., 2011;Labbé et al., 2012). An alternative, non-genetic variant of the SIT, known as the incompatible insect technique (IIT), is being developed in mosquitoes (Bourtzis, 2008). This involves the sustained release of males infected with Wolbachia, an intracellular bacterium that interferes with reproduction, rendering matings between released males and wild Wolbachia-free females infertile (Werren et al., 2008;Bourtzis et al., 2014). Although IIT may be applicable to the integrated management of vectors, as far as we are aware, there are (as yet) no proposed applications of this technology to plant pest insects; one key reason is the need for a highly effective sexing strain or sex-separation method to enable only males to be released, because any released females could establish the Wolbachia infection in the wild population, thereby removing the incompatibility (Bourtzis, 2008). Agricultural pest management: mathematical modelling Starting with the USDA in the 1950s (Knipling, 1955), mathematical modelling has long been used to understand the potential effect of sterile insect methods on an insect population (Alphey & Bonsall, 2014b). Models can address research questions relevant to a particular context, whether the target insect is a plant pest that causes damage when ovipositing, through feeding or by transmitting plant pathogens, or is a vector of human, livestock or wildlife diseases. Those research questions can serve a range of purposes, including helping to understand underlying biological processes, designing appropriate traits, predicting the impact of fitness costs, informing the design and evaluation of experiments, or exploring potential benefits. A common theme in this work is to combine ecology and genetics. For example, modelling the effects of larval competition and exploring late-acting lethal phenotypes in mosquitoes predicted that this could be substantially more effective at population control than an early-acting (e.g. embryonic) lethality or radiation-induced sterility (Atkinson et al., 2007;Phuc et al., 2007;Alphey & Bonsall, 2014a). Indeed, if density-dependent juvenile competition were over-compensatory, genetic lethality that occurred at an earlier stage, thereby freeing survivors from regulation by intense competition, could push adult insect numbers higher than in the natural uncontrolled population (Yakob et al., 2008;Alphey & Bonsall, 2014a). This multidisciplinary approach can be broad; population dynamic models incorporating density-dependent competition were combined with epidemiological models to investigate the potential effect of releases on a mosquito-borne disease in a human population (Atkinson et al., 2007;Alphey et al., 2011a) and linked with bio-economic and health economic models to estimate the potential cost-effectiveness of this novel vector control (Alphey et al., 2011a). Similar multicomponent modelling approaches could be applied to plant pests, to explore potential for cost-effective population control to limit crop yield losses.
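The contrast between early- and late-acting lethality under over-compensatory competition can be made concrete with a minimal discrete-generation sketch. The Ricker-type survival term and all parameter values below are illustrative assumptions chosen for exposition, not the formulations used in the studies cited above; a viable-mating fraction of 0.5 corresponds to sustained 1 : 1 releases of homozygous males.

import math

def next_adults(adults, fertile_fraction, fecundity=6.0, k=1000.0, early=True):
    """One generation with Ricker-type (over-compensatory) juvenile competition.
    fertile_fraction: share of matings producing viable offspring (1.0 = no control).
    early=True removes doomed offspring before competition; early=False removes them after."""
    if early:
        juveniles = fecundity * adults * fertile_fraction   # doomed eggs never compete
        return juveniles * math.exp(-juveniles / k)
    juveniles = fecundity * adults                          # all juveniles compete first
    return juveniles * math.exp(-juveniles / k) * fertile_fraction

def long_run(fertile_fraction, early, n0=200.0, generations=300):
    n = n0
    for _ in range(generations):
        n = next_adults(n, fertile_fraction, early=early)
    return n

print(f"uncontrolled adults:      {long_run(1.0, early=True):.0f}")
print(f"early-acting, 50% viable: {long_run(0.5, early=True):.0f}")
print(f"late-acting, 50% viable:  {long_run(0.5, early=False):.0f}")

With these toy numbers the late-acting construct suppresses the adult equilibrium well below the uncontrolled level, whereas the early-acting construct at the same viability releases survivors from intense competition and leaves more adults than no control at all, which is the over-compensation effect described above.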
In a simple deterministic population dynamic model with density-dependent regulation (Alphey et al., 2011a), pest numbers approach a natural equilibrium, or oscillate around it (Fig. 2). Genetic control using modified males can be incorporated by scaling reproductive growth by the fraction of matings that produce viable progeny (the number of fertile males divided by the total number of males, assuming a well-mixed, randomly mating population) (Alphey & Bonsall, 2014b). The density dependence term in the formula is adjusted according to whether the genetically-induced mortality occurs before or after the competition takes effect (Alphey & Bonsall, 2014b). Critical thresholds, or tipping points, are a common feature of models of genetic control (e.g., Atkinson et al., 2007;Phuc et al., 2007;Alphey et al., 2009, 2011b;Alphey & Bonsall, 2014a,b). A 'release ratio' or 'overflooding ratio' can be defined in various ways, such as the ratio of engineered males to the number of wild males at natural equilibrium to be maintained at constant value through sustained ongoing releases balancing out mortality, or as a fixed proportion of released males to males emerging in the wild. Whatever the precise definition used, there exists a critical ratio. If engineered males are released at a sustained ratio higher than that critical threshold, the population will be eliminated. If the release ratio is below the critical value, the population attains a new, lower, equilibrium density. Such population suppression would be considered a practical success if that new lower equilibrium were below an economic harm threshold for a crop pest. Moreover, this suppression might be of ecological benefit because the species is not totally eliminated from an ecosystem. A population cannot, however, be suppressed arbitrarily close to zero and there is a clear switch from suppression to elimination as the critical release ratio is passed. The impact of mating competitiveness of released insects can be explored using population models. A key practical result of a mathematical model of genetic control of the mosquito Aedes aegypti (L.) (Phuc et al., 2007), which transmits viruses including dengue, yellow fever and Zika, was that engineered males would need to achieve 13-57% of matings to produce sufficient suppression to reduce the disease burden of dengue virus. This model prediction has guided assessment of the performance of the strain in open field trials (Harris et al., 2011, 2012). Populations of Ae. aegypti have been suppressed successfully in field trials in the Cayman Islands (Harris et al., 2012), Panama (Gorman et al., 2015) and Brazil (Carvalho et al., 2015). In principle, genetic sterile insect methods could work alone. However, they are more likely to be practical, cost-effective and sustainable (delaying the evolution of resistance) when used in combination with other approaches.
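Complementing the sketch above, the critical-ratio behaviour can be shown with a Beverton-Holt-type recursion in which the viable-mating fraction is 1/(1 + c) for a sustained release ratio c of engineered to wild males. The functional form and parameters are again illustrative assumptions rather than those of the cited models; for this particular form the critical ratio works out to R0 − 1, i.e. 4 with the numbers used here.

def bh_step(n, c, r0=5.0, k=250.0):
    """One generation of a Beverton-Holt model with sustained male releases.
    n: wild adults; c: ratio of released to wild males; r0: low-density growth factor."""
    fertile_fraction = 1.0 / (1.0 + c)    # wild males / (wild + released males)
    return r0 * fertile_fraction * n / (1.0 + n / k)

def final_size(c, n0=1000.0, generations=80):
    n = n0
    for _ in range(generations):
        n = bh_step(n, c)
    return n

for c in (0.0, 3.0, 5.0):   # no releases, below and above the critical ratio
    print(f"release ratio {c:.0f} : 1 -> adults after 80 generations = {final_size(c):.1f}")

Below the threshold the pest settles at a lower equilibrium (suppression); above it the numbers collapse towards zero (elimination), reproducing the sharp switch described above.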
Integrating pest management methods Genetic insect control methods need not be directly aimed at population suppression. The female-lethal, or male-selecting, versions could in principle be used to help manage resistance to other control methods. First, consider an example of another plant pest control method using GM technology: insecticidal crops. Transgenic Bt crops are engineered to express insecticidal toxins derived from Bacillus thuringiensis, causing mortality to susceptible insects eating the plant (Tabashnik et al., 2013). Effective Bt crops are valuable and there is a strong economic threat from the propensity of insects to evolve resistance. The primary approach used to slow the evolution of resistance is known as the high-dose/refuge strategy and this is mandatory in some countries. The effectiveness and dominance of resistance to toxins are often dose-dependent. Commercial crop varieties are designed to express a 'high dose' of the relevant Bt protein, so that, if any allele in the population is able to confer resistance, the amount of toxin expressed will be sufficient to kill resistant heterozygotes. If this is achieved, the resistant allele is functionally recessive. Planting high-dose Bt crops across an entire landscape would likely lead to the rapid spread of resistance because the only individuals that could survive would be homozygous. The 'refuge' part of the strategy provides an area of nontransgenic plants (either a conventional variety of the crop or an alternative host plant species) to serve as a safe harbour for susceptible insects. This acts as a source of susceptible alleles and helps to dilute and slow the evolution of resistance by providing susceptible mates for resistant insects so that their progeny are heterozygous and are killed by the toxin (Tabashnik et al., 2008, 2013). In terms of the genetics, consider a resistant allele r, which is initially rare. The dominant allele S is susceptible to Bt. If the high dose assumption is achieved, only some rr individuals can survive a full life cycle on transgenic plants and emerge as adults to mate. In the refuge, most emerging adults will be susceptible SS, especially if the r allele has fitness costs in the absence of the Bt toxin. If the refuge is located so that the two subpopulations are well-mixed, most resistant rr survivors will mate with susceptible SS insects from the refuge. Their resulting Sr progeny cannot survive on the Bt crops and so will not pass on the resistant allele to future generations. Unfortunately, a few insect species, such as the economically important pests Helicoverpa zea (Boddie) and Helicoverpa armigera (Hübner) (both species, confusingly, known as both bollworm and corn earworm), have been identified in the past as 'moderate dose pests', where the toxins were unable to kill all heterozygotes (EPA, 1998;Tabashnik et al., 2008). Even where its main assumptions appear to hold, the high-dose/refuge strategy is predicted only to delay resistance and, after two decades of commercially grown Bt cotton and Bt corn (maize), some field-evolved resistance has now been observed and reported in a variety of insect species. In some cases, this has already led to reduced efficacy of crops, or even crop failures. A comprehensive review is provided by Tabashnik et al. (2013). In economic terms, the high-dose/refuge strategy is an inter-temporal trade-off, sacrificing current value to retain value generated in future (Frisvold & Reeves, 2008). Refuge plants will be damaged, which reduces their yield if they are crops, or reduces the area allotted to crop production if alternative host plants are used. This damage or lost production is tolerated, in return for prolonging the efficacy of the protection afforded by the Bt crops. In principle, a conventional crop refuge could be sprayed with another pesticide (one with no cross-resistance between its active ingredient and Bt), although doing so reduces its effectiveness as a refuge.
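A minimal single-locus recursion captures the refuge logic described above. It assumes random mating, a fully recessive resistance allele with no fitness cost, and a perfectly high-dose crop on which only rr insects survive; the refuge sizes, starting frequency and time horizon are illustrative, not values from the cited studies.

def next_r_frequency(q, refuge):
    """One generation of r-allele frequency under a high-dose/refuge scheme.
    q: current r-allele frequency; refuge: fraction of eggs laid on non-Bt plants."""
    p = 1.0 - q
    ss = p * p * refuge          # susceptible homozygotes survive only in the refuge
    sr = 2.0 * p * q * refuge    # heterozygotes are killed by the high-dose Bt crop
    rr = q * q                   # resistant homozygotes survive on Bt crop and refuge alike
    survivors = ss + sr + rr
    return (rr + 0.5 * sr) / survivors

for refuge in (0.05, 0.20):
    q = 0.001                    # resistance starts rare
    for _ in range(150):
        q = next_r_frequency(q, refuge)
    print(f"refuge = {refuge:.0%}: r-allele frequency after 150 generations = {q:.3f}")

Shrinking the refuge from 20% to 5% of the habitat lets resistance sweep to near fixation within the same horizon in this toy recursion, which is the qualitative behaviour that mandated refuge sizes are intended to prevent.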
Next, consider combining Bt plants with a female-lethal genetic insect control programme. The males to be released should carry two copies of the lethal construct and should also be homozygous susceptible to Bt, SS. This results in introgression of genes through the male line, with male progeny of released insects inheriting an S allele and therefore passing it on to their offspring, at least in the refuge. Viewing this as an alternative source of susceptible alleles, we investigated whether mass-release of these toxin-susceptible insects could substantially delay or reverse the spread of resistance to Bt and reduce the need for a refuge. Simulation models were used by the U.S. Environmental Protection Agency when developing the regulations for Bt crops, and specifying resistance management requirements including minimum refuge sizes and spatial restrictions (Matten et al., 2012). We used a population genetic model to investigate the effect of releases of susceptible female-lethal engineered males on the evolution of Bt resistance over time. With plausible parameter values, the r allele can spread (Fig. 3A) and spreads faster with a smaller refuge. Generally, a frequency of 0.5 is an indicator of a serious problem because resistance frequency increases rapidly once r becomes the more common allele (Carrière & Tabashnik, 2001). With a very modest ratio of released males to males emerging in the wild (sustained over time), our models predict that releases can slow or reverse the spread of resistance to Bt crops. Although a typical suppression programme might aim for a ratio of 10 : 1 or more (Dyck et al., 2005), resistance management programmes could see an effect with ratios as small as 1 : 5 (i.e. where one-sixth of matings are with a modified male). Engineered insect releases allow an equivalent level of resistance management with a smaller refuge. If the initial r allele frequency is higher, more released males and/or a larger refuge are required to achieve a given effect. The release numbers and refuge size can be traded off, which would be at least in part an economic decision. The objective is to protect crops from damage. The resistant allele frequency is not important in itself. What really matters is whether the pest population is too large. Linking the population genetics model with a simple population dynamic model of a pest exhibiting exponential growth (if uncontrolled), we predicted that insect releases always improve population control (compared with Bt crops alone, with the same refuge size) because of the combination of resistance dilution and suppression (Fig. 3B). This difference may not be easily detectable in the early generations, but the outcomes diverge significantly when resistance spreads beyond 0.5 allele frequency without releases.
Figure 3 Resistance management. (A) Frequency of the resistant r allele in emerging adults and (B) population size relative to initial size, over time (insect generations) with no insect releases (black lines) and with release of toxin-susceptible modified males carrying a female-lethal genetic construct at fixed ratio of 1 : 2 to males emerging in the wild (grey lines). The released insects act in synergy with the Bt crops. This represents a generic pest for which resistance to Bt plants is recessive with partially-dominant fitness costs, where refuge is 4% of the habitat. Based on a deterministic, discrete-generation model reported in Alphey (2009).
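The dilution mechanism illustrated in Fig. 3A can be sketched by extending the refuge recursion above with releases of homozygous-susceptible males. The version below is a deliberately crude simplification, stated here as an assumption: daughters of released males die, their sons carry one paternal S allele, and the female-lethal construct is not tracked beyond those sons, so population suppression (Fig. 3B) is not represented; it is not the model of the cited work.

def next_r_with_releases(q, refuge, release_ratio):
    """r-allele recursion with sustained releases of SS, female-lethal males.
    release_ratio: released males per wild male; mating is assumed to be random."""
    p = 1.0 - q
    m = release_ratio / (1.0 + release_ratio)    # share of matings with released males
    # Offspring of wild fathers (both sexes viable with respect to the construct).
    ss = (1.0 - m) * p * p * refuge
    sr = (1.0 - m) * 2.0 * p * q * refuge
    rr = (1.0 - m) * q * q
    # Offspring of released fathers: only sons survive (half the clutch), and every
    # son carries a paternal S allele, so none of them survive on the Bt crop.
    ss += 0.5 * m * p * refuge
    sr += 0.5 * m * q * refuge
    survivors = ss + sr + rr
    return (rr + 0.5 * sr) / survivors

for release_ratio in (0.0, 0.5):                 # no releases versus 1 : 2 releases
    q = 0.001
    for _ in range(150):
        q = next_r_with_releases(q, refuge=0.05, release_ratio=release_ratio)
    print(f"release ratio {release_ratio}: r-allele frequency after 150 generations = {q:.4f}")

Even with this stripped-down bookkeeping, sustained 1 : 2 releases hold the resistance allele at a negligible frequency in a landscape where, without releases, it would sweep through a 5% refuge.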
Sterile insect methods can be used remedially where resistance has become widespread. A refuge cannot exceed 100% (i.e. plant no Bt crops at all), and by itself would rely on fitness costs to cause the r allele to decline. Higher release ratios can rapidly reduce the r allele frequency at the same time as directly reducing the population size. Releases over Bt crops can be used to reverse the spread of resistance, although the reduction of resistance to acceptably low levels needs an appropriate combination of release ratio and refuge size . Optimal control theory has been applied to a generic high-dose refuge scenario, exploring resistance management decisions for planting of Bt crops and refuges each season. A model using dynamic programming methods investigated the potential for fitness costs of resistance to enhance the delaying effects of the refuge, where the biological nature of those costs was a reduction either in fecundity or in relative competitive ability. Where the fraction of the landscape allocated to Bt crops is decided optimally, a resistance-related fecundity penalty allows for the planting of larger areas of Bt crops than equivalent costs that reduce the ability of resistant insects in density-dependent competition (Hackett & Bonsall, 2016). Work is ongoing to extend this approach to incorporate the genetic control of plant pests along with insecticidal crops or sprays. Female-lethal genetic control releases are potentially a useful addition to the resistance management toolkit. Theory predicts this combined approach could be effective at very low release ratios, much lower than would be needed for suppression or eradication by SIT, and permit much smaller refuges than would otherwise be needed and are typically mandated . Males engineered with female-lethality acting in synergy with Bt crops could be effective in a wide range of ecological and genetic scenarios . Building on this theoretical work, genetic strains of diamondback moth with the female-lethal trait have been developed and tested for fitness costs with population-level effects (Harvey-Samuel et al., 2014). Proof of principle of population suppression combined with resistance dilution has now been demonstrated in field-cage experiments with experimental Bt broccoli (Harvey-Samuel et al., 2015) and work is progressing towards open field trials to evaluate performance in an agricultural habitat (Cornell University, 2016;Oxitec Ltd, 2016). This example of single-toxin Bt crops is one illustration of using engineered insect releases to help manage resistance to another control method. The idea is also applicable in principle to other approaches, such as Bt biopesticide sprays or synthetic chemical insecticides. Successful sterile insect release programmes have been implemented as area-wide operations, usually with involvement and investment from local or national government and/or international government and agencies (Dyck et al., 2005). Growers' associations and similar bodies may also be involved including, for example, by contributing to programme funding through levies. The release of GM insects, as described above, for concurrent suppression of insect populations and dilution of resistance to another control method (such as Bt crops), could follow this precedent. The economics of particular pest and crop species would have to be considered in the context of any current regulation (e.g. mandatory refuge sizes for Bt crops) and environmental harm from current practices (e.g. 
insecticidal sprays), to assess whether such insect releases are likely to be of benefit to the various participating stakeholders. Intrinsic resistance dilution We can take this concept of using transgenic insects to manage resistance a step further. What if heritable resistance were to arise to the lethality of the genetic construct itself? Such resistance has not been reported in any species of engineered insects, whether in trials or at an earlier stage of technology development, although a hypothetical resistant gene can be described and modelled (Alphey et al., 2011b). Key features of a putative resistance allele (R) are: the effectiveness of the resistance (what fraction of RR individuals bearing a lethal allele can survive to maturity?); the dominance of resistance (the lethal-surviving proportion of SR genotypes, relative to RR homozygotes); and the magnitude and dominance of any fitness costs of the R allele in individuals that do not carry the lethal construct. With good quality control, released males can be assumed homozygous susceptible to the lethality of the genetic construct (S alleles) and do not carry any mutation or variant conferring resistance (R) that might be present in the wild population. In a similar vein to the Bt resistance management described above , any progeny of released males will inherit one copy of the susceptible S allele, as well as one copy of the lethal genetic construct, which provides resistance dilution. The S alleles of released males are inherited through their male progeny where the construct is female-specific, and are also passed on via any progeny that survive the lethal effect (i.e. resistant phenotype). So, if resistance is not recessive (some SR individuals can survive), a bisex-lethal mechanism will also have this inherent resistance dilution potential (Alphey et al., 2011b). There are two opposing causes of selection pressure. Progeny of released engineered males inherit: (i) the lethal allele, which favours resistance and (ii) a susceptible S allele, which dilutes resistance. The fitness advantages and disadvantages of resistant R alleles are negatively frequency dependent. Resistance is only beneficial in insects bearing the lethal construct, and the costs of resistance are mainly manifested in individuals that do not carry the lethal allele. The overall selection pressure is also dependent on the number of transgenic insects in the population and therefore on the release ratio deployed. We created a frequency-dependent population genetic model, with the number of released males kept in fixed proportion to the number of males in the current generation of the target population, aiming to explore how these competing forces play out (Alphey et al., 2011b). The model displays a complex array of possible outcomes. In summary, for some putative resistant alleles, the built-in dilution can be sufficient to drive any resistant allele extinct before it reaches observable frequency. More challenging resistant alleles, those with greater effectiveness against the lethal and lower fitness costs, can potentially spread through the population to an equilibrium level, although this will not necessarily have a major impact on the effectiveness of controlling the pest population (which depends on the reproductive rate of the species). Further work has extended this research idea to explore the effects of spatial structure using a two-deme metapopulation (Watkinson-Powell & Alphey, 2017). 
A nontarget population, linked by dispersal with the target population into which modified insects are released, can act either as a source of susceptible alleles (acting as a kind of refuge) or as a source of resistant alleles depending on the fitness properties of the resistant allele. The rate of dispersal also influences the outcomes. As a result, the presence of a nearby nontarget population could have a range of impacts from significantly hindering the control programme to significantly enhancing it. Further work has also demonstrated qualitatively similar findings to the original proportional-release model under the alternative, and arguably more practical assumption, that a fixed number of modified males is released into each pest generation (Thompson, 2015). Gene editing Heritable genetic 'sterility' is not the only genetics-based method being developed to control insect populations (Alphey, 2014;Burt, 2014). Recent advances in genetic modification have focussed on techniques of gene and genome editing. Molecular methods, including CRISPR ('clustered regularly interspaced short palindromic repeats') approaches, have been developed with the aim of precisely modifying genes (Esvelt et al., 2014;Kim & Kim, 2014). These techniques have the potential to drive genetic constructs through a population, incorporating 'gene drive' mechanisms that confer greater-than-Mendelian inheritance even if the construct has fitness costs. These gene-editing approaches have been developed in mosquitoes either to suppress vector populations, by affecting female fertility (Burt, 2003;Deredec et al., 2008;Hammond et al., 2016), or to modify a population, by spreading a trait that affects the ability to harbour pathogens (Gantz et al., 2015). Gene-editing approaches could also be used to suppress agricultural pests and/or manage resistance; for example, CRISPR gene editing has been used in a functional study to identify suitable gene targets in diamondback moth (Huang et al., 2016). However, considerable technical, ecological, regulatory and social engagement work remains to be carried out as these approaches move towards scalable field implementation. Interdisciplinary research: theoretical, laboratory and field Developing genetic approaches to insect control through to field applications is an interdisciplinary endeavour. Theoretical analyses such as those described in the present review are part of a much bigger picture, a composite of varied elements that must work together to achieve real change. Laboratory science is crucial for the creation of appropriate strains, particularly molecular biology and insect genetics. Applying this technique successfully to populations in nature is largely an exercise in applied ecology. For example, how many insects are in the target population? This is hard to measure, although it is a key element of the effective release ratios that will be achieved, and so influences the impact, duration and cost of a control programme. How might the effects of identified fitness costs scale up to population level? Insect behaviour is important; where do they mate and lay their eggs, and how far can they disperse? Released engineered males must be able to reach a significant proportion of females in the target population and be reasonably competitive for mates when they find them. 
Evolutionary biology and behavioural ecology must be understood, for example, to ensure that mass-reared insects retain appropriate mating behaviours, and to inform future resistance management strategies for self-sustaining genetic traits that will be designed to persist in the environment. A variety of performance measures are critical to success, including a lethal phenotype's conditionality (do transgenic insects survive on the antidote-containing diet?), lethality (do close to 100% of target insects die in absence of the repressor?) and sex-, tissue-or stage-specificity (does a female-lethal construct produce any detrimental fitness effects in males?), in addition to the longevity, flight ability, dispersal, mating behaviour and competitiveness of the released males. Candidate lines are selected and tested, assessing all these crucial performance measures in a stepwise series of experiments and trials progressing from test tube, through small cages, then large cages (semi-field conditions), to open release. Technology development, pilot studies and control programmes involve other disciplines beyond science; there are also regulatory, social and ethical dimensions with respect to implementing this approach (Lavery et al., 2008;Esvelt et al., 2014). Policy and regulation of genetic insect control Policy and regulations surrounding genetic insect control have developed and expanded in the last few years and continue to receive attention (e.g. House of Lords Science and Technology Committee, 2015). Based on existing environmental risk legislation, in most jurisdictions that have regulatory frameworks for these, the deliberate release of genetically modified insects requires proportionate assessment to ensure that wider biodiversity and/or human health is not adversely affected. Simultaneously, the benefits of suppressing agricultural pests, reducing harm and improving plant yields impinge on cost-benefit analysis in using particular control technologies. Across the European Union, Directive 2001/18 requires Member States to evaluate risks of releasing GM organisms (GMOs, whether plants, vaccines or animals). This is a risk (cost) based approach to the deliberate release of GMOs, which is based on the use of recombinant DNA technology (i.e. genetic modification) as the trigger for regulation. Contrasting regulatory processes exist. In Canada, for example, the legislation for 'plants with novel traits' outlines that the phenotypic effects (novel traits) of the plant are the basis for regulation, which is a more 'product-based' approach to triggering an environmental risk assessment (Canadian Food Inspection Agency, 2016). For GM insects, several guidance frameworks have been produced in recent years. The European Food Safety Authority (2012) published a regulatory framework for GM animals and the World Health Organization (WHO/TDR and FNIH, 2014) issued guidance for the testing and regulation of GM mosquitoes. Both of these asserted that a tiered approach from laboratory studies (focussed on the molecular biology and simple ecological processes) through to contained or confined field trials to commercial implementation should underpin environmental risk assessment in support of the development of GM insect technologies. At each point, risk assessment, risk management and risk communication ensures the validity of the emerging technology. 
Unlike plants, where genetic modifications are compared with an unmodified (conventional) plant, defining harm and identifying appropriate comparators for genetically modified insect pests requires more nuanced approaches. Proportionate to the technology and logically consistent with other pest intervention methods, appropriate ways of assessing the environmental risk might focus on changes in crop yields or other indirect measures of assessment (e.g. scale of insect damage). Developing regulations that consider the risks in the context of the potential benefits, and not as an isolated risk (or worse a conflated hazard), requires further consideration about the implementation of all types of area-wide control such as who benefits and who pays for these public goods. Public acceptance of GM technologies has varied across products (crops, insects, vaccines and insulin), as well as between countries and communities, and public engagement concerning GM insects will continue to be an important issue for developers and regulators (House of Lords Science and Technology Committee, 2015). Concluding remarks The need for new innovations to deal with emerging agricultural pests and diseases has never been greater, and this is an interesting time for the science and research of genetic control of insects. Some of the GM technologies described in the present review are already being proven in the field. The next wave of molecular methods is being applied to disease-transmitting mosquitoes and this is beginning to reach over to agriculturally important species. Attention is being given to regulatory aspects to enable the safe and appropriate implementation of these biological, genetics-based strategies. Collectively, these developments advance the prospects for realizing tremendous agricultural and socio-economic benefits. Further information This review is part of a larger project. Further information on our research and science policy work on genetic insect control can be found on the Mathematical Ecology Research Group website: https://merg.zoo.ox.ac.uk/projects/genetic-insect-control. Our outreach activities, including a mosquito control computer game are available at: https://merg.zoo.ox.ac.uk/outreach. Our science policy work can be seen at: https://merg.zoo.ox.ac.uk/sciencepolicy.
2018-06-22T00:23:56.773Z
2017-06-21T00:00:00.000
{ "year": 2017, "sha1": "80711999db360b12a32adad65b76ccbec7bd2633", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/afe.12241", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "80711999db360b12a32adad65b76ccbec7bd2633", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269929058
pes2o/s2orc
v3-fos-license
Gray matter volume of functionally relevant primary motor cortex is causally related to learning a hand motor task Abstract Variability in brain structure is associated with the capacity for behavioral change. However, a causal link between specific brain areas and behavioral change (such as motor learning) has not been demonstrated. We hypothesized that greater gray matter volume of a primary motor cortex (M1) area active during a hand motor learning task is positively correlated with subsequent learning of the task, and that the disruption of this area blocks learning of the task. Healthy participants underwent structural MRI before learning a skilled hand motor task. Next, participants performed this learning task during fMRI to determine M1 areas functionally active during this task. This functional ROI was anatomically constrained with M1 boundaries to create a group-level “Active-M1” ROI used to measure gray matter volume in each participant. Greater gray matter volume in the left hemisphere Active-M1 ROI was related to greater motor learning in the corresponding right hand. When M1 hand area was disrupted with repetitive transcranial magnetic stimulation (rTMS), learning of the motor task was blocked, confirming its causal link to motor learning. Our combined imaging and rTMS approach revealed that greater cortical volume in a task-relevant M1 area is causally related to learning of a hand motor task in healthy humans. Introduction Advances in neuroimaging analysis have led to the association of exceptional behavior, particularly the acquisition of complex cognitive and motor skills, with specific brain structures (Elbert et al. 1995;Gaser and Schlaug 2003;Hyde et al. 2009;Hamzei et al. 2012;Gerber et al. 2014;Sampaio-Baptista et al. 2014). For example, memorization of detailed spatial maps over an extended period (years) was associated with increased hippocampal volume (Maguire et al. 2000;Woollett and Maguire 2011), while learning complex visual motor skills, such as juggling, was associated with increases in gray matter of the temporal/occipital junction (Draganski et al. 2004, 2006). These findings have advanced the fundamental question of whether skill learning can alter brain structure in the mature brain. However, the question of whether initial, or preexisting, brain structure determines the capacity for skill learning is less studied. While a relationship between inter-individual differences in brain structure and inter-individual differences in learning has been reported in multiple studies (Kanai and Rees 2011;Sampaio-Baptista et al. 2014;Lehmann et al. 2020), whether these brain areas were of functional relevance for learning the particular skill has not been demonstrated. For example, greater gray matter volume of the medial occipito-parietal areas at baseline correlated with steeper learning slopes for juggling (Sampaio-Baptista et al. 2014). Similarly, greater gray matter volume in right orbitofrontal cortex was related to a participant's ability to improve performance on a whole-body balancing task on a seesaw-like platform (Lehmann et al. 2020). However, these tasks involve the whole body and the brain areas identified as being related to skill learning were not examined for functional relevancy (Sampaio-Baptista et al. 2014;Lehmann et al. 2020). Therefore, whether individual variability in functionally-relevant brain structures is causally linked to observed differences in behavioral change remains to be determined.
In the present study, we sought to determine the relationship between gray matter volume in a task-relevant brain area in the mature healthy human brain and inter-individual differences in task learning. For this purpose, we focused on the primary motor cortex (M1) for its known involvement in the early stage of motor consolidation with new skill acquisition. We hypothesized that baseline gray matter volume within an M1 area that is also active in a specific hand motor learning task is related to the capacity for gains in this task, and that disruption of this M1 area blocks the ability to learn the task. To test this hypothesis, we performed two studies in a group of healthy, middle-aged to older, right-handed participants. In the first study, we obtained structural magnetic resonance imaging (MRI) of participants' brains prior to learning how to manipulate a joystick with their fingers to reach targets of different sizes on a computer screen. Subsequently, task-based fMRI was used to determine, at a group level, the cortical area functionally active when executing this task. This region of interest (ROI) was further structurally constrained to fall within atlas-defined boundaries of M1 (Fischl et al. 2008). We measured each participant's gray matter volume within this group-level "Active-M1" ROI and related it to functional gains during learning of the skilled hand motor task. The second study was designed to establish causality between processes in M1 and learning of the skilled hand motor task. We used low frequency subthreshold repetitive transcranial magnetic stimulation (rTMS) of the hand area of left M1 to transiently disrupt M1 activity immediately after participants practiced the hand motor learning task (Muellbacher et al. 2002). Given evidence for the critical involvement of M1 in the early stage of motor consolidation (Muellbacher et al. 2002), we expected that learning of the skilled hand motor task would be disrupted by rTMS, but not sham stimulation, to the hand area of M1, thereby providing evidence for a causal link between processes in the task-relevant M1 gray matter and learning of the skilled hand motor task. Materials and methods Participants in both studies were recruited through Emory University and fulfilled the following criteria for inclusion: age between 50 and 80 years; no neurological or psychiatric disorders; normal neurological examination; no current usage of central nervous system-active drugs; no contraindication to MRI; and no structural MRI brain abnormalities as evaluated by a board-certified neurologist. All participants also demonstrated normal cognition on the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Total Scale index scores >2 standard deviations below the mean were considered abnormal) (Randolph et al. 1998). Handedness was determined by the Edinburgh Handedness Inventory (Oldfield 1971). All procedures were approved by the Institutional Review Board at Emory University and were in accordance with the ethical standards of the institutional research committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008. All participants gave written informed consent and were blinded to the stated hypothesis of the experiments. The studies were conducted as secondary analysis of healthy participants from two registered clinical trials (ClinicalTrials.gov identifier: NCT01726218 and NCT02544503); data are available upon request.
Overview of experiments: First, all participants underwent structural magnetic resonance imaging (MRI). Second, participants learned how to manipulate a joystick with their fingers to reach targets of different sizes on a computer screen. Third, task-based fMRI was used to determine, at a group level, the cortical area functionally active when executing the hand motor learning task. These data were collected at least 1 day after the motor learning task. This concluded data collection for Study 1. Study 2 was designed to establish causality between processes in M1 and learning of the skilled hand motor task. We conducted two experiments in a randomized order (see below for details). We used low frequency subthreshold rTMS or sham stimulation of the hand area of left M1 to transiently disrupt M1 activity immediately after participants practiced the hand motor learning task. MRI acquisition parameters Structural and functional MR images of all participants in Study 1 were collected on a Siemens Prisma 3 T MRI system with a 64-channel head coil. Structural images were acquired using the HCP Lifespan Project MPRAGE protocol (Harms et al. 2018): TR = 2400 ms, TE = 2.24 ms, flip angle = 8 degrees, FOV = 256 × 240 × 208 mm, 0.8 mm isotropic voxel dimensions. Eight task-based functional MRI runs were collected with the following protocol: TR = 2000 ms, TE = 28.0 ms, flip angle = 90 degrees, FOV = 192 × 192 × 102 mm, voxel size = 3 mm³, 163 volumes per run, total duration 44:00 min. Structural scans were collected and considered for inclusion into the study according to the inclusion criteria prior to participants' involvement in other parts of the study. Experimental design-hand motor learning task All participants completed a single exposure hand motor learning task that lasted 20-30 min per trained hand (Buetefisch et al. 2014;Barany et al. 2020). Participants completed 1 to 3 runs depending on their performance in the first run and their gains in skillfulness with subsequent practice. This learning paradigm was selected to avoid overlearning in participants with high baseline skillfulness. Training was conducted for both the right and left hand separately, with the order of initial left versus right hand training randomized across participants. The training task is a computerized behavioral paradigm (Presentation® software, www.neurobs.com) developed to measure a participant's ability to manipulate a joystick (Mag Design and Engineering, www.magconcept.com) between the thumb, middle, and index fingers to precisely guide a cursor to an array of target positions. Manipulating the joystick to reach the target required wrist extension/flexion movements of ∼5° in addition to finger movements. Four different target sizes were presented, which allows for parametric variation in the level of required precision. According to the speed-accuracy tradeoff described in Fitts's law, decreasing the size of a target increases the level of difficulty and results in longer movement times (Fitts 1954). By forcing participants to complete the task in a short, predefined period of time, accuracy was expected to decrease for smaller target sizes. Training-related improvement in accuracy at a similar or increased speed meets the criteria for motor skill learning (Censor et al. 2012).
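For reference, Fitts's original formulation of this speed-accuracy tradeoff relates movement time to an index of difficulty; the coefficients are task-specific empirical constants, and the relation is quoted here for context rather than taken from the study's own analysis:

MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{2D}{W}\right)

where MT is movement time, D the distance from the start position to the target, W the target width, and a and b are empirically fitted constants. Halving W (roughly the step from the extra-large to the medium target) raises ID by one bit; because the response window here is fixed at 2 s, the increased difficulty is expected to show up as reduced accuracy rather than as longer movement times.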
Participants sat comfortably in a dental chair ∼165 cm in front of a computer screen (29 × 51 cm). The forearm and proximal arm were supported to eliminate their involvement in the execution of the task. At the beginning of each trial, participants were presented with a blank gray screen followed by the display of a central cue square indicating the target size at the center of the screen (Fig. 1A). In the upper half field of the computer screen, either a small (5.3 × 5.3 mm), medium (9.3 × 9.3 mm), large (13.2 × 13.2 mm), or extra-large (17.2 × 17.2 mm) target (Fig. 1B) was presented at four possible locations (300°, 330°, 30°, and 60°). Participants were instructed to move the cursor square as quickly as possible into the target square, the appearance of which served as a "Go-signal." After the target square appeared, participants had 2 s to move the cursor into the target and maintain it in that position until the end of the 2-s trial. Correct movements were defined as originating from the start position at the center of the screen and resulting in an overlap between the center of the cursor and the target at the end of the 2-s movement period. The movement time was defined as the time elapsed between the Go signal and the time the center of the cursor reached the target. Accordingly, a movement was recorded as inaccurate when the participant did not reach the target within the 2-s period, failed to hold the cursor position within the target, or left the start position early. Participant feedback was provided in the form of presentation of the word "hit" or "miss" on the screen for 500 ms for each correct or incorrect trial, respectively (Fig. 1A).
Fig. 1. A) Targets appeared at one of four locations (300°, 330°, 30°, and 60° relative to the vertical meridian), after which participants were instructed to use the joystick to move the cursor to the target as quickly and accurately as possible; if the cursor began at the starting point and was inside the target 2 s after target presentation, participants received "hit" feedback, otherwise "miss" feedback. B) Four different target sizes were used to vary demand on precision. C) During fMRI acquisition, participants used an MRI-compatible joystick secured on their body to complete the task, with pads placed to support the arm and minimize elbow and shoulder movement; EMG electrodes were placed on participants' right and left extensor carpi ulnaris (ECU) muscles to monitor unimanual performance.
The total duration of each trial was 4 s. Trials were grouped into blocks of 28 trials with three blocks per run, and a 12-s rest period between blocks. In each block, each target size was presented seven times at a random selection of the four possible locations (4 sizes × 7 trials/size = 28 trials). At the end of the run, the percentage of "hit" trials for each target size across all three blocks was calculated. The performance level determined whether participants continued to practice the pointing task for up to 2 additional runs. For participants achieving at least 50% hits on the small target size, no further practice was done to avoid overlearning the task. All other participants continued to practice until they reached either a minimum performance level of 50% hits for the extra-large target or until they completed 2 additional runs. For participants with small target accuracies <50% but extra-large target accuracies >50% after the first run, practice continued until their performance improved to >50% hits on the largest target size for which they were less than 50% accurate after the first run or until two additional runs were completed (Wischnewski et al. 2016). Experimental design-hand motor learning task-based fMRI Subsequent to the motor learning task described above, a variation of the motor learning task was administered to participants during active acquisition of fMRI protocols in the scanner, with target sizes modified to accommodate the different participant positioning and viewing distance. Stimuli were displayed using Presentation® software (www.neurobs.com), and in separate runs participants manipulated the joystick with either the right or left hand as described for the motor learning task above (Barany et al. 2020). Targets were projected onto a screen that was viewed via a mirror mounted onto the head coil. With the participant in a supine position on the scanner bed, the base of the joystick was strapped to the participant's torso with Velcro straps and positioned so participants could rest their wrist on the base of the joystick comfortably and manipulate it without moving their distal or proximal arm. Foam pads further supported the arm to avoid additional muscle activity (Fig. 1C). Movement epochs (16 s × 12) alternated with resting epochs (9 s × 11). Trials were blocked by target size, with four trials per movement epoch. Each target size block was presented three times per run. The order of target size blocks was randomized within each run. The participants completed four runs with one hand (right or left) and then four runs with the other hand, in a counterbalanced order across participants. MRI data processing Structural T1-weighted images were processed using the FreeSurfer (FS) toolkit version 6.0 (http://surfer.nmr.mgh.harvard.edu) (Dale et al. 1999;Fischl et al. 1999). Each MR image was intensity corrected, skull stripped, and then automatically segmented into gray and white matter. The segmentations and surfaces were inspected and manually edited for accuracy according to established guidelines (Segonne et al. 2007). A surface-based map of the primary motor cortex (M1) was isolated in its entirety by combining both the anterior (BA4a) and posterior (BA4p) labels generated as part of the default FS pipeline in both hemispheres; these labels have demonstrated robust accuracy in their definition of the motor cortex according to cortical folding patterns (Fischl et al. 2008).
Global intracranial volume was also calculated using the FS estimated Total Intracranial Volume measure (Buckner et al. 2004) and examined as a potential covariate. Experimental design-identifying M1 regions related to the hand motor learning task Functional ROIs reflecting activation by the hand motor learning task were derived from eight runs of the functional imaging data collected as described above. For each hand, data were collapsed across target size and blocks of task performance were compared with rest blocks. All processing of the fMRI data was accomplished using Analysis of Functional NeuroImages (AFNI) (Cox 1996); fMRI preprocessing steps included slice time correction, head motion correction, 12-parameter affine alignment between the structural and functional images, nonlinear warping between the structural image and the MNI152 2009 template in MNI space, smoothing with a FWHM 6.0 mm smoothing kernel, and conversion to percent signal change. The transformations for head motion correction, co-registration, and normalization were concatenated and applied in a single step to the functional data before smoothing to reduce the number of interpolation steps. Following preprocessing, GLM analysis was performed using AFNI's 3dDeconvolve tool. In addition to the left- or right-hand task regressors, six head movement vectors were included as regressors of no interest. Volumes with more than 0.9 mm total head movement were censored. Beta values for the left- and right-hand conditions for each participant were submitted to one-sample t-tests at the group level to determine areas where left- and right-hand activity were significantly greater than rest. Group left- and right-hand masks were created by thresholding these statistical maps at an uncorrected P = 10^−10, a threshold more stringent than required by multiple comparisons correction, but that resulted in a single cluster of activation for each hand regressor centered in the contralateral M1. Left- and right-hand group activation masks were then registered to FS atlas space (MNI 305) and mapped to the cortical surface using embedded FS algorithms. Using the FS M1 label to anatomically constrain the group-level functional activation ROIs to M1, two ROIs (one per hemisphere) were used to extract each individual's cortical volume data for statistical analysis. In each hemisphere a surface-based ROI was computed that only contained overlapping vertices from both the FS M1 label and the respective fMRI left- and right-hand group masks; for example, overlapping vertices in the left hemisphere from M1 and the fMRI group mask (derived from right-hand performance) were isolated and combined into a single ROI labeled "Left Hemi Active-M1". A similar ROI was constructed in the right hemisphere from overlapping vertices in the FS M1 label and fMRI mask derived from left-hand performance labeled "Right Hemi Active-M1" (see Fig. 2 for all mapped ROIs). The resulting group Active-M1 ROIs were mapped to all participants and cortical gray matter volume, calculated as the product of cortical thickness and surface area, from each label was extracted (in mm³) using embedded FS algorithms. Taking this approach, we were able to delineate gray matter volume in M1 areas from both hemispheres that were active when executing the motor learning task.
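The 'overlapping vertices' step amounts to a set intersection over surface vertex indices, and the volume extraction to a thickness-times-area sum over that intersection. The NumPy sketch below uses invented toy arrays to illustrate the operation; it is not the FreeSurfer/AFNI code used in the study.

import numpy as np

# Hypothetical vertex indices for two surface-based labels in one hemisphere.
m1_label_vertices = np.array([101, 102, 103, 104, 105, 106])   # anatomical M1 (BA4a + BA4p)
fmri_mask_vertices = np.array([104, 105, 106, 107, 108])       # group task-versus-rest mask

# "Active-M1" keeps only vertices present in both labels.
active_m1_vertices = np.intersect1d(m1_label_vertices, fmri_mask_vertices)

# Per-participant gray matter volume of the ROI: cortical thickness times the surface
# area assigned to each vertex, summed over the ROI (values invented for illustration).
thickness_mm = {104: 2.4, 105: 2.6, 106: 2.5}
vertex_area_mm2 = {104: 0.9, 105: 1.1, 106: 1.0}
roi_volume_mm3 = sum(thickness_mm[v] * vertex_area_mm2[v] for v in active_m1_vertices)
print(active_m1_vertices, roi_volume_mm3)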
2 Active-M1 ROIs were primarily located in the anterior wall of the central sulcus and posterior aspect of the precentral gyrus, areas referred to as hand area where corticomotor (CM) neurons are found in non-human primates (Strick et al. 2021). Statistical analysis-motor learning task Baseline accuracy was defined as the percentage of hits in the first block of the first run.Final accuracy was defined as the percentage of hits in the last block of the final run (which was either the last block of run 1, 2, or 3 depending on the participant's accuracy in the first run and their gain in skillfulness with subsequent practice).A standardized learning value (SLV) (Wischnewski et al. 2016) was calculated to assess the training-related gain in precision, defined as: Standardized Learning Value = Learning Value * (max possible gain) −1 . Given that a typical learning curve asymptotes near 100% accuracy and that improvement upon high accuracy is more difficult than improvement from low accuracy, the learning value was multiplied by the inverse of the maximum possible gain (Ammons 1947).The maximum possible gain was calculated as the difference between the maximum possible hits and the number of actual hits in the first block of the first run.Repeatedmeasures analysis of variance (RM-ANOVA) models were used to examine the within-subjects effects of target size (S, M, L, XL) and hand (right, left) on accuracy (% hits) and movement time.Paired t-tests were used to compare the baseline skillfulness and SLV between hands.Two-tailed correlation coefficients were generated between baseline skillfulness and the change in skillfulness across the experiment; all data are expressed in mean ± SEM. Statistical analysis-cortical volume of M1 regions related to motor learning task To address the proposed hypothesis that greater gray matter volume within the Active-M1 ROIs relates to greater gain in skill acquisition on a hand motor learning task, a series of Pearson correlation coefficients were calculated.First, to assess the potential confound of age, Pearson correlation coefficients were generated between age and all variables of interest, including baseline accuracy and SLV motor performance for both hands, as well as volume of the Active-M1 ROIs for both hemispheres.To assess the potential confound of global head/brain size, Pearson correlation coefficients were generated between the FS estimated Fig. 2. Group labels for active-M1 ROI in each hemisphere: Visualization of the anatomically defined M1 (outlined in white) atlas-based regions of interest on both pial and inf lated surfaces derived from the FreeSurfer toolkit.Regions within anatomically defined M1 that were active at the group level during the motor learning task, as identified by fMRI (noted in yellow), are defined as "active-M1" ROI and are localized in anterior wall of the central sulcus and posterior aspects of the precentral gyrus included in the anatomically defined hand area.(n = 30). 
Total Intracranial Volume measure and the gray matter volume of the Active-M1 ROIs for both hemispheres.To examine the relationship between Active-M1 gray matter volume and baseline accuracy on the hand motor task, a priori hypothesized Pearson correlation coefficients were calculated between accuracy in each hand relative to the contralateral M1 ROIs (e.g.right hand baseline accuracy with Left Hemi Active-M1 ROI).Similarly, to examine the relationship between Active-M1 gray matter volume and motor skill acquisition gains (SLV), a priori hypothesized Pearson correlation coefficients were calculated between SLV motor performance in each hand relative to the contralateral M1 ROI.Finally, to determine whether any relationships observed above were unique to M1 and not to other primary sensory cortical regions, a "control region" analysis was performed by constructing Pearson correlation coefficients between baseline accuracy/SLV motor performance and cortical volume of the primary auditory cortex (A1) in the contralateral hemisphere.The A1 ROI was defined as the anterior transverse temporal gyrus label (Heschl's gyrus) from a standard FS parcellation scheme (Destrieux et al. 2010).All statistical analyses were accomplished using SPSS v27 (IBM Corp 2020) and R (R Core Team 2017). Participants Twenty healthy right-handed participants (mean age ± SD: 60.1 ± 7.2 years, range 50-74, 13 females), contributed TMS data to Study 2; 12 of the participants also participated in Study 1 (data were collected under the NCT02544503).The remaining eight participants who did not contribute data to Study 1 were recruited using the same recruitment strategies and inclusion criteria (see overview of the studies); their neuroimaging data were only used for inclusion criteria evaluation as it had been collected under a different scan protocol on a different MRI machine (NCT01726218).All participants had the same exposure to the motor learning task. Experimental design-disrupting the retention of training-related improvement in accuracy via rTMS For Study 2, 1 Hz rTMS or sham stimulation was applied to left M1 (LM1) hand area as defined by the hot spot of the targeted wrist extensor muscle, a muscle that is involved in the training task and represented within the hand area (see below) (Siebner et al. 2022) to determine the effects of subthreshold stimulation on pointing task accuracy and motor learning as previously described (Muellbacher et al. 2002;Hadipour-Niktarash et al. 2007;Buetefisch et al. 2011).Brief ly, the subthreshold intensity refers to the percentage of the maximum output of the stimulator that is below the level of a response in the electromyographic recording of a targeted muscle and does not produce a muscle twitch.rTMS applied at similar intensities and frequencies or as a timed single pulse exerts specific effects in M1 (Siebner and Rothwell 2003;Lazzaro et al. 2008) that differ from effects seen when applied more anterior or posterior to M1 (Johansen-Berg and Matthews 2002; Muellbacher et al. 2002;Shadmehr and Krakauer 2008).In two separate sessions, participants performed two runs of the motor learning task described above with each hand.In between the first and second runs, stimulation was applied to the extensor carpi ulnaris (ECU) hot spot of the left M1 using an air-cooled figure-of-8 coil (70-mm wing diameter) or sham air-cooled figureof-8 coil (70-mm wing diameter) connected to a Magstim Rapid 2 (Magstim Company, UK) (Buetefisch et al. 
2011).The rTMS was applied at a 1 Hz frequency for a total of 900 pulses for 15 min to the LM1 at subthreshold intensity of 90% resting motor threshold.rTMS at subthreshold intensity reduces the spread of the rTMSinduced electric field and avoids muscle twitches during rTMS that could modulate central processing via sensory afferents and compromise blinding of participants with respect to the sham stimulation.In the sham condition, the coil is discharged at a constant minimal current regardless of the intensity set for the stimulator.The sham stimulation induces an electric field amplitude roughly 5% that of real TMS with a similar spatial extent in cortex (Opitz et al. 2015).In our study, the acoustic by-product associated with discharging the coil was matched in the sham and rTMS condition which helps with the blinding of the participants.The order of the sham and rTMS sessions was counterbalanced across participants; however, all participants from Study 1 completed both the hand motor learning task and structural MRI session prior to both Study 2 sessions.As part of a separate research question, participants also completed a third session with stimulation at another strength; that data are not reported here.Participants were blinded to the type of stimulation and told they would receive two different stimulation protocols.The subthreshold intensity rTMS, the identical appearance of the sham and rTMS coil, and the matched auditory noise ensured the blinding of participants.Participants remained seated in a dental chair throughout the procedure.Surface EMG was recorded from the right ECU muscle throughout the stimulation process and monitored at a sensitivity of 0.01 mV/div to ensure muscle relaxation.This sensitivity was also used to confirm subthreshold stimulation.At the beginning of each session, the hot spot was defined as the location of the largest MEP (motor evoked potential) amplitude in response to a single TMS pulse at a low intensity.The location was verified by absence of MEPs in locations within 1 mm.This was accomplished by marking the preliminary hotspot on the reconstructed MRI of the participants brain using a frameless neuronavigation system (BrainSight, Rogue Research, Montreal, Canada) and then systematically probe the locations around this preliminary hotspot.If other locations produced larger responses, another preliminary hotspot was defined and tested in a similar manner.Once the hotspot was verified, it was marked as the definite hotspot for this person to ensure accuracy and precision in coil position for targeting the M1 ECU hot spot during all measurements.The threshold in this hotspot was determined using PEST (Parameter Estimation by Sequential Testing) (Awiszus 2003). Statistical analysis-rTMS related effects on retention of training-related improvement in accuracy Motor learning task accuracy was calculated for each participant for each combination of hand (contralateral/right or ipsilateral/left), intervention (rTMS 90% or sham), and time (preintervention or postintervention) according to the criteria described above.A mixed model ANOVA was calculated to determine the main effects of hand, intervention and time on accuracy, with subsequent ANOVAs or paired t-tests calculated to explore interaction effects among these variables.Greenhouse-Geisser corrections were applied to adjust for lack of sphericity where appropriate. 
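For concreteness, the sketch below shows how the 2 × 2 × 2 repeated-measures design described above (hand × intervention × time on accuracy) could be run in Python with statsmodels. It is an illustration only, not the authors' analysis script; the long-format table and its column names (subject, hand, intervention, time, accuracy) are assumptions, and the model expects one accuracy value per subject and cell.

```python
# Minimal sketch (assumed data layout, not the authors' code) of the
# hand x intervention x time repeated-measures ANOVA on accuracy.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# long format: one row per subject x hand x intervention x time
df = pd.read_csv("accuracy_long.csv")   # columns: subject, hand, intervention, time, accuracy

res = AnovaRM(
    data=df,
    depvar="accuracy",
    subject="subject",
    within=["hand", "intervention", "time"],
).fit()
print(res)   # F and p values for main effects and interactions
```

A significant time × intervention interaction would then be unpacked with the separate per-intervention ANOVAs and paired t-tests described above.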
Study 1 Participants met all inclusion criteria and completed all aspects of the study with usable imaging data for analysis.No outliers (> ±3 SD) or missing data were observed among any variables. See Table 1 for demographic information on participants from both studies, and Table 2 for motor learning and cortical volume characteristics for Study 1. Pearson correlation coefficients between age and all brain/behavior variables revealed a significant relationship between age and baseline accuracy for the right hand (r = −0.670,P < 0.001) but not the left (r = −0.348,P = 0.060).Age did not correlate with SLV in either the right (r = −0.263,P = 0. As indicated in Fig. 3A-B, participants' accuracy improved and movement time decreased with practice.Paired t-test results show that accuracy increased significantly between the first and last blocks (left hand t 29 = 7.36, P < 0.001; right hand t 29 = 9.74, P < 0.001) and movement time significantly decreased (left hand t 29 = 5.06, P < 0.001; right hand t 29 = 4.03, P < 0.001), indicating that motor learning occurred.Participants with lower baseline accuracy had a higher maximum possible gain in precision across trials and did show larger gains in precision across the experiment.In both hands, baseline accuracy was significantly correlated with the amount of change in accuracy between the first and last blocks (left hand r = 0.699, P < 0.001; right hand r = 0.628, P < 0.001), indicating that participants who began with lower baseline accuracy experienced greater gains in precision across training.Overall, these results support the use of SLV as a measure of motor skill learning that accounts for differences in baseline accuracy and maximum possible gain in precision. Relationships between M1 cortical volume and motor learning Pearson correlation coefficients between estimated Total Intracranial Volume and volume of the Active-M1 ROIs yielded no significant results, indicating that variability in global head size did not relate to variability in gray matter volume of Active-M1 ROIs.As stated above, our primary goal was to determine whether the gray matter volume in contralateral Active-M1 ROIs, measured prior to motor learning, contributed to subsequent learning of the hand motor task.Pearson coefficients revealed that right hand SLV was significantly correlated with gray matter volume of the Left Hemi Active-M1 in the positive direction (r = 0.410, P = 0.025) (see Fig. 4) but right hand baseline accuracy was not (r = 0.264, P = 0.158).For the left hand, neither SLV (r = −0.26,P = 0.892) nor baseline accuracy (r = −0.067,P = 0.724) were significantly correlated with gray matter volume in the right hemisphere Active-M1 ROI. To determine whether the observed relationships between M1 volumes and motor hand performance were specific to M1, primary auditory cortex (A1) was selected as a control cortical region given it is not directly involved in motor learning.Pearson correlation coefficients between A1 gray matter volume in each hemisphere with baseline accuracy and SLV in the corresponding contralateral hand were all nonsignificant: Left Hemi A1 with right hand baseline accuracy (r = 0.063, P = 0.740) and SLV (r = −0.080,P = 0.674); Right Hemi A1 with left hand baseline accuracy (r = 0.060, P = 0.755) and SLV (r = −0.032,P = 0.866). Study 2 Twenty right-handed participants completed Left M1 rTMS protocols for Study 2. 
Data from the sham session were unavailable for two participants: one participant reported a migraine headache after the first rTMS session and was excluded from participating in additional sessions, and one participant had a scheduling conflict. Twelve Study 2 participants also participated in Study 1. The imaging data of the remaining eight participants were collected on a different scanner under a different imaging protocol not optimized for cortical volumetric analysis. Their data were therefore not considered for Study 1, but they performed the same motor learning task as the other 12 participants.

Functional relevance of the left M1 hand area to motor learning

Subthreshold stimulation of left M1 at 90% resting motor threshold (RMT, rTMS90) disrupted motor learning compared to sham stimulation (Fig. 5). A 2 (hand: right/left) × 2 (intervention: rTMS90/sham) × 2 (time: preintervention/postintervention) ANOVA showed a significant main effect of hand (F(1,17) = 23.4, P < 0.001, η²G = 0.087) and a significant time by intervention interaction (F(1,17) = 15.2, P < 0.001, η²G = 0.016). No other main effects or interactions reached significance. To better understand the time by intervention interaction, we performed a two-way ANOVA for each intervention separately. When sham stimulation was applied during the intervention period, there was a significant effect of time (F(1,17) = 9.66, P < 0.01, η²G = 0.041), indicating that accuracy differed between the preintervention (73.7 ± 2.2%) and postintervention (78.3 ± 1.7%) assessments. There was also a significant effect of hand (F(1,17) = 14.6, P < 0.001, η²G = 0.102), such that participants were more accurate with the right hand than the left hand; the time × hand interaction was not significant. For the rTMS90 condition, only a significant main effect of hand was seen (F(1,19) = 15.9, P < 0.001, η²G = 0.067). As in the sham condition, participants were more accurate with the right hand than the left hand, but there was no effect of time. Performance before (76.8 ± 1.6%) and after (76.4 ± 1.7%) the application of rTMS at 90% RMT did not significantly differ (F(1,19) < 1, η²G = 0.0004).

Discussion

Here, we report that a combined multimodal approach can be used to delineate task-relevant M1 structure and that greater gray matter volume in this task-relevant M1 structure is directly and causally related to subsequent gains in motor skill learning of the corresponding hand. This demonstrates that a functionally relevant structure in the healthy mature human brain determines the capacity for behavioral changes. Our specific approach of identifying a region of M1 that was active during execution of the motor skill learning task using fMRI ("Active-M1") is unique. We show that greater gray matter volume within a task-relevant M1 structure is related to better learning of a hand motor task in the healthy mature human brain. As expected, this identified M1 area (Active-M1) was in the anterior wall of the central sulcus and posterior aspects of the precentral gyrus and included the anatomically defined hand area in the omega-shaped hand knob (Yousry et al. 1997).
In studies of non-human primates, this M1 area also contains the CM neurons with direct monosynaptic output from M1 to the spinal alpha motor neurons on which skilled hand movements depend (Rathelot and Strick 2006). We therefore demonstrate that the task-based fMRI labeling of the atlas-based M1 area ("Active-M1") and the neurophysiologically TMS-defined hotspot (Siebner et al. 2022) co-localize in the M1 hand area. Performance on the skilled motor learning task was not related to the cortical volume of A1, a brain area not involved in motor execution or motor learning, which supports the specificity of its relationship to M1. Furthermore, the finding that learning with the left hand did not significantly relate to the gray matter volume of Active-M1 in the right hemisphere confirms the specificity of its relationship to the dominant hemisphere. These findings are in line with evidence that left M1 is involved when either the left or the right hand learns a serial reaction task (Grafton et al. 1995, 2002), which is consistent with our findings of disrupted motor learning for both hands following rTMS of the left M1 (see below). Anatomical asymmetry of the hand area in M1 has been demonstrated and related to handedness using MRI of the brain (Amunts et al. 1996, 1997, 2000) and cytoarchitectonic studies of histological sections of postmortem brains (Amunts et al. 1996). Finally, disruption of learning was only seen when the M1 hand area was stimulated with rTMS but not with sham, which provides evidence for a causal contribution of M1 to the learning task.

Specifically, we assessed whether inter-individual differences in the gray matter volume of Active-M1 were related to inter-individual differences in motor learning. We observed a positive relationship between the amount of motor learning with the dominant right hand and the gray matter volume of left hemisphere Active-M1, a relationship not observed between motor learning in the non-dominant left hand and gray matter volume in right hemisphere Active-M1. To our knowledge, this is the first demonstration of a link between M1 gray matter volume in a region specifically delineated as task-relevant using fMRI and subsequent motor learning in the healthy mature human brain. Some early work described an increased size of the dorsal intrasulcal portion of the precentral gyrus, thought to represent the hand area, in skilled keyboard players versus control participants, but the actual involvement of this M1 area in the task was not demonstrated (Amunts et al. 1997). In addition, using voxel-based morphometry in a time-series design, Wenger and colleagues (Wenger et al. 2016) found expansion of the right M1 in response to 7 weeks of a left-hand writing task, which normalized despite continued practice and proficiency; however, the finding was not localized to a particular M1 area, and the functional relevance is more indirect as involvement in execution of the task was not demonstrated (Huang et al. 2013; Lissek et al. 2013; Hamano et al. 2021).
Importantly, we provide evidence for a causal link between the task relevant left hemisphere M1 area (left Active-M1 ROI, which is located in the M1 hand area) and learning with the corresponding right hand by disrupting learning when rTMS is applied to the left M1 hand area.When rTMS is applied to M1 hand area, at the frequencies and intensities employed in the current study, it has been demonstrated to disrupt learning of a hand motor task likely by transiently disrupting local processes that are related to learning (Muellbacher et al. 2002).The effect of rTMS applied at similar intensities and frequencies or as a timed single pulse exerts inhibitory or excitatory effects that are specific to M1 (Siebner and Rothwell 2003;Lazzaro et al. 2008) and differ from effects seen when applied more anterior to premotor cortex (PMC) or posterior to primary somatosensory cortex (S1) (Johansen-Berg and Matthews 2002; Muellbacher et al. 2002;Hadipour-Niktarash et al. 2007). The effect of left M1 rTMS on early motor consolidation of ipsilateral left-hand performance has not been tested before.However, our finding of a bilateral effect with left M1 rTMS is consistent with its reported bilateral effect on M1 excitability (Fitzgerald et al. 2006) mediated through homotopic connections between M1 in both hemispheres (Ferbert et al. 1992;Lazzaro et al. 1999).The results could also be explained within the framework of hemispheric specialization where each hemisphere contributes to the control of hand movements and motor learning/adaptation.Specifically, in this model, the left hemisphere provides predictive control mechanisms specifying aspects such as movement direction for movements with both the contralateral and the ipsilateral arms (see Mutha et al. 2012).Accordingly, learning a visuomotor task such as the pointing task in this study is expected to demonstrate improved performance in accuracy for both hands through processes in left M1.The similarity of the metrics of motor learning between the left and right hand in our sample would suggest that the behavioral acquisition and trajectory of motor learning for this particular task occurs irrespective of the hand as the control mechanisms specifying these aspects resides in left M1.It is, therefore, conceivable that left M1 disruption with rTMS affects motor consolidation for improved precision of both hands. While movement representation in M1 is distributed and overlapping, it is primarily contained within the border of major body parts such as face, arm, trunk, and leg, and short-term and longterm M1 reorganization is marked by dynamically shifting borders between neighboring representations without the involvement of nonadjacent M1 regions (Nudo et al. 1996;Sanes and Donoghue 2000;Rathelot and Strick 2006).More specifically, there is evidence that larger baseline intracortical microstimulation (ICMS) maps correlated with better accuracy at baseline and after training of a skilled hand motor task.For example, the monkey with the smallest baseline distal ICMS map had the poorest baseline performance on the pellet-retrieval task, while the monkey with the largest baseline distal ICMS map had the best baseline performance on the skilled hand motor task.This also was true of posttraining performance (Nudo et al. 
1996). While anecdotal, this finding is in line with the results of our fMRI experiment, where task-related activity was in the M1 hand area, and is also consistent with the notion that M1 is crucially involved in executing and learning skillful hand and finger movements.

Rodent studies offer compelling evidence regarding the mechanisms underlying motor learning and show that M1 contains the neuronal substrate to support learning-related functional and structural changes in M1 (Sanes and Donoghue 2000). These studies demonstrate that the acquisition of motor skills in adult rodents involves activity-dependent plasticity that is reflected in both neuronal changes and reorganization of movement representations within M1 (Rioult-Pedotti et al. 1998; Kleim et al. 2002a; Monfils et al. 2005; Jones and Jefferson 2011), including the formation and strengthening of synapses, as well as changes in dendritic branching, axonal morphology, and angiogenesis (Kleim et al. 1998; Kleim et al. 2002b; Kleim et al. 2002a). While our approach does not allow anatomical specification at the micro- and ultrastructural level, one possible explanation is that greater cortical volume results from a greater abundance of vascular, neuronal, or glial structures, thereby providing critical anatomical substrates to support superior performance in skilled motor learning (Asan et al. 2021). An alternative explanation is that the active M1 network expanded into areas that were formerly not active. The finding from the second study, where rTMS to M1, but not sham stimulation, disrupted the retention of practice-related improvements in accuracy, provides a causal link between the identified M1 structure and motor learning. Our results confirm earlier findings in which the disruptive effect of rTMS on retention of the behavioral improvement was only present when applied to M1, not to other brain areas, and only when applied immediately after motor practice, not late after practice (Muellbacher et al. 2002). Together, the results support the notion of rTMS-related interference with processes in M1 actively involved in early consolidation of gains in skillfulness (i.e., accuracy). These processes involve LTP-like mechanisms, as demonstrated in rodents (Rioult-Pedotti et al. 1998; Rioult-Pedotti et al. 2000; Sanes and Donoghue 2000), non-human primates (Plautz et al. 2000), and humans (Bütefisch et al. 2000).

Previous studies examining relationships between performance and brain structure note that an increase in gray matter volume was related to the acquisition of specific skills, such as professional keyboard playing, juggling, and visuospatial navigation among taxi drivers in London (Maguire et al. 2000; Draganski et al. 2004, 2006; Woollett and Maguire 2011). Given that exceptional skillfulness is the result of extensive training, the strong relationships between brain structure and behavior observed in these studies may be explained by the impact of significant practice. However, it is noteworthy that the participants in our study did not demonstrate extraordinary performance in a specific skill, nor did they engage in extensive practice. This suggests that even normal-range motor skill performance and learning can be determined by inter-individual differences in M1 volume in "everyday" individuals (Bütefisch et al. 2000; Gaser and Schlaug 2003), consistent with previously reported associations between gray matter and learning (Kanai and Rees 2011).
In conclusion, our study demonstrates that primary motor cortex structure, as measured by gray matter volume, determines gains in precision with hand motor practice, suggesting that structural brain resources determine the capacity for behavioral changes in the healthy mature human brain.Future research is necessary to define the underlying cellular, anatomical, and ultrastructural characteristics that determine the relationship between cortical volume in M1 and gain in skillfulness.Furthermore, given the older age of our sample, determination of whether our findings are consistent across the age spectrum would require examination in younger samples. Fig. 1 . Fig. 1.Task design.A) each trial began with a central square cuing the upcoming target size.After the cue, the target appeared in one of four possible locations (300 • , 330• , 30• , 60 • relative to vertical meridian) after which the participants were instructed to use the joystick to move the cursor to the target as quickly and accurately as possible.If the cursor began at the starting point and was inside the target 2 s after target presentation, participants received "hit" feedback; otherwise, they received "miss" feedback.B) Four different target sizes were used to vary demand on precision.C) during fMRI acquisition, participants used an MRI-compatible joystick secured on their body to complete the task with pads placed to support the arm and minimize elbow and shoulder movement.EMG electrodes were placed on participants' right-and left-extensor carpi ulnaris (ECU) muscles to monitor unimanual performance. Fig. 3 . Fig. 3. Performance on the motor learning task (n = 30).Task performance for each block (collapsed across target size) by a) mean accuracy (proportion hits), and B) movement time (ms), plotted as a function of the number of training blocks.Error bars indicate standard error.As participants completed different numbers of runs (each containing three blocks) depending on their performance, the final block of each participant served as the reference point.The x-axis is structured in reference to the final training block, with preceding training blocks indicated as negative numbers in reference to the final target block.Note the number of participants at each data point is different due to the differing numbers of runs needed to reach criterion (left hand: Final to −2, n = 30; −3 to −5, n = 25; −6 to −8, n = 6; right hand: Final to −2, n = 30; −3 to −5, n = 19; −6 to −8, n = 11). Fig. 4 . Fig. 4. Relationship between skilled motor learning performance (SLV = standardized learning value) of the right hand and gray matter volume of the active-M1 region of interest (ROI) in the left hemisphere.Right hand SLV is plotted on the y-axis against left hemisphere gray matter volume of active-M1 ROI along the x-axis.This correlation was statistically significant in the positive direction (n = 30). Fig. 5 . Fig. 5. Effect of the rTMS intervention on left-and right-hand motor performance (Study 2, n = 20).Performance on the motor learning task indexed by accuracy (proportion hits) is plotted for preintervention (pre) and postintervention (post) timepoints for the sham and rTMS90% conditions.Error bars indicate standard error.Participants' accuracy improved between preintervention and postintervention testing for sham stimulation but not rTMS90%. Table 1 . Demographic and clinical characteristics of Studies 1 and 2 samples combined. Table 2 . Motor learning task and motor cortex characteristics of Study 1 sample. 
a Proportion of hits in the first block of run1.
2024-05-22T06:17:49.480Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "df88f1ade1a7358bedb1b5f8c955ce9335ef0d91", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c68e2b71a950c237a3aff3ade7631a66087a34a6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
14538
pes2o/s2orc
v3-fos-license
A comparative study of Image Region-Based Segmentation Algorithms Image segmentation has recently become an essential step in image processing as it mainly conditions the interpretation which is done afterwards. It is still difficult to justify the accuracy of a segmentation algorithm, regardless of the nature of the treated image. In this paper we perform an objective comparison of region-based segmentation techniques such as supervised and unsupervised deterministic classification, non-parametric and parametric probabilistic classification. Eight methods among the well-known and used in the scientific community have been selected and compared. The Martin's(GCE, LCE), probabilistic Rand Index (RI), Variation of Information (VI) and Boundary Displacement Error (BDE) criteria are used to evaluate the performance of these algorithms on Magnetic Resonance (MR) brain images, synthetic MR image, and synthetic images. MR brain image are composed of the gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) and others, and the synthetic MR image composed of the same for real image and the plus edema, and the tumor. Results show that segmentation is an image dependent process and that some of the evaluated methods are well suited for a better segmentation. Keywords—Evaluation criteria; Martin's; Rand Index; Image Segmentation; Magnetic resonance image. I. INTRODUCTION I. INTRODUCTION TRegion-based segmentation methods are powerful tools for objet detection and recognition.These methods aim at differentiating regions of interest (objects / background).Their objective is to divide the image into homogeneous zones to separate the different entities in the image.This is usually a first step in a more complex treatment chain involving pattern recognition.For example in medical imaging, segmentation is very important for representation and visualization as well as for the extraction of parameters and the analysis of images.Region based segmentation is a specific approach in which one seeks to construct surfaces by combining neighboring pixels according to a criterion of homogeneity.The nature of the considered images and the objective of the segmentation being multiple, there is no unique technique for image segmentation and segmenting an image into meaningful regions remains a real challenge [1].According to Cocquerez et al. [2], the choice of a technique is related to the texture which is one of the important characteristics of an image.The purpose for based-region segmentation is to identify coherent regions of an image. In order to compare the suitability of a segmentation method, we propose a comparative study between regions based segmentation techniques.To correctly validate a result of segmentation of medical images, it is necessary to have the ground truth, which is quite difficult in this case of real images.The quality of imagery and the requirement of accurate segmentation are the crucial aspect in characterizing the performance of segmentation algorithms in brain images [3], [4].Many image processing techniques have been proposed for brain MRI segmentation, most notably thresholding [5], region-growing [6], classifying [7], clustering [8], modelling [9], neural network based [10] and others. 
As can be seen in Fig. 1, region-based segmentation methods can be grouped into two main families: deterministic classification methods and probabilistic classification methods. In the same way, each of these families can be subdivided into two groups: the deterministic classification family is composed of unsupervised and supervised methods, whereas the probabilistic classification family contains parametric and non-parametric methods. In this paper, we present a comparative study of clustering-based segmentation methods on synthetic and MR images. This paper is mainly devoted to studying situations in which different methods are used for image segmentation. Its principal purpose is to use five criteria and show their suitability for evaluating unsupervised image segmentation. The performance of each technique is evaluated using the Martin criteria [11], the Probabilistic Rand Index [12], the Variation of Information [47], and the Boundary Displacement Error [53]. These measures compute the degree of consistency between the regions produced by two segmentations. The evaluation of a segmentation algorithm consists in measuring the similarity between the reference segmentation and the segmentation obtained by the algorithm. The choice of an accurate measure is quite critical in order to provide a strict evaluation and reflect the real quality of an automatic segmentation in comparison with a manual one. The remainder of the paper is organized as follows: Section 2 presents the different region-based segmentation methods used for MR image analysis. The evaluation criteria are described in Section 3. Section 4 describes the material and data used in this study. Experimental results on synthetic and real images are presented in Section 5. Finally, a discussion concludes this paper in Section 6.

II. REGION-BASED SEGMENTATION TECHNIQUES

A large number of segmentation approaches have been proposed in the literature [13, 14, 15, 16]. A good survey about their evaluation can be found in [17], [18]. A list of unsupervised, supervised, and non-parametric region-based segmentation algorithms is presented in this section: Mean Shift (MS), Fuzzy C-Means (FCM), K-Means, Expectation Maximization (EM), Spatial Constraint Fuzzy C-Means (SCFCM), Markov Random Fields (MRF), Pulse Coupled Neural Network (PCNN), and Support Vector Machine (SVM). In the next subsections we will introduce briefly each of these techniques.

A. K-Means

The K-Means algorithm is an unsupervised clustering algorithm that classifies the input data points into multiple classes based on their inherent distance from each other. The iterative K-Means clustering algorithm was first proposed by MacQueen [19]. The algorithm aims at partitioning the data set, consisting of ℓ expression patterns {x1, ..., xℓ} in an n-dimensional space, into k disjoint clusters, such that the expression patterns in each cluster are more similar to each other than to the expression patterns in other clusters [20]. There are two popular partitional clustering strategies: square-error and mixture modeling. The sum of the squared Euclidean distances between the samples in a cluster and the cluster center is called the within-cluster variation. K-Means is widely used in many applications such as data extraction and image segmentation [21]. The K-Means method is an iterative algorithm that minimizes the sum of distances between each object and its cluster centroid.
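A minimal NumPy sketch of K-Means applied to gray-level image segmentation is given below. It illustrates the iterative assign-and-update scheme just described and is not the implementation evaluated in this paper; the single-feature (intensity-only) setting and the parameter names are assumptions made for clarity.

```python
# Minimal K-Means sketch for gray-level image segmentation (illustration only).
import numpy as np

def kmeans_segment(image, k=4, n_iter=100, seed=0):
    x = image.reshape(-1, 1).astype(float)                  # one feature: gray value
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]       # random initial centroids
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x - centers.T), axis=1)   # nearest centroid per pixel
        new_centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):               # centroids stopped moving
            break
        centers = new_centers
    return labels.reshape(image.shape)
```

In practice the feature vector could also include spatial coordinates or texture descriptors, but intensity alone is enough to show the within-cluster variation being minimized.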
B. Fuzzy C-Means (FCM)

Fuzzy C-Means (FCM) is an unsupervised fuzzy clustering algorithm [22]. Derived from the C-means algorithm [23], it introduces the concept of fuzzy sets in the definition of classes: each point in the data set belongs to each cluster with a certain degree, and all clusters are characterized by their center of gravity. The FCM clustering algorithm was first suggested by Dunn [24] and later improved by Bezdek [25]. The FCM method uses a fuzzy membership that assigns a degree of membership to each class by iteratively updating the cluster centers and the membership degrees of each data point. Each pixel is assigned to the cluster for which its membership degree is highest. A novel approach called enhanced possibilistic Fuzzy C-Means clustering has been proposed for segmenting MR brain images into different tissue types, on both normal and tumor-affected pathological brain images. FCM methods have been proposed for the segmentation of MR images [26, 27], for the segmentation of major tissues in [28, 29], and for possible tumors on T1-weighted volumes. FCM is often used in medical image segmentation [30, 31]. Chen et al. [32] have proposed an algorithm based on FCM for the correction of intensity inhomogeneity and for the segmentation of MRI images.

C. Fuzzy C-Means algorithm with Spatial Constraint (SCFCM)

The Fuzzy C-Means algorithm with Spatial Constraint (SCFCM) is based on the FCM clustering algorithm described above; two kinds of image information are used, the gray value and the spatially distributed structure. Based on the relevance of nearby pixels, the neighbors in the set should be similar in feature value. Its effectiveness comes not only from the introduction of fuzziness for the belongingness of each pixel but also from the exploitation of spatial contextual information. The SCFCM clustering algorithm preserves the homogeneity of the regions better than existing FCM techniques, which often have difficulties when tissues have overlapping intensities. In order to reduce the noise effect during segmentation, the proposed method incorporates both the local spatial context and the non-local information into the standard FCM clustering algorithm, using a novel dissimilarity index in place of the usual metric distance.

D. Expectation Maximization (EM)

Expectation Maximization (EM) is one of the most common algorithms used for density estimation of data points in an unsupervised setting. The EM algorithm [33] is used to estimate the parameters of the underlying mixture model; the resulting pixel-cluster memberships provide a segmentation of the image. The EM algorithm can be considered as a variant of the K-Means algorithm where the membership of any given point to the clusters is not complete and can be fractional. An EM algorithm was proposed in [34] to model the inhomogeneities as a bias field of the image logarithm. This algorithm has been applied to the segmentation of brain MR images [35]. According to [36], the EM algorithm has demonstrated greater sensitivity to initialization than the K-Means or FCM algorithms.

E. Mean Shift (MS)

The Mean Shift (MS) [37] algorithm clusters an n-dimensional data set by associating each point with a peak of the data set's probability density. For each point, Mean Shift computes its associated peak by first defining a spherical window of radius r at the data point and computing the mean of the points that lie within the window. At each iteration the window shifts to a more densely populated portion of the data set until a peak is reached, where the data are equally distributed in the window. MS was successfully applied by Mayer et al. [38] to clustering, segmentation, and filtering of natural resources in 2D images [39], using an adaptive paradigm to segment brain MR images.
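As an illustration of the fuzzy membership and centroid updates described in section B above, the following sketch applies FCM to pixel gray values. It is not the implementation benchmarked in this study; the fuzziness exponent m = 2 and the intensity-only feature are the usual default assumptions.

```python
# Minimal FCM sketch on gray values (illustration only; m > 1 is the fuzziness exponent).
import numpy as np

def fcm_segment(image, c=4, m=2.0, n_iter=100, eps=1e-6, seed=0):
    x = image.reshape(-1).astype(float)                    # (N,) gray values
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0, keepdims=True)                      # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)                # (c,) membership-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + eps    # (c, N) distances to centers
        # standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1))
        u_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
        if np.max(np.abs(u_new - u)) < 1e-4:               # memberships have converged
            u = u_new
            break
        u = u_new
    return np.argmax(u, axis=0).reshape(image.shape)       # hard labels: highest membership
```

The SCFCM variant described above would additionally weight the memberships by information pooled from each pixel's spatial neighborhood.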
F. Markov Random Field (MRF)

Markov Random Field (MRF) models are used for the restoration and segmentation of digital images. They can make up for deficiencies in the observed information by adding a priori knowledge to the image interpretation process, in the form of models of spatial interaction between neighboring pixels. Hence, the classification of a particular pixel is based not only on the intensity of that pixel, but also on the classification of neighboring pixels. The goal of segmentation is to estimate the correct label for each site. The segmentation is obtained by classifying the pixels into different pixel classes, and these classes are represented by multivariate Gaussian distributions. As noted in several references, this can be viewed as a particular model selection problem, and different techniques have been proposed in the classical HMF case [40]. MRFs have been used for brain image segmentation by modeling the probability distribution of the label of a voxel jointly with the labels of a neighborhood of that voxel [41].

G. Support Vector Machine (SVM)

The Support Vector Machine (SVM) is a learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. SVMs are a set of supervised learning techniques for solving problems of discrimination and regression, and are particularly adapted to data of very high dimension [42]. The SVM segmentation procedure is as follows: one first specifies a small set of training pixels, such as a small part of an object and a small part of the background, as the clues. Then, a fast SVM is applied to train the classifiers based on the training pixels. Finally, the remaining image, which is viewed as the test set, is subdivided into several regions by the classifier. A comparison between a segmentation method with SVM and FCM is presented in [43].

H. The Pulse-Coupled Neural Network (PCNN)

The Pulse-Coupled Neural Network (PCNN) is a two-dimensional non-training neural network in which each neuron in the network corresponds to one pixel in an input image. The neuron receives its input as an external stimulus. These stimuli are combined in an internal activation system and are accumulated until they exceed a dynamic threshold, which results in a pulse output. Through an iterative process, the algorithm produces a temporal series of binary images as outputs. The algorithm is based on neurophysiologic models evolving from studies of small mammals. Depending on time as well as on the parameters, this dynamic output contains information which makes it possible to detect edges, perform segmentation, identify textures, and perform other feature extractions. For the PCNN, the neurons associated with each group of spatially connected pixels with similar intensities tend to pulse together [44]. This is the basic principle of PCNN-based segmentation. In fact, there are many approaches for image segmentation with the PCNN. Generally, all of these segmentation methods can be classified into two kinds of schemes: common image segmentation and automatic image segmentation.
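The sketch below illustrates the MRF idea of section F with a simplified, synchronous variant of iterated conditional modes (ICM): each pixel label trades off a Gaussian data term against agreement with its 4-neighborhood. The class means and standard deviations and the smoothness weight beta are assumed to be given (for example, from a K-Means initialization); this is an illustration, not the HMF implementation cited above.

```python
# Simplified synchronous ICM sketch for MRF-style labeling (illustration only).
import numpy as np

def icm_segment(image, means, stds, beta=1.0, n_iter=5):
    img = image.astype(float)
    k = len(means)
    # data term: -log Gaussian likelihood of every class at every pixel -> (H, W, k)
    data = np.stack([0.5 * ((img - m) / s) ** 2 + np.log(s)
                     for m, s in zip(means, stds)], axis=-1)
    labels = data.argmin(axis=-1)                            # maximum-likelihood start
    for _ in range(n_iter):
        energy = data.copy()
        for lab in range(k):
            same = (labels == lab).astype(float)
            agree = np.zeros_like(same)                      # 4-neighbour votes for `lab`
            agree[1:, :] += same[:-1, :]; agree[:-1, :] += same[1:, :]
            agree[:, 1:] += same[:, :-1]; agree[:, :-1] += same[:, 1:]
            energy[..., lab] -= beta * agree                 # Potts smoothness reward
        labels = energy.argmin(axis=-1)                      # synchronous relabeling
    return labels
```

True ICM visits pixels one at a time; the synchronous update is kept here only to make the energy trade-off easy to read.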
III. EVALUATION CRITERIA

The goal of this study is to perform a quantitative comparison between an automatic segmentation and a set of ground truth segmentations (references). We use the same methodology reported previously, together with an evaluation metric for image segmentation of multiple objects [45], where a quantitative predictive performance evaluation using full-reference image quality assessment metrics was conducted. In this section we present five criteria: the Probabilistic Rand Index, the Global Consistency Error, the Local Consistency Error, the Boundary Displacement Error, and the Variation of Information.

A. The Probabilistic Rand Index (PRI)

In the literature there are many nonparametric measures, such as Jaccard's index and the Fowlkes–Mallows index [46], which work by counting pairs of pixels that have compatible label relationships between the two segmentations to be compared. We consider a reference image and a segmented image, respectively S1 and S2, of N points X = {x1, x2, ..., xN}, with labels {l_i^S1} and {l_i^S2} assigned to each point xi. The Rand Index can be computed as the fraction of pairs of points having a compatible label relationship in S1 and S2, and can be defined as:

R(S1, S2) = (1 / C(N, 2)) Σ_{i<j} [ I(l_i^S1 = l_j^S1 and l_i^S2 = l_j^S2) + I(l_i^S1 ≠ l_j^S1 and l_i^S2 ≠ l_j^S2) ]    (1)

where I is the indicator (identity) function and the denominator C(N, 2) = N(N − 1)/2 is the number of possible unique pairs among the N data points. This gives a measure of similarity ranging from 1, when the reference and segmented images are identical, to 0 otherwise. We first outline a generalization of the Rand Index, termed the Probabilistic Rand (PR) index, previously introduced in [47]. The PR index allows comparison of a test segmentation with multiple ground-truth images through a soft, nonuniform weighting of pixel pairs as a function of the variability in the ground-truth set. The Rand index [47] counts the fraction of pairs of pixels whose labelings are consistent between the computed segmentation and the ground truth. This quantitative measure is easily extended to the Probabilistic Rand Index (PRI) [48] by averaging the result across all human segmentations of a given image. Consider a set of manually segmented (ground truth) images {S1, S2, ..., SK} corresponding to an image X = {x1, x2, ..., xi, ..., xN}, where the subscript indexes one of the N pixels. Let Stest be the segmentation that is to be compared with the manually labeled set.

B. Martin Evaluation Criteria

Martin et al. [49] proposed two error measures to quantify the consistency between image segmentations of differing granularities, and used them to compare the results of algorithms to a database of manually segmented images. The Martin similarity index, which outperforms the others in terms of properties and discriminative power, is employed here to compare the different region-based segmentation methods. The role of the test is to assess the quality of segmentation by transforming the measurements into a mathematical function called a test. These criteria may be a test of homogeneity of a set of points, a test of similarity, or any other statistical test.
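The Rand index defined above can be computed without enumerating all pixel pairs by using the standard contingency-table identity. The sketch below is an illustration of that computation, not the evaluation code used in this study.

```python
# Rand index between two label maps via pair counting (illustration only).
import numpy as np

def rand_index(s1, s2):
    s1, s2 = np.asarray(s1).ravel(), np.asarray(s2).ravel()
    n = s1.size
    # joint label counts (contingency table entries) and per-segmentation counts
    pairs = np.stack([s1, s2], axis=1)
    _, joint = np.unique(pairs, axis=0, return_counts=True)
    _, c1 = np.unique(s1, return_counts=True)
    _, c2 = np.unique(s2, return_counts=True)
    comb2 = lambda v: (v * (v - 1) / 2).sum()               # number of pairs inside each group
    total = n * (n - 1) / 2
    agree_same = comb2(joint)                               # pairs grouped together in both
    agree_diff = total + comb2(joint) - comb2(c1) - comb2(c2)   # pairs separated in both
    return (agree_same + agree_diff) / total
```

Averaging this score of a test segmentation against each manual segmentation in the ground-truth set gives the simple (unweighted) form of the PRI described above.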
Martin et al. [50] proposed an interesting error measure which takes two segmentations S1 and S2 as input and produces a real-valued output in the range [0, 1]: the Martin distance, where 0 signifies no error and 1 the worst segmentation. The corresponding similarity index uses the inverse scale, so 1 signifies no error and 0 the worst segmentation. The measure is shown to be effective for qualitative similarity comparison between segmentations produced by humans, who often produce results with varying degrees of perceived detail, which are all intuitively reasonable and therefore "correct". On the other hand, the Martin error measure is sensitive to qualitatively different segmentations. A segmentation error measure takes two segmentations S1 and S2 as input and produces a real-valued output. For a given pixel pi, consider the segments in S1 and S2 that contain that pixel; the segments are sets of pixels. If one segment is a proper subset of the other, then the pixel lies in an area of refinement and the local error should be zero. If there is no subset relationship, then the two regions overlap in an inconsistent manner and the local error should be non-zero. If R(S, pi) is the set of pixels corresponding to the region in segmentation S that contains pixel pi, the local refinement error E is defined as:

E(S1, S2, pi) = |R(S1, pi) \ R(S2, pi)| / |R(S1, pi)|    (2)

Note that this local error measure is not symmetric. It encodes a measure of refinement in one direction only: E(S1, S2, pi) is zero precisely when S1 is a refinement of S2 at pixel pi, but not vice versa. There are two natural ways to combine the values into a measure of the error for the entire image. The Global Consistency Error (GCE) forces all local refinements to be in the same direction, whereas the Local Consistency Error (LCE) allows refinement in different directions in different parts of the image. Let n be the number of pixels:

GCE(S1, S2) = (1/n) min{ Σ_i E(S1, S2, pi), Σ_i E(S2, S1, pi) }    (3)

LCE(S1, S2) = (1/n) Σ_i min{ E(S1, S2, pi), E(S2, S1, pi) }    (4)

Although these error metrics are calculated by grouping pixels into objects first, they unfortunately tolerate over-segmentation and under-segmentation, as a consequence of their intended purpose of comparing human segmentations. As LCE ≤ GCE, it is clear that GCE is a tougher measure than LCE.

C. Boundary matching (Boundary Displacement Error)

Several measures work by matching boundaries between the segmentations and computing some summary statistic of match quality. The Boundary Displacement Error (BDE) measures the average displacement between the boundary pixels of one segmentation and the closest boundary pixels in the other segmentation [48]. Work in [51] proposed solving an approximation to a bipartite graph matching problem for matching segmentation boundaries, computing the percentage of matched edge elements, and using the harmonic mean of precision and recall, termed the F-measure, as the statistic. Furthermore, for a given matching of edge elements between two images, it is possible to change the locations of the unmatched edges almost arbitrarily and retain the same precision and recall score.
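A compact sketch of the local refinement error E and the GCE/LCE combinations defined above is given below. It operates on two integer label maps and is provided for illustration only; it is not the evaluation code used to produce the tables in this paper.

```python
# GCE / LCE between two label maps, following Martin's definitions (illustration only).
import numpy as np

def _refinement_error(s1, s2):
    """E(s1, s2, p) for every pixel p: |R(s1,p) minus R(s2,p)| / |R(s1,p)|."""
    s1, s2 = np.asarray(s1).ravel(), np.asarray(s2).ravel()
    l1, inv1 = np.unique(s1, return_inverse=True)
    l2, inv2 = np.unique(s2, return_inverse=True)
    # joint histogram: n[i, j] = number of pixels with label i in s1 and j in s2
    n = np.zeros((l1.size, l2.size))
    np.add.at(n, (inv1, inv2), 1)
    r1 = n.sum(axis=1)                                   # |R(s1, p)| for each s1 label
    return (r1[inv1] - n[inv1, inv2]) / r1[inv1]          # per-pixel refinement error

def gce_lce(s1, s2):
    e12, e21 = _refinement_error(s1, s2), _refinement_error(s2, s1)
    n = e12.size
    gce = min(e12.sum(), e21.sum()) / n                   # one refinement direction globally
    lce = np.minimum(e12, e21).sum() / n                  # direction chosen per pixel
    return gce, lce
```

Because the per-pixel minimum can never exceed the minimum of the two sums, this construction directly gives LCE ≤ GCE, matching the remark above.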
D. Information-based (Variation of Information)

The proposed metric is termed the Variation of Information (VI) and is related to the conditional entropies between the class label distributions of the two segmentations. Work in [52] computes a measure of the information content in each of the segmentations and of how much information one segmentation gives about the other. Several other measures work by counting the number of false positives and false negatives [53] and similarly assume the existence of only one ground truth segmentation. Due to the lack of spatial knowledge in such a measure, the label assignments to pixels may be permuted in a combinatorial number of ways while maintaining the same proportion of labels and keeping the score unchanged.

A. Data: synthetic MR images

A large number of segmentation approaches have been proposed in the literature [54, 55, 56, 57]. A good survey about their evaluation can be found in [58]. Table 2 shows the comparison between the automatic image segmentation and the ground truth image for the synthetic MR images. The averages and variances computed over the 25 synthetic MR images vary between 0.3245 ± 0.0012 and 0.5021 ± 0.0013 for the GCE criterion, between 0.124 ± 0.0034 and 0.3585 ± 0.0070 for LCE, between 0.4069 ± 0.0058 and 0.5912 ± 0.0067 for PRI, between 1.5021 ± 0.5871 and 5.2314 ± 1.2341 for VI, and between 92.8908 ± 22.5487 and 3.7077 ± 0.6532 for the BDE criterion.

B. Real data

In this section, images are obtained from the IBSR (Internet Brain Segmentation Repository) database [60]. As described on the IBSR, the database is composed of three-dimensional coronal brain Magnetic Resonance Images (MRI). The coronal three-dimensional T1-weighted spoiled gradient echo MRI scans were performed on 2 different imaging systems. The MR brain data sets and their manual segmentations were provided by the Center for Morphometric Analysis at Massachusetts General Hospital and are available at the IBSR. For each sub-database, the volumes segmented by experts serve as the ground truth. These databases are used by many users all around the world. The repository supplies brain MR images as well as segmentation results produced by trained experts in a manually guided manner. Fig. 3 shows different images from the IBSR database. For our experiment, we used 25 test images from the IBSR database and the ground truth (segmented by the expert) corresponding to each image. The different region-based segmentation methods are applied to each image and the Martin criteria are used to evaluate the performance of each algorithm. The analysis of the results in Fig. 4 demonstrates that some of the evaluated algorithms generate as many classes as those present in the ground truth. These findings are confirmed by the criteria reported in Table 3 (comparison of the eight segmentation algorithms using LCE and GCE errors, mean values, for the 25 real images used in this paper). The MS method performs better than the FCM, followed respectively by SVM, SCFCM, EM, K-Means, MRF, and PCNN. Accordingly, we compared segmentation performance on brain tissue; the results on real images are consistent with those obtained on synthetic images.

Computational time

The processing time for segmenting images is presented in Table 4. We list the CPU time for segmenting images in Fig. 10. It can be seen from Table 4 that the processing time for MRF is higher than for the other algorithms.
V. DISCUSSION AND CONCLUSION

This paper presents an objective comparison of region-based segmentation methods. Our study focuses on supervised and unsupervised deterministic classification, and on non-parametric and parametric probabilistic classification. Among the well-known techniques used in the scientific community, we selected eight methods and applied them to two different databases: the first is composed of synthetic MR images, available for download at www.ucinia.org, and the second of brain MR images from the IBSR database. For comparison, a ground truth was created in our laboratory for the synthetic MR images and by an expert for the IBSR database. To compare the different region-based segmentation methods, we used the Martin similarity indexes and the Probabilistic Rand Index. Five criteria were used: the Global Consistency Error, the Local Consistency Error, the Probabilistic Rand Index, the Variation of Information, and the Boundary Displacement Error. In each case, these criteria quantify the difference between the automatic segmentation and the ground truth. In this paper, we compared the performance of different region-based segmentation algorithms. Results show that EM outperforms the other seven algorithms on the three different image datasets. The analysis of the results of the five criteria demonstrates that, except for the EM, K-Means, SCFCM, and FCM algorithms, all the methods that we tested perform well for the segmentation of images such as those considered in this paper. Nevertheless, we group them into two classes. The first class contains SCFCM, K-Means, FCM, and EM; the latter algorithm has the best performance, with GCE = 0.6935, LCE = 0.4113, and PRI = 0.8245 for the synthetic data, GCE = 0.5021, LCE = 0.3585, and PRI = 0.6067 for the synthetic MR data, and GCE = 0.9268, LCE = 0.9047, and PRI = 0.6067 for the IBSR data. This is consistent with what has been reported on the robustness of the MS algorithm for feature extraction and image segmentation. The MS algorithm is an unsupervised clustering-based segmentation method and needs no a priori information on the number and shape of the data clusters. The FCM method takes advantage of local textural information and of the high inter-pixel correlation inherent in images. The second class, with worse quality scores for the criteria, groups, in decreasing order, the MRF, MS, PCNN, and SVM methods. The very high values of the five criteria for the EM method are due to the fixed segmentation parameters of the EM method, estimated by optimizing the likelihood. The optimization requires no 'step size' parameters and will not oscillate around the optimum; however, there is no guarantee of global solutions. These results might also be due to the parameter initialization of each algorithm. Last but not least, according to the table reporting the least values obtained for the GCE, LCE, PRI, VI, and BDE on the synthetic data, the EM method is well adapted to any type of image (synthetic MR and real MR images). The FCM, K-Means, and SCFCM methods come second, with almost the same values of the five criteria across the three datasets. Overall, the adaptive EM outperforms the other seven algorithms on the datasets considered (synthetic, synthetic MR, and MR images). As a prospect of this study, we are actively working on 3D segmentation methods. A study comparing criteria for the evaluation of image segmentation methods is also in progress.
Fig. 1. Region-based segmentation methods.

The synthetic MR images are composed of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), edema, and tumor, represented as a set of spatial probability maps for tissue and pathology, as shown in Fig. 2. In our laboratory, from these different tissue maps, we have created the ground truth for each image.

Fig. 2. From left to right: (a) T1 image, (b) ground truth, (c) white matter, (d) gray matter, (e) CSF, (f) edema, (g) tumor.

In this section we compare the results of the segmentation methods on synthetic MR images. Fig. 2 presents an example of a synthetic MR image together with the outputs of the different segmentation methods, and Table 2 below reports the values of the evaluation criteria. These examples allow us to understand how the criteria behave for different image segmentations. For a quick interpretation of the results, Fig. 4 reports the evolution of the Martin criteria. The best criterion values are obtained, on average, for the EM method (GCE = 0.9268, LCE = 0.9047, PRI = 0.9724, VI = 0.4935, and BDE = 3.245).

TABLE I. Averages and STD of the GCE, LCE, PRI, VI, and BDE mean values on the synthetic MR dataset for the different segmentation methods.

Table 3 shows the output of the criteria: with values bounded between 0 and 1, the results imply that the adaptive EM performs significantly better in segmentation than the benchmarks (FCM, K-Means, SCFCM, MS, MRF, PCNN, or SVM). The GCE, LCE, and RI values of the EM method in Fig. 12, for the 25 brain images, demonstrate the robustness of the EM method.
2014-10-01T00:00:00.000Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "7f764000d53f518965b6e6663bd1c8c7b595f355", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume4No6/Paper_27-A_comparative_study_of_Image_Region-Based_Segmentation_Algorithms.pdf", "oa_status": "HYBRID", "pdf_src": "Crawler", "pdf_hash": "7f764000d53f518965b6e6663bd1c8c7b595f355", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
49296130
pes2o/s2orc
v3-fos-license
D-Cateslytin: a new antifungal agent for the treatment of oral Candida albicans associated infections The excessive use of antifungal agents, compounded by the shortage of new drugs being introduced into the market, is causing the accumulation of multi-resistance phenotypes in many fungal strains. Consequently, new alternative molecules to conventional antifungal agents are urgently needed to prevent the emergence of fungal resistance. In this context, Cateslytin (Ctl), a natural peptide derived from the processing of Chromogranin A, has already been described as an effective antimicrobial agent against several pathogens including Candida albicans. In the present study, we compared the antimicrobial activity of two conformations of Ctl, L-Ctl and D-Ctl against Candida albicans. Our results show that both D-Ctl and L-Ctl were potent and safe antifungal agents. However, in contrast to L-Ctl, D-Ctl was not degraded by proteases secreted by Candida albicans and was also stable in saliva. Using video microscopy, we also demonstrated that D-Ctl can rapidly enter C. albicans, but is unable to spread within a yeast colony unless from a mother cell to a daughter cell during cellular division. Besides, we revealed that the antifungal activity of D-Ctl could be synergized by voriconazole, an antifungal of reference in the treatment of Candida albicans related infections. In conclusion, D-Ctl can be considered as an effective, safe and stable antifungal and could be used alone or in a combination therapy with voriconazole to treat Candida albicans related diseases including oral candidosis. The excessive use of antifungal agents, compounded by the shortage of new drugs being introduced into the market, is causing the accumulation of multi-resistance phenotypes in many fungal strains 1 . Infections caused by these resistant microorganisms often no longer respond to conventional treatment, therefore lengthening the duration of illness related to the infection. Moreover, the widespread use of antifungal agents in clinics and hospitals promotes the development and spread of antifungal-resistant strains and thus the occurrence of nosocomial infections. The development of new alternative molecules to conventional antifungal agents therefore constitutes a major public health issue. The human oral microbiome is a complex ecosystem made up of several hundred species of microorganisms 2,3 . Especially, this commensal flora plays a key role in maintaining oral homeostasis. However, the disturbance of this balance may cause serious infections including oral candidosis, one of the most prevalent opportunistic fungal infections affecting the oral cavity. Usually, oral candidosis only affects mucosal linings in an inflammatory process, but the rare systemic manifestations may have a fatal course 4,5 . Actually, over time, the microbial plaque forms on the tooth surface and on the oral mucosa. As a matter of fact, a local environment less exposed to the cleansing action of saliva, favours an important release of virulence factors by the pathogens of the plaque, and especially the most commonly isolated microorganism, Candida albicans, leading to inflammation of the mucosa and the onset of oral candidosis 6 . In addition, various factors, including age, diabetes, or medical treatments such as chemotherapy and corticosteroids trigger a decrease in the amount of saliva secreted in the oral cavity and are considered as predisposing factors for oral candidosis. 
Antifungal medications are often prescribed to treat oral candidosis. In fact, voriconazole constitutes an antifungal of reference to treat Candida albicans related infections. However, the prevalence of Candida species that are resistant to antifungal agents is increasing, making treatment options a concern 7-10 . Consequently, new alternative molecules to conventional antifungal agents used in dental practice are urgently needed to prevent the emergence of fungal resistance. Naturally occurring host defense peptides (HDPs), also named antimicrobial peptides, constitute an exciting class of drug candidates, especially because their mechanism of action presents less risk of inducing drug resistance. Indeed, the capacity of HDPs to interact with diverse cellular targets could explain that they have not yet generated widespread resistance 11,12 . HDPs are short cationic amphiphilic peptides that belong to the most ancient and conserved forms of innate immunity and exist across all major lineages. They display an unusually broad spectrum of activity against pathogens including bacteria, viruses, fungi and parasites 13 . Mammalian HDPs represent an important component of the innate immune system as they can trigger both direct microbe killing and rapid immune response modulation [14][15][16][17] . Among all isolated and characterized HDPs, peptides generated from the endogenous processing of Chromogranin A are of particular therapeutic interest. Chromogranin A is an acidic protein stored in the secretory vesicles of numerous nervous, neuroendocrine and immune cells and is released upon stress in most of the body fluids including saliva [18][19][20][21] . Chromogranin A is known to be a precursor for several biological active peptides. Those peptides are linear, short (less than 25 residues) and therefore very easy to synthesize for a minimal cost. Moreover, they are stable in a wide range of temperature and pH 22 . Specifically, Catestatin (CGA 344-364 ) has been reported to exhibit antimicrobial activity against a wide array of pathogens including bacteria, fungi and parasites [23][24][25][26] . Besides its crucial role as a catecholamine release inhibitor, Catestatin also triggers inflammation by exhibiting vasodilatation properties, activating neutrophils, attracting monocytes and mast cells, inducing mast cell degranulation and production of cytokines and chemokines [27][28][29][30][31] . Moreover, Catestatin is expressed in keratinocytes 32 . The arginine rich N-terminus fragment of Catestatin, named Cateslytin (Ctl; CGA 344-358 , RSMRLSFRARGYGFR) is an effective antimicrobial agent against several microbial strains including C. albicans 33,34 . Recently, we demonstrated that the dextrogyre (D) conformation of Ctl (D-Ctl) is much more potent than the natural peptide L-Ctl as an antibacterial agent 35 . Indeed, the substitution of some or all L-amino acids by D-amino acids increases the resistance of HDPs to proteolytic degradation 11,36 . In the present study, we compared the activity of the two conformations of Ctl, levogyre (L) and dextrogyre (D), against Candida albicans. Our results, based on antifungal, safety and mechanistic assays reveal that D-Ctl shows the most effective antifungal properties towards Candida albicans, and could be used to treat its associated diseases including oral candidosis. Results Both D-Ctl and L-Ctl are potent antifungal agents against Candida albicans. 
We first tested the potential of D-Ctl to inhibit Candida albicans growth using antifungal assays with different concentrations of D-Ctl, and compared it with L-Ctl. Our results show that D-Ctl displays a slightly better activity than L-Ctl with a minimal inhibitory concentration (MIC) of 5.5 μg/mL (2.9 μM) (Fig. 1B), compared to 7.9 μg/mL (4.2 μM) for L-Ctl (Fig. 1A). D-Ctl potentiates voriconazole, an antifungal of reference to treat Candida albicans associated infections. We then compared the activity of D-Ctl with voriconazole (VCZ), an antifungal of reference to treat C. albicans associated infections. For that purpose, antifungal assays were performed with increasing concentrations of VCZ. The MIC of VCZ was determined at 0.07 μg/mL (0.2 μM) (Fig. 1C). Although really potent, VCZ-resistant C. albicans strains have been isolated 37 . One way to prevent the emergence of fungal resistance is to use combination therapy and further reduce the doses of VCZ prescribed. The effect of the combination between VCZ and D-Ctl was determined using antifungal assays combining different concentrations of both compounds (Fig. 1D). Our results show that the combination using the minimal amount of both antifungal agents and able to kill 100% of Candida albicans was ½ MIC D-Ctl + ¼ MIC VCZ with a FIC index of 0.75 (FIC index = FIC VCZ + FIC D-Ctl = 0.5 + 0.25 = 0.75). According to EUCAST 38 , a FIC index included between 0.5 and 1 indicates an additive antifungal effect of the combination. As a result, VCZ and D-Ctl have an additive effect on C. albicans. In other words, by adding D-Ctl to the treatment, the concentration of VCZ could be decreased by 4 (¼ MIC VCZ = 0.018 μg/mL = 0.05 μM). D-Ctl and L-Ctl are not toxic for human gingival fibroblasts. To assess the cytotoxicity of both peptides, we performed MTT assays using human gingival fibroblasts (HGF-1) as a cellular model. Thus, each peptide was incubated with the cells at different concentrations for 24 hours, 48 hours and 72 hours (Fig. 2). As expected for a host defense peptide, L-Ctl was not toxic at 100 μg/mL for a period of time ranging from 24 to 72 hours ( Fig. 2A). Interestingly, D-Ctl was not cytotoxic either on HGF-1 after 72 hours for concentrations up to 100 μg/mL (Fig. 2B). As a result, neither L-Ctl nor D-Ctl showed cytotoxicity at their respective MIC. In addition, the combination of D-Ctl (½ MIC) and VCZ (¼ MIC) was also not toxic for human gingival fibroblasts (Fig. 2C). Unlike L-Ctl, D-Ctl is not degraded by the proteases secreted by Candida albicans. In order to use D-Ctl as a therapeutic agent against C. albicans, it should not be degraded by its proteases. Subsequently, we tested whether D-Ctl was stable in the supernatant of Candida albicans by HPLC, compared to L-Ctl. To this aim, L-Ctl or D-Ctl were incubated in the supernatant of Candida albicans for 24 hours at 37 °C prior being analysed by HPLC (Fig. 3A,B). As a control, the peptides and the supernatant were incubated separately and analysed by HPLC. (Fig. 3A,B, chromatograms 3). The profiles obtained for the supernatant of Candida albicans displayed numerous peaks corresponding to the peptides and proteins secreted by the pathogen (Fig. 3A,B, chromatograms 1). When L-Ctl was incubated with the supernatant of Candida albicans, the peak of L-Ctl disappeared from the chromatogram, suggesting that L-Ctl was degraded by proteases from the supernatant (Fig. 3A, chromatogram 2). 
However, D-Ctl was still present after an incubation of 24 hours, implying that it remains stable in the supernatant of Candida albicans (Fig. 3B, chromatogram 2). In addition, VCZ did not impact the stability of D-Ctl (Fig. 3B, chromatogram 4). These findings suggest that, as expected, the D-amino acids contribute to the stability of D-Ctl towards fungal proteases. Unlike L-Ctl, D-Ctl remains stable in saliva. Degradation during oral delivery is also a major concern for the use of an antifungal agent to treat oral candidosis. Consequently, we assessed the stability of D-Ctl compared to L-Ctl in the saliva of a cohort of eleven donors. Peptide integrity was measured after 24 hours of incubation by LC-SRM (Fig. 3C). Remarkably, unlike L-Ctl, D-Ctl was stable in saliva for all donors tested. These results suggest that the D-conformation of Ctl can overcome the lack of stability of the natural peptide L-Ctl in saliva. D-Ctl quickly invades C. albicans. D-Ctl and L-Ctl were incubated with C. albicans for 30 min. The excess of peptide was then removed to allow visualization of the peptide inside the yeast colonies. Both peptides were able to enter C. albicans during the 30 min incubation. Remarkably, after 95 min of incubation with D-Ctl (Fig. 4A,B), we observed two groups of colonies: colonies totally invaded by D-Ctl (23/39 colonies observed = 59%) (Fig. 4A) and colonies partially invaded (16/39 colonies observed = 41%) (Fig. 4B). All colonies totally invaded by D-Ctl show a systematic arrest of fungal growth and of ongoing cell divisions (Fig. 4A). On the other hand, colonies partially invaded by D-Ctl could grow by division of the non-invaded cells (Fig. 4B). Interestingly, D-Ctl was not observed transiting from cell to cell except during cellular division, as shown in Fig. 4A (6/6 observed colonies = 100%). As a control, untreated C. albicans show normal growth over time (Fig. 4C). Altogether, these results demonstrate that D-Ctl can rapidly enter C. albicans, but is unable to spread within a yeast colony unless from a mother cell to a daughter cell during cellular division. Remarkably, over a period of 17 hours, we also observed that the intensity of fluorescence of rhodamine-labelled L-Ctl decreases (Fig. 5A) whereas it stays stable for rhodamine-labelled D-Ctl (Fig. 5B) (34/34 = 100% of L-Ctl treated colonies and 23/23 = 100% of D-Ctl treated colonies). This could be explained by the degradation of L-Ctl as previously described (Fig. 3A). Discussion In the dental field, the most common pathology involving a fungal biofilm is oral candidosis. Oral candidosis manifests itself as an inflammatory process of the oral mucosa. Candida albicans is described as the main agent causing oral candidosis 4 . Oral candidosis is a public health issue that can affect patients at any age. Its prevalence is particularly high in at-risk populations such as elderly, diabetic, premature newborn, or immunodepressed patients at risk of invasive fungal infections. The most important aspect of treatment is improving oral hygiene. A sialagogue may be useful for the recovery of a normal salivary flow in case of xerostomia, one of the symptoms occasionally associated with oral candidosis. The other aspect of treatment involves antifungal medication. Naturally occurring host defense peptides (HDPs), also named antimicrobial peptides, constitute an exciting class of drug candidates, especially because their mechanism of action presents less risk of inducing drug resistance 12 .
In this context, we demonstrated here that D-Ctl, a derivative of L-Cateslytin (L-Ctl), is a potent antifungal agent against C. albicans and could be administered to treat oral candidosis as a monotherapy or in combination with voriconazole (VCZ), an antifungal agent of reference to treat C. albicans related infections. VCZ is effective for both mucosal and invasive candidosis, and is specifically used to treat patients who are resistant, intolerant or present a contraindication to treatment with fluconazole or amphotericin B. The results from a six-year study (1996-2001) of 218 Candida species isolates causing bloodstream infection confirm the high efficacy of VCZ, described as the most active drug among all azole compounds tested (voriconazole, fluconazole, itraconazole) towards C. albicans, C. parapsilosis, C. tropicalis (80% of isolates), C. krusei and C. glabrata 39 . Actually, we demonstrated in this study that by adding D-Ctl to the treatment, the concentration of VCZ could be decreased by a factor of 4, with potential implications for the emergence of resistant phenotypes. One way to explain this synergistic effect would be that the peptide punches holes in the cell membrane, therefore facilitating the penetration of VCZ into the pathogen. Further investigation will be needed to better understand this mechanism. When compared to VCZ, L-Ctl and D-Ctl were still less efficient against C. albicans. However, the powerful activity of most antifungal agents currently on the market is balanced by detrimental side effects. Indeed, each triazole, including fluconazole, itraconazole and voriconazole, as well as amphotericin B, has a profile of side effects ranging from rash, nausea, diarrhea and visual hallucinations to liver and/or kidney toxicity or heart failure 40 . To validate the therapeutic potential of D-Ctl, we verified that this peptide was not degraded by proteases secreted by Candida albicans. Indeed, as a mechanism of defence against the host, C. albicans is able to release virulence factors such as the mannoprotein Mp65, the Seoul imipenemase Sim1 and the secreted aspartic protease Sap6 41 . As a result, unlike L-Ctl, D-Ctl was not degraded by the proteases secreted by C. albicans. Besides, in contrast to L-Ctl, D-Ctl was also stable in saliva (including that of two donors with gingivitis). Gingivitis manifests itself as an inflammation of the gingival tissue, which can lead to periodontitis. Actually, these periodontal pathologies, as well as oral candidosis, induce qualitative and quantitative changes in the chemical composition of the saliva [42][43][44] . In this context, further experiments on saliva samples from donors affected by oral candidosis and/or periodontitis would be relevant to support the stability of D-Ctl. However, the data obtained already suggest that D-Ctl could be used as a topical antifungal medication delivered in the oral cavity to treat oral candidosis. In addition, time-lapse video microscopy reveals a quick invasion of Candida albicans by D-Ctl, without induction of cell lysis. Besides, D-Ctl was able to spread from a mother cell to a daughter cell only during cellular division, thereby stopping ongoing cell divisions. In contrast with D-Ctl, the decreasing intensity of fluorescence of rhodamine-labelled L-Ctl over time suggests its degradation. One hypothesis could be that the resulting fragments of L-Ctl may then be released into the extracellular compartment, thus explaining the decreasing fluorescence in yeast formations colonized by Rho-L-Ctl.
Actually, little is known about the mechanisms by which cationic HDPs enter or escape cells. However, recent experiments indicate that Ctl, as well as closely related cationic peptides, develop an α-helix-β-sheet structure when in contact with negatively charged membrane interfaces, resulting in phospholipid membrane deformations and pore formations 45,46 . In addition, further investigations will be needed to clearly identify the target of D-Ctl within the cells of Candida albicans. Altogether, our study suggests that D-Ctl constitutes an excellent candidate for the development of a new antifungal agent against Candida albicans. The antifungal protection of the mucosa is an indication of the first order, as a possible prevention of oral fungal infections. A topical application (as a suspension or a gel) of such an antifungal agent therefore constitutes an interesting pathway for the protection of oral mucosa. Further investigations will thus be needed to develop D-Ctl-based gels or suspensions for topical applications in the treatment of oral candidosis. From this perspective, it would be relevant to complement the data obtained with the evaluation of the cytotoxicity of D-Ctl towards epithelial cells of the oral mucosa. Methods Peptide synthesis. L-Ctl (CGA 344-358 , RSMRLSFRARGYGFR) and its derivate D-Ctl, as well as the rhodamined peptides Rho-L-Ctl and Rho-D-Ctl, were synthesized by Proteogenix SAS according to the Merrifield Technique, a stepwise solid-phase peptide synthesis approach with FMOC chemistry, and purified to >95% by MALDI-TOF mass spectrometry and reverse phase high-performance liquid chromatography (RP-HPLC). The rhodamine moiety added to L-Ctl and D-Ctl and used for video microscopy is located on the N-terminal end of the polypeptidic chain. The MIC of the antifungal agents, defined as the lowest concentration of drug able to inhibit 100% of the growth of a pathogen was determined using a modified Gompertz model 47 . HGF-1 cells (10 6 cellules/mL) were plated in 96-well plates for 24 hours, prior being treated with different concentrations of L-Ctl, D-Ctl or the combination ½ MIC D-Ctl + ¼ MIC VCZ for 24, 48 or 72 hours. The culture media was then removed and replaced by MTT diluted in culture media (0,25 mg/mL). Cells were then incubated for an additional 3 hours at 37 °C, 5% CO 2 and lysed with isopropanol/HCl (v/v). After 15 min incubation at room temperature and under agitation, cell viability was assessed by optical density OD 550nm using a spectrophotometer (Multiscan EX). Peptide stability assays in the supernatant of Candida albicans. The supernatant of Candida albicans was prepared as follows: a single colony of Candida albicans was resuspended in 5 mL of Sabouraud medium and incubated at 37 °C overnight. The culture was then centrifuged at 10000 g for 1 min and the supernatant was filtered using a 0.22 μm MillexH-GV (Merck Millipore). In order to check sterility, an aliquot of the supernatant was incubated at 37 °C for 48 h. The absence of growth was interpreted as a lack of viable microorganism. 100 μL of supernatant was then directly incubated or not with L-Ctl or D-Ctl (186 μg/mL = 100 μM) at 37 °C for 24 h. As a control, each peptide (186 μg/mL = 100 μM) as well as the combination ½ MIC D-Ctl + ¼ MIC VCZ were incubated in water (100 μL) at 37 °C for 24 h. Samples were then separated using a Dionex HPLC system (Ultimate 3000) on a Nucleosil reverse-phase 300-5C18-column (4.6 × 250 mm; particle size: 5 μm; porosity, 300 Å) (Macherey Nagel). 
Absorbance was monitored at 214 nm and the solvent system consisted of 0.1% (v/v) TFA in water (solvent A) and 0.09% (v/v) TFA in 70% (v/v) acetonitrile-water (solvent B). Elution was performed at a flow rate of 700 μL/min with a gradient of solvent B as indicated on the chromatograms. Each peak was manually collected. Peptide stability assays in saliva by LC-SRM. Saliva samples were obtained from a cohort of 11 donors (4 men and 7 women) and collected at the Faculty of Dentistry of the University of Strasbourg (France) (see Supplemental Material for donors characteristics). According to European and French legislation, institutional review board or ethics committee approval was not required for this study. Informed consent was obtained from all donors. Samples were prepared as follows: 100 μL of saliva was directly incubated with or without each peptide of interest (230 μg/mL = 123 μM) at 37 °C for 24 h. As a control, each peptide (230 μg/mL = 123 μM) was incubated in water (100 μL) at 37 °C for 24 h. Samples were then centrifuged at 14 000 g during 5 min at 4 °C and then diluted 3 times in water with 0.1% (v/v) HCOOH. For each sample, 2 µL were injected on the LC-SRM system. All separations were carried out on an Agilent 1100 Series HPLC system (Agilent Technologies). For each analysis, the sample was loaded into a trapping column ZORBAX 300SB-C18 MicroBore Guard 5 µm, 1.0 × 17 mm (Agilent Technologies) at 50 µL/min with aqueous solution containing 0.1% (v/v) HCOOH and 2% CH3CN. After 3 min trapping, the column was put on-line with a ZORBAX 300SB-C18 3.5 µm, 0.3 × 150 mm column (Agilent Technologies). Peptide elution was performed at 5 µL/min by applying a linear gradient of solvent A (water with 0.1% (v/v) HCOOH) and B (CH3CN with 0.1% (v/v) HCOOH), from 40 to 95% solvent B over 5 min followed by a washing step (3 min at 95% solvent B) and an equilibration step (13 min at 40% solvent B). For LC-SRM, the antimicrobial peptide was targeted with either an oxidized or non-oxidized state [RSMRLSFRARGYGFR]. For each state, three different precursors corresponding to three different charged states (3+, 4+ and 5+) were followed. For each of the 6 precursors, 8 transitions were monitored (48 transitions in total) on a QQQ-6490 triple quadrupole mass spectrometer (Agilent technologies) in unscheduled mode and within a cycle time of 3 000 ms. For each transition, the collision energy was experimentally optimized by testing 7 values (step of ±3 V) centred on the reference value. The reference value was calculated using the equation given by the supplier. The isolation width for both Q1 and Q3 was set to 0.7 m/z unit. Mass data collected during LC-SRM were processed with the Skyline open-source software package 3.6.1 48 . Area intensities of each precursor were manually checked. Time-lapse video microscopy. Diluted precultures (1/1000) of Candida albicans were incubated for 1 hour at 37 °C without agitation in glass bottom μ-Dish 35mm,high (Ibidi) previously treated with poly-L-lysine. The rhodamine-labelled peptides (Rho-L-Ctl or Rho-D-Ctl) were then added to the culture at a concentration of 10X MIC and incubated for 30 min at 37 °C under agitation. To perform time-lapse video microscopy, the peptides were removed and replaced by Sabouraud medium. Yeasts were then plated at 37 °C, 5% CO 2 , on a Nikon Eclipse Ti inverted microscope equipped with an Andor Zyla sCMOS camera and a 60X objective. To prevent evaporation, a layer of mineral oil (Sigma) was added on top of the media prior imaging. 
Images (fluorescence and phase contrast) were captured every 10 min for 20 hours using Nikon NIS-Elements AR software, and processed with ImageJ.
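As a worked illustration of the fractional inhibitory concentration (FIC) index reported in the Results (1/2 MIC D-Ctl + 1/4 MIC VCZ giving a FIC index of 0.75), the short Python sketch below computes the index from the MICs of each compound alone and in combination. It is a generic helper, not code used in this study; the numbers simply restate the values given above.

def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    # Fractional inhibitory concentration index for a two-drug combination.
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# MICs reported above: D-Ctl = 5.5 ug/mL, VCZ = 0.07 ug/mL;
# the effective combination was 1/2 MIC D-Ctl + 1/4 MIC VCZ.
fici = fic_index(mic_a_combo=0.5 * 5.5, mic_a_alone=5.5,
                 mic_b_combo=0.25 * 0.07, mic_b_alone=0.07)
print(round(fici, 2))  # 0.75; per EUCAST, 0.5 < FICI <= 1 indicates an additive effect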
2018-06-19T13:28:07.324Z
2018-06-18T00:00:00.000
{ "year": 2018, "sha1": "805a3807bbb85ccdfe8aacb376f2824c33974047", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41598-018-27417-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "805a3807bbb85ccdfe8aacb376f2824c33974047", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
236961839
pes2o/s2orc
v3-fos-license
Characterization and clustering of kinase isoform expression in metastatic melanoma Mutations to the human kinome are known to play causal roles in cancer. The kinome regulates numerous cell processes including growth, proliferation, differentiation, and apoptosis. In addition to aberrant expression, aberrant alternative splicing of cancer-driver genes is receiving increased attention as it could lead to loss or gain of functional domains, altering a kinase’s downstream impact. The present study quantifies changes in gene expression and isoform ratios in the kinome of metastatic melanoma cells relative to primary tumors. We contrast 538 total kinases and 3,040 known kinase isoforms between 103 primary tumor and 367 metastatic samples from The Cancer Genome Atlas (TCGA). We find strong evidence of differential expression (DE) at the gene level in 123 kinases (23%). Additionally, of the 468 kinases with alternative isoforms, 60 (13%) had significant difference in isoform ratios (DIR). Notably, DE and DIR have little correlation; for instance, although DE highlights enrichment in receptor tyrosine kinases (RTKs), DIR identifies altered splicing in non-receptor tyrosine kinases (nRTKs). Using exon junction mapping, we identify five examples of splicing events favored in metastatic samples. We demonstrate differential apoptosis and protein localization between SLK isoforms in metastatic melanoma. We cluster isoform expression data and identify subgroups that correlate with genomic subtypes and anatomic tumor locations. Notably, distinct DE and DIR patterns separate samples with BRAF hotspot mutations and (N/K/H)RAS hotspot mutations, the latter of which lacks effective kinase inhibitor treatments. DE in RAS mutants concentrates in CMGC kinases (a group including cell cycle and splicing regulators) rather than RTKs as in BRAF mutants. Furthermore, isoforms in the RAS kinase subgroup show enrichment for cancer-related processes such as angiogenesis and cell migration. Our results reveal a new approach to therapeutic target identification and demonstrate how different mutational subtypes may respond differently to treatments highlighting possible new driver events in cancer. Introduction Melanoma is the deadliest form of skin cancer, with about 232,100 new cases and 55,500 deaths worldwide each year [1]. Although incidence is less than 5% of new cancer cases in the U.S., incidence and deaths worldwide continue to rise, especially in the young adult populations [2]. Stage 1 or 2 disease is easily treated by surgery, where 5-year survival rates are > 90% [1], but if not caught early tumors may metastasize to the nearby lymph nodes and then throughout the body. Once the disease reaches the brain, median survival time decreases to 5 months [3]. Thus, novel systemic treatments for metastatic melanoma are needed. Kinases have become compelling cancer targets because they contain mutations that produce constitutive kinase activation and dysregulate signaling pathways in cancer. Among the 538 known kinase genes in humans, there are numerous relevant targets. Specifically, mutations have been observed in kinases serving as growth factor receptors [4], cell cycle regulators [5,6], nuclear signaling [7], and apoptosis regulators [8]. In melanomas, BRAF is most commonly mutated, along with other kinases including NRAS and NF1. Fleuren et al. identified 23 additional kinases harboring driver mutations for melanoma, including the receptor FGFR3 and cell cycle regulator CDK4 [9]. 
Additional targets may remain undiscovered as atypical kinases, which can phosphorylate proteins but lack a typical kinase domain. Along with chemotherapy and immunotherapy, treatments for advanced melanomas also incorporate small molecule kinase inhibitors (KI). There are currently 37 FDA approved KIs on the market for cancer treatment, with~150 in ongoing clinical trials [10]. Targets of these small molecule KIs include BRAF, which occurs in about 50% of melanoma patients [1,11], and MEK, a downstream signaling target of BRAF in the MAPK pathway. Despite initial successes for these drugs, limitations remain. For example, half of all BRAF-mutant tumors treated with BRAF inhibitors advance within 6-8 months post-treatment [12] whereas other hotspot mutations, such as in NRAS, lack effective KI treatments altogether [13]. Complementary targeted approaches in the form of immune-checkpoint blockers ipilimumab, pembrolizumab, and nivolumab, have recently been shown to significantly improve survival in some patients, even in those with wildtype BRAF [14][15][16]. Although these treatments do not work in the majority of patients [17], combining them with KIs may improve survival prospects. Thus while existing drugs show promise for a subset of patients, new targets and combination therapies are in dire need to address treatment-resistant tumors, and especially those tumors with wildtype BRAF. There are multiple forms of kinase dysregulation: activating mutations, overexpression, underexpression, copy number alterations, repression, and chimeric translocations; but there has been much less research into gene isoform distributions, in part due to the difficulty of estimating isoform composition from short read RNA sequences [18,19]. For these data, computational approaches are required to estimate isoform counts prompting development of transcript alignment algorithms such as RSEM [20], and faster pseudo-alignment algorithms such as kallisto [21]. The gold-standard of isoform analysis might eventually be achieved through "3 rd generation" long read sequencing technologies such as PacBio [22] and Oxford Nanopore [23], providing more accurate, contiguous isoform sequences, although these currently have a high error rate and are costly compared to 2 nd gen. sequencing [24]. Regardless, long and short read sequencing technologies both discern differential isoform composition to address the question of how alterations in sequential exon continuity can change functional outcomes. Although isoform distributions are not widely reported in the literature, there is reason to suspect they are altered in cancer tissues. First, alternative splicing is highly abundant under normal conditions where up to 94% of human genes undergo alternative splicing [25], and the dominant isoform depends on cell type [26]. Second, in various cancers, trans-acting splicing factors can be mutated or mis-regulated [27][28][29][30][31], potentially skewing isoform distributions. Third, somatic DNA mutations-abundant in cancer-may occur on splice sites, favoring or suppressing splicing events. Kinases are known to undergo alternative splicing events in cancer [18] and these are implicated in tumor progression. Examples include MKNK2 in glioblastoma [32]; CD44 in breast cancer [33]; and KLF6 in prostate, lung, and ovarian cancers [34]. Splicing induces losses or gains of functional or regulatory domains, documented in cancers, altering the functions of affected proteins in the cell. 
Despite these observations, differential isoform usage is an extra level of detail not normally analyzed in cancer studies. Here we propose to detect and demonstrate the biological relevance of isoform alterations in metastatic melanoma. Notably, a recent study of the human kinome in prostate cancer found that there was little overlap between genes with differential expression and genes with differential splicing [35], suggesting a study of the latter will yield additional therapeutic targets. Despite our emphasis on differential isoform expression, we include differential expression of genes (i.e., representing a gene locus with a single expression value), to show distinct and relevant findings learned from each type of assessment. In this study, we analyze RNA-seq data from The Cancer Genome Atlas (TCGA) skin cutaneous melanoma project (SKCM) to study changes to the kinome of metastatic vs. primary tumor melanomas. Important findings include isoforms downregulated in metastatic samples that correspond with known and novel suppressors of metastasis and additional subgroupings of metastatic samples with narrowly focused therapeutic potential. Our results identify characteristics of wildtype BRAF tumors, as well as new subdivisions among BRAF mutant tumors. TCGA data We obtained RNA-seq data and kinase gene counts-estimated using HTSeq [37]-from the National Cancer Institute (NCI)'s Genomic Data Commons (GDC) portal for TCGA's skin cutaneous melanoma (SKCM) project. This included data from 472 samples gathered from 468 patients: 367 samples for metastatic tumors, 103 for primary tumors, 1 for an additional metastatic tumor from the same patient, and 1 for solid normal tissue. The latter two samples were not used in our analysis. The data was processed in 14 batches, with the largest batch (labeled "A18") having 218 of the samples in three plates. The remaining batches had 10-48 samples in a single plate each. Isoform quantification For the purpose of quantifying the abundance of isoforms in the human kinome, we used the kallisto (v0.45.0) package [21] in conjunction with the transcript sequences of protein coding genes in the Gencode (release 29) annotation of the human genome. We first constructed the kallisto index file using the 98,913 FASTA sequences of transcript isoforms of human protein coding genes included in the Gencode annotation (ftp://ftp.ebi.ac.uk/pub/databases/gencode/ Gencode_human/release_29/gencode.v29.pc_transcripts.fa.gz; accessed March 15, 2019). FASTQ-formatted RNA-Seq reads (48-bp, paired-end) for each TCGA SKCM sample were produced from the bam files obtained from the Genomics Data Commons Data Portal. In order to avoid biases in kallisto estimates of fragment lengths, for each sample we produced FASTQ files in which the order of the reads was randomized. We then used these randomized reads to perform the kallisto "quant" analysis, from which we obtained the transcripts per million (tpm) estimates of each isoform abundance. Sample quality control 3' bias for each sample was estimated using the QoRTs package [38]. For sample purity, we used the consensus purity estimate from Aran et al. [39]. Samples with purity < 70% were removed to create our "high purity" sample set. Samples with a QoRTs 3' bias score > 0.55 (see ref [38] for Methods) were also removed in our "quality controlled" set. 
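A minimal sketch of the sample-level quality control just described, assuming the per-sample consensus purity estimates and QoRTs 3' bias scores have been gathered into a table; the column names and values are hypothetical, and pandas is used for the filtering.

import pandas as pd

# Hypothetical per-sample QC table; the thresholds follow the text above.
qc = pd.DataFrame({
    "sample":  ["S1", "S2", "S3", "S4"],
    "purity":  [0.83, 0.64, 0.91, 0.72],   # consensus purity estimate
    "bias_3p": [0.31, 0.28, 0.61, 0.40],   # QoRTs 3' bias score
})

high_purity = qc[qc["purity"] >= 0.70]                            # "high purity" (HP) set
quality_controlled = high_purity[high_purity["bias_3p"] <= 0.55]  # "quality controlled" (QC) set
print(quality_controlled["sample"].tolist())                      # ['S1', 'S4']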
After clustering kinase isoform expression in metastatic samples, we also classified 83 metastatic samples as having high amounts of immune infiltrate using k-means clustering with 2 centers (see Clustering of Metastatic Samples below). Differential expression (DE) We tested differential expression of all genes between primary tumor and metastatic samples using the DESeq2 toolbox for R [40] with two models: "sample type" and "sample type + batch" to account for batch effects. Reverse phase protein array (RPPA) data The Reverse Phase Protein Array (RPPA) level 3 normalized data were downloaded from the GDAC data portal (http://gdac.broadinstitute.org/). The original data contains 355 SKCM samples consisting of 92 primary tumor and 263 metastatic samples. Since the RPPA data used antibodies raised in rabbit and mouse, we manually mapped the protein names to human gene names with the aid of GeneCards (https://www.genecards.org/). We found 165 unique genes corresponding to the 208 RPPA protein probes. This included 33 kinase genes with 56 (26.9%) corresponding probes. We focused on the 224 samples with purity ≥ 70%, the same set as in our differential gene expression analysis. We tested differential protein expression between primary tumor (n = 78) and metastatic samples (n = 146) using a two-sided Wilcoxon's rank sum test. A Benjamini-Hochberg adjusted p-value < 0.05 was deemed significant. Calculations for differential isoform ratios (DIR) Transcript isoform counts for the TCGA samples were estimated from RNA-seq data with kallisto [21], using isoform information for protein coding loci provided by the Gencode v.29 transcriptome annotation. In total, there were 3,040 protein coding isoforms for the human kinome. The 69 genes with only one coding isoform and the one pseudogene in the kinome list (PRKY) were not tested, leaving 2,971 isoforms. For each gene, isoform counts (in transcripts per million or TPM) were grouped as a vector (e.g. a five-element vector for a gene with five coding isoforms), and the vector was normalized to sum to 1. One vector per sample was made, ignoring samples with zero counts for all isoforms. We used two models to test for differential isoform ratios. The first was a permutation method utilizing linear discriminant analysis (LDA). LDA was performed to reduce the space of isoform vectors to the 1D line which best separates sample types, and the LDA statistic was calculated. The sample labels were then randomized n_iter times and the statistic recalculated to create a null distribution, from which the p-value was found. This method had the benefit of producing a single p-value without distributional assumptions, but could only find p-values as low as 1/n_iter. In the second model, principal component analysis (PCA) was performed on the space of normalized isoform vectors, providing n-1 components for n isoforms. PCs with zero variance were removed. We tested the difference in isoform coordinates between sample types along each PC using one of three different statistical tests (see below) and combined the p-values using Fisher's method. For both models, p-values were adjusted using Benjamini-Hochberg FDR adjustment. Comparison of statistical tests Given that the permutation test becomes computationally prohibitive for large datasets and high precision, we attempted to find a statistical test that could reproduce the results obtained through permutations.
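Before turning to the comparison of statistical tests, the second (PCA-based) model described above can be sketched for a single gene as follows. This is an illustrative reimplementation, not the authors' code; Welch's t-test is used as the per-component test and the data are simulated.

import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

def dir_pvalue(tpm, is_metastatic):
    # tpm: (n_samples, n_isoforms) isoform TPM matrix for one gene
    # is_metastatic: boolean array marking the metastatic samples
    totals = tpm.sum(axis=1)
    keep = totals > 0                                  # drop samples with no counts
    ratios = tpm[keep] / totals[keep, None]            # isoform proportions per sample
    grp = is_metastatic[keep]
    # n-1 principal components for n isoforms (the proportions sum to 1).
    pcs = PCA(n_components=ratios.shape[1] - 1).fit_transform(ratios)
    pvals = []
    for k in range(pcs.shape[1]):
        comp = pcs[:, k]
        if np.var(comp) < 1e-12:                       # skip (near) zero-variance PCs
            continue
        _, p = stats.ttest_ind(comp[grp], comp[~grp], equal_var=False)  # Welch's t-test
        pvals.append(p)
    # Combine the per-component p-values with Fisher's method.
    return stats.combine_pvalues(pvals, method="fisher")[1]

rng = np.random.default_rng(1)
tpm = rng.gamma(2.0, 5.0, size=(40, 3))                # simulated counts, 3 isoforms
labels = np.arange(40) < 20                            # first 20 samples "metastatic"
print(dir_pvalue(tpm, labels))

Across all genes, the resulting p-values would then be Benjamini-Hochberg adjusted as stated above.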
We used three different tests along the principal components of the space of isoform vectors: the Wilcoxon rank sum test, Welch's t-test, and the general independence test from R's conditional inference (coin) package [41]. We combined the p-values from each principal component with both Fisher's method (FM) and the asymptotically exact harmonic mean (HMP) from DJ Wilson [42]. This resulted in six sets of p-values, which we compared to the permutation test results. We found that the t-test combined with Fisher's method gave the best correlation between p-values (r = 0.92) and ranks (ρ = 0.92), while the coin test combined with HMP gave the best correlation between the logarithms of p-values (r = 0.89). However, total correlation may be of less interest than the sensitivity and specificity of the tests. We calculated Youden's J statistic (sensitivity + specificity - 1) at three significance levels: p = 0.05, 0.01, and 0.001. The t-test combined with Fisher's method performed best at all three levels, with J = 0.79, 0.80, and 0.80 respectively, followed by the coin test with Fisher's method. The geometric mean of these two tests, p_new = √(p_t-test × p_coin), performed better, with J = 0.80, 0.84, and 0.86 respectively, and also increased all three correlations. We thus adopted this test for scaling up the number of genes. The Wilcoxon test performed poorly due to difficulties handling ties in the data. Clustering of metastatic samples A quasi-Poisson generalized linear model (GLM) was used to test each individual metastatic sample vs. all primary tumor samples for each protein-coding isoform (using TPM counts from kallisto), resulting in a 3,040 x 367 matrix of p-values. Before clustering, the data were thresholded into three bins, setting all p < 0.05 to +1 for isoforms with increased expression, all p < 0.20 to -1 for isoforms with decreased expression, and all other entries to 0. The reason we used such a liberal p-value for decreased expression is that most count data follow a Poisson-like distribution with a low median, which makes decreased expression for individual samples unlikely to test as significant. For example, isoform SLK-202 tests as highly significant for decreased expression (p = 3.4e-9) for all metastatic vs. primary tumor samples but only tests as significant (p = 0.0014 and 0.038) for two individual samples. After digitizing, we applied k-means clustering to the data matrix, using the elbow method to find an appropriate number of clusters. Enrichment for tumor region, mutation subtype [11], batch ID, and kinase phylogenetic group in each cluster was tested using Fisher's exact test. Gene biological process and kinase group enrichment 1,572 biological process (BP) annotations were downloaded from the PANTHER database at geneontology.org. Genes were ranked by p-values (for DE or DIR) and significant genes were tested for enrichment using the one-sided Fisher's exact test (i.e. hypergeometric test), using the remaining kinase genes as the background. We found that enrichments could differ drastically depending on the p-value threshold chosen for significance, so we searched for BP enrichment at multiple thresholds. Additionally, testing for DE or DIR with small sample sizes produced less extreme p-values than testing with large sample sizes, resulting in comparisons of >300 significant genes from one set of results (more than half the kinome) to <10 genes in another set of results.
So we tested four percentile-based thresholds-the top 5%, 10%, 20% and 40% of all genes with a p-value-to obtain a comparable set of enrichments between sample sets. Results described are for the top 5% of genes unless noted otherwise. We did not adjust p-values for the biological processes for several reasons. Having discovered a set of significant genes, we wanted to investigate the functional role served by these genes. Some annotations, such as "protein kinase", will never test as significant because all the genes in our background and foreground are kinases, making the expected false discovery rate lower than assumed in Benjamini-Hochberg (BH) correction. Furthermore, GO terms are highly dependent, making common adjustment methods such as BH inappropriate. Finally, GO terms do not account for individual isoform activities, thus do not address our underlying question. We did calculate an empirical false discovery rate (see S1 Results) merely to compare our enrichment results to those of a randomly selected set of "significant" genes. Kinase phylogenetic group enrichment (see "Human kinome" above) was calculated in the same manner using percentile thresholds, with p-values unadjusted. Split-read alignment mapping To evaluate changes in the relative abundance of isoform using an alternative method, we quantified the relative abundance of split reads specifically associated with the isoform of interest. For this purpose, we aligned the RNA-Seq reads using STAR against the hg19 version of the human genome assembly. We used the QoRTs package [38] to quantify split read support for splice junctions. For cases of alternative promoters, we compared the relative abundance of split reads supporting a common exon junction with alternative upstream exons. In cases of isoforms differentiated by a skipped exon, we considered the reads supporting the junction skipping the exon, and the average number of reads supporting the two junctions of the alternatively spliced exon. The relative abundance was expressed as a fraction of reads specifically supporting one isoform out of the total number of reads supporting both isoforms. The difference in the relative abundance was compared between primary tumor and metastatic samples using a one-sided Wilcoxon rank sum test, guided by the expectation set by the output from the kallisto tool. In addition to this method, we also performed local analysis of exon usage using the package DEXSeq [43] on the quality-controlled sample set (see "Sample quality control" above). All kinase genes, including those with only one coding isoform, were tested. Survival analysis We obtained patient survival data, i.e. days until death, from TCGA. To determine differences in survival across sample clusters (see "Clustering of metastatic samples" above), survival events and their respective times up to 4000 days were compiled for samples in each cluster based on vital status. We then used this data to generate a Kaplan-Meier estimator to plot the survival curves of each cluster. Log-rank tests were used to evaluate significance. We assessed the correlation between kinase gene expression and patient survival using overall survival calculated for 205 high purity metastatic samples with survival data. For each kinase gene (n = 538), HTSeq gene counts (normalized by size factor by DESeq2) were correlated against overall survival using the Spearman correlation test. P-values were adjusted using Benjamini-Hochberg method. 
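As an illustration of the one-sided Fisher's exact (hypergeometric) enrichment test described above, the sketch below tests a single annotation term against the background of the remaining kinase genes; the counts are invented for the example, and scipy provides the test.

from scipy.stats import fisher_exact

def enrichment_p(sig_in_term, sig_total, term_size, background_size):
    # One-sided Fisher's exact test for over-representation of a term
    # among the significant genes (rows: significant / not; columns: in term / not).
    table = [
        [sig_in_term, sig_total - sig_in_term],
        [term_size - sig_in_term,
         background_size - term_size - (sig_total - sig_in_term)],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return p

# Illustrative counts: 24 "top 5%" kinases, 8 of which carry the annotation,
# out of 60 annotated kinases among 468 tested genes.
print(enrichment_p(sig_in_term=8, sig_total=24, term_size=60, background_size=468))

The same call, looped over annotation terms and percentile thresholds, mirrors the unadjusted enrichment screen described in this subsection.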
To determine early apoptosis, the cells were stained with annexin V (BD Biosciences, San Jose, CA) and analyzed by FACS at 24h, 48h, and 72h time points. The positive threshold for annexin V detection was determined by comparing a negative control (cells treated with the same volume of Lipofectamine used in transfection) and a positive control (cells treated with 1 μM of Adriamycin, a DNA-damaging drug which induces apoptosis) at each time point for each replicate. Similarly, the positive threshold for GFP expression was determined by comparing a negative control (cells treated with the volume of Lipofectamine used in transfection, but no vector) and a positive control (cells transfected with the eGFP-only vector) at each time point for each replicate. We analyzed the percentage of annexin V-positive cells among GFP-positive cells over time for the negative control (i.e., Lipofectamine, no vector), eGFP-only, SLK-201-eGFP, and SLK-202-eGFP. Cells expressing GFP were binned into five groups of increasing GFP fluorescence intensity for further analysis as follows: B1 (10^4 - 10^4.5), B2 (10^4.5 - 10^5), B3 (10^5 - 10^5.5), B4 (10^5.5 - 10^6), B5 (> 10^6). Overexpression of SLK isoforms in Metastatic Melanoma To examine differences in the actin cytoskeleton, the cells were stained with Phalloidin-iFluor 594 (Abcam, Waltham, MA) and DAPI (Thermo Scientific, Waltham, MA). They were visualized with a Zeiss LSM 880 NLO Laser Scanning Microscope at 24h, 48h, and 72h time points. Results We analyzed the 538 kinase genes comprising the human kinome for changes in total mRNA expression, and 3,040 isoforms for altered isoform expression, between metastatic and primary tumors. Using the computational tools HTSeq [37] and kallisto [21] with short-read sequences, we implemented the data analysis workflow depicted in Fig 1. Along with differential expression defined at the gene level and differential isoform ratios calculated within each locus, we performed a clustering analysis to identify pathway, mutational and functional characteristics that define each subgroup. In this paper, we will first cover the DE results for varying sample sets (all samples, high purity samples only, and samples separated by genomic subtype), covering significant genes and their biological process enrichments. We will then do the same for the differential isoform ratio results before comparing the two groups. Sample demographics Primary (n = 103) and metastatic (n = 367) tumors were obtained from the TCGA skin cutaneous melanoma project (SKCM) (Table 1). Primary tumors originated in a number of locations including arms or legs, trunk, head or neck, or other areas, such as armpit, genitalia, etc. Metastatic locations included regional cutaneous or subcutaneous tissue, regional lymph nodes, distant metastases, and unclassified metastases. Samples were skewed towards males, and were mostly derived from white individuals. Patient age at time of diagnosis ranged from 15 to 90, with a median of 58 years. Differential expression (DE) dominated by receptor tyrosine kinases We first tested differential expression at the gene level. Out of 538 kinase genes, 281 (52%) had significant DE (p adj < 0.05) between all primary tumor and all metastatic samples (S1 Table). The top groups, ranked by p-value, included both non-receptor (nRTKs) and receptor tyrosine kinases (RTKs) (Fig 2A). We looked for biological process enrichment in the top 5% and 10% of genes, and found strong enrichment for immune cell activation (both innate and adaptive).
Clustering analysis (see Methods) revealed these genes have strongly correlated expression, suggesting their high expression results from immune infiltrate in the metastatic samples, i.e. immune cells attacking tumor cells. Using this approach, we identified 83 metastatic samples with high amounts of putative immune infiltrate (see Methods), which we removed before rerunning the DESeq2 analysis. This action removed the enrichment for nRTKs, whereas RTK enrichment remained (Fig 2B). We next addressed the impact of impure tumor samples, as measured by the consensus purity estimate developed in Aran et al. [39]. When 168 samples with < 70% estimated purity were removed from the original set, which included 80 of the 83 immune infiltrate samples, again we saw enrichment for nRTKs disappear whereas RTKs remained significant (Fig 2C). We named this filtered group the "high purity" (HP) group. In both assessments, enrichment for RTKs remained significant when assessed as subsets of the top 5% to ~20% of genes. There was also a lesser enrichment for the STE kinase group (p = 0.031 at 20% threshold; Fig 2C), which contains kinases upstream of MAPK signaling cascades. When only the high purity (HP) samples were compared, we found 197 significant genes including 26 of the 57 RTKs (S2 Table). Of these 26 kinases, 11 are known to be internalized from the cell surface into the nucleus [44]: FGFR1/3, FLT1, ERBB4, INSR, TIE1, CSF1R, EGFR, IGF1R, MET, and KDR. Internalized receptors have been linked to cancer progression and resistance to therapy by, for example, activating DNA damage response pathways [45][46][47]. Absolute fold-changes for significant genes ranged from 0.503 (KSR) to 11.7 (NRK). Next, we examined differential gene expression when the HP primary tumor and metastatic samples were subdivided into their particular genomic subtypes (BRAF, RAS, triple WT and NF1) (S3 Table). Although this approach reduced the sample size for each test, a similar enrichment pattern emerged. Specifically, the DE genes for the BRAF hotspot mutants, NF1 mutants, and triple WT samples were all enriched for RTKs (odds ratios = 4.0, 6.9, and 3.2 respectively at the 5% threshold: Fig 2D). The deviant result was the case of the RAS hotspot mutants, where DE was dominated not by RTKs but by CMGC kinases (odds = 4.1, Fig 2E). This group contains both cyclin-dependent kinases (which regulate the cell cycle) and downstream MAP kinases (which regulate gene expression), as well as kinases directly involved in splicing regulation (i.e., serine arginine protein kinases). Although RTKs (particularly Ephrin receptors, i.e., EPHA) remain significantly altered in the RAS mutants, this result suggests a distinct set of alterations is associated with metastases in RAS mutants. In metastatic BRAF mutants, mutated BRAF itself had non-significant increased expression. Influence of sample batches on differential gene expression Because not all the samples in the TCGA data set came from the same batch, we also ran DESeq2 using both sample type and batch ID as model variables. This approach increased p-values, decreasing the number of significant genes. However, 123 kinase genes remained significant at the p adj < 0.05 level, including 16 RTKs (Table 2), compared to 197 total genes when only the sample type was the variable.
Gene ranking was not substantially altered (Spearman correlation, ρ = 0.81) and enrichment trends were similar to our prior results for all genomic subtypes (BRAF, RAS, triple WT), with the exception of the NF1 mutant samples. These could not be assessed due to the small sample size (2 primary and 11 metastatic tumors), where the primary tumors and metastatic samples were not from the same batch. Excepting this subtype, for the remaining analyses we included batch ID as a model variable. Biological process (BP) enrichment differs by genomic subtype In addition to the kinase group enrichment, for each set of results (i.e., all samples, high purity, BRAF mutants, etc.) we looked for BP enrichment among significant genes, as a hypothesisfree approach to further characterize the metastatic tumors (S4 Table). Top genes were highly enriched for immune-related annotations when all 470 samples were used, the highest being "adaptive immune system" (p = 5.1e-9) ( Table 3). These enrichments nearly disappeared when samples with < 70% purity were removed. Surprisingly, when only analyzing HP samples, significant BP annotations were depleted, with only four annotations receiving a p-value below 0.05. "Ephrin receptor signaling pathway" (p = 0.033); was the only non-immune-related enrichment. Significant ephrin pathway genes included 7 ephrin receptors and downstream non-RTKs such as ROCK1/2 (regulators of actin cytoskeleton, downstream of RHOA and EPHA4 [48]), PAK3 (downstream of RAC1 and EPHBs and important for cytoskeletal reorganization in dendritic spines [49]) and YES1 (oncogene downstream of EPHA2 which induces cell proliferation and migration [50]). Ephrin receptors are prototypical RTKs that impact cell shape, adhesion, and movement through activation or repression of the Rho GTPase family [51], suggesting an important role in metastatic processes. The lack of BP enrichments suggests either that DE is widely distributed among a number of cell processes, or that enrichment patterns differ by genomic subtype and disappear when lumped together. To address this question, we separated the high purity samples into genomic subtypes and found support for the latter hypothesis, where division into individual subtypes revealed enrichment in distinct processes (Table 3). We observed strong BP enrichment among DE genes for samples with BRAF mutations, with the most significant annotation being "cell differentiation" (p = 1.3e-4). Neurogenesis and cell projection-related enrichments were also discovered. The DE genes for RAS mutants had weaker enrichments, although select examples such as "positive regulation of defense response" and "regulation of angiogenesis" are relevant for cancer. The ephrin receptor signaling pathway was enriched in both the BRAF (p = 0.008) and RAS (p = 0.035) mutants. The NF1 mutant and triple WT sets had smaller sample sizes (13 and 28 samples respectively). The NF1 mutants were enriched for "regulation of MAPK cascade" (p = 0.0054), "chemotaxis", and "neuron projection guidance" among others. The triple WT samples-unlike the other genomic subtypes-were enriched for responses to cytokine stimulation, especially interleukin-1 (p = 0.0053), as well as the inflammatory response and defense response. Reverse Protein Phase Array (RPPA) Data We compared our results with an orthogonal dataset containing reverse protein phase array (RPPA) data. 
Although isoform information was not available, 33 kinase genes had available RPPA data, wherein 14 genes (42%) had significant (p adj <0.05) differential expression (S1 Fig), compared to 60 of 175 non-kinase genes (34%) (S5 Table). The 14 genes include two RTKs, ERBB3 and KIT. While the number of kinase genes covered by the RPPA data is too small for a signaling pathway enrichment analysis, a gene ontology analysis revealed that the 14 genes participate in vital biological processes related to cell growth and proliferation. In particular, the cell cycle regulatory genes including EEF2K, PRKCD, PRS6KB1, CHEK2, MTOR, and BRAF were all upregulated in the metastatic group. These results corroborate that some of the kinase genes are also dysregulated at the protein level as tumors progress from primary to metastatic state. Kinase genes exhibit differential isoform usage between primary and metastatic tumors To complement the usual procedure of DE analysis, we next tested whether multi-isoform kinase genes exhibit differential isoform ratios (DIR) between primary and metastatic tumors. Per the Gencode v.29 annotation, we tested 468 such genes with 2,971 total coding isoforms. We measured significance in 317 (68%) via a permutation test (p adj < 0.05) when all 470 tumor samples were used, more genes than had tested significant for DE. Our complementary PCA test (see Methods) found p-values as low as 5.3e-28 for LIMK1. This high level of observed DIR could be an artefact of sample impurity-since different cell types might express isoforms in different ratios-or experimental artefacts such as fragment sequence bias [52]. Fragment bias results from degraded RNA reads. Because these reads are sequenced from the 3' end following poly(A) enrichment protocols, high levels of degradation results in overestimation of 3' fragment isoforms and underestimation of 5' fragment isoforms (Fig 3A and 3B), although the total gene count estimate is unaffected. We inspected the isoform counts and found that genes with the strongest DIR had 3' fragment isoforms, suggesting samples with high 3' fragment bias could be driving significance. This bias was concentrated in the primary tumor samples (two-sided Wilcoxon, p = 1.3e-8) (Fig 3C). We also found sample impurity was concentrated in metastatic samples (p = 1.6e-4). Thus, both could contribute to the observed levels of significance. Using the histograms as a guide, we removed samples with less than 70% purity or a QoRTs estimate of 3' bias > 0.55 from further analysis (Fig 3C). This reduced the number of samples to 50 primary tumor and 178 metastatic (S3 Table), which we deemed the "quality-controlled" (QC) sample set. In this stringent QC set, only 60 genes had DIR with p adj < 0.05 (Permutation test), and the most significant kinase was SLK at p = 7e-6 ( Table 4, full list in S6 Table). As a post-hoc analysis we tested the effects on individual genes by removing samples one-by-one to assess the influence of fragment bias or sample impurity. (See S1 Results, S2 and S3 Figs) Differential gene expression does not predict differential isoform ratios Having controlled for fragment bias and impurity, we asked whether genes with differential expression between primary and metastatic tumors were also likely to exhibit DIR. We compared the p-values for DIR from the QC set to the p-values for DE from the HP set. Genes with p adj < 0.05 had non-significant overlap (Fisher's exact test, p = 0.310), with only 15 genes overlapping (Fig 4). 
Gene rank (by p-value) had no correlation (Spearman, ρ = 0.038). The gene MAP3K3, for example, had the third highest level of DIR (p = 5.6e-5) but no observed change in expression (p = 0.77, rank 487). Interestingly, genes with significant DIR were enriched for nRTKs (Fig 5A) but not RTKs, the opposite of what we observed for DE genes. Thus, DE and DIR affect different genes. We separated the QC samples into genomic subtypes, as we did for the DE analysis, and calculated DIR for each subset. Due to the small sample sizes, few genes tested as significant with our permutation test. For example, the BRAF mutants revealed only four genes with p adj < 0.05 (SLK, MOK, ABL2, and SYK), while the other 3 subtypes revealed no significant genes after p-value adjustment (summarized in Table 5). As seen for the full sample set, no ranked gene list for any DIR sample group correlated with its DE counterpart.

DIR affects different biological processes than seen for DE

Because many unadjusted p-values were significant for DIR, we elected to search for gene ontology (GO) enrichments. For each sample set, we searched for biological process (BP) enrichment in the top genes (ranked by p-value) using percentile thresholds from 5%-40% (see Methods). Enrichments are described for the top 5% (24) genes unless noted otherwise. For comparative purposes, we first examined the full, unfiltered sample set, which contained low-purity and high-fragment-bias samples; this revealed 221 BP terms with p < 0.05 and 10 additional terms with p < 0.001. The most significant terms included "positive regulation of translation" (p = 1.5e-4), "cytoskeletal organization", "response to amino acid starvation", and "blood vessel development" (Table 5). Immune-related enrichments were strongest at the 40% threshold, indicating putative immune infiltrate may affect DIR results, but the most significant genes were not immune-related. The QC set had fewer BP enrichments than the full sample set (Table 5). These enrichments included "regulation of endocytosis" (p = 3.5e-4), "cytoskeletal organization", "endothelial cell migration", "cell differentiation", and "cell cycle arrest" (Fig 5B), all of which have a putative relevance to cancer. The genomic subtype sets revealed distinct BP enrichments, as they did when testing DE genes. In contrast to the DE genes, the DIR genes between BRAF mutant primary and metastatic tumors did not show strong BP enrichments, while the DIR genes between RAS mutant samples showed enrichment for 94 BPs. The strongest of these was "positive regulation of angiogenesis" (p = 1.6e-4) and related enrichments such as "vasculature development". Other enrichments included "cell-cell communication", "protein transport", and "membrane organization" (Fig 5C). Such enrichment patterns would not be discovered if DE alone were studied. In contrast, significant genes from the BRAF mutants had 27 processes enriched below p = 0.05 (including 6 cell locomotion-related enrichments) and none below p = 0.008.

Resolving alternative splicing events in kinase genes

Focusing on DIR with discrete splicing changes, we identified skipped exons, alternative promoters, and alternative terminal exons (Table 4). For example, ABL1 has two long isoforms (ABL1-201 and -202), which differ only in their promoter site, that have increased expression in metastatic samples. An additional isoform (ABL1-203) encodes a shorter 5' fragment and decreases in expression.
However, ABL1 does not test as significant in DE between primary and metastatic samples, indicating that the DIR analysis can reveal aberrations that differential gene expression does not capture. To test the kallisto DIR data for evidence of splicing differences, we quantified RNA-seq reads mapped directly to the nucleotide sequences of exon junctions in several genes from Table 4. This provides a resolved view of exon splicing patterns in the samples which did not rely on kallisto (see Methods). Within the melanoma sequence data, we confirmed exon skipping in three genes-MAP3K3 (exon 3), FES (exon 11) (Fig 6) and SLK (exon 13) (Fig 7). We also confirmed switching to mutually exclusive exons in two genes-exon 8 of FGFR3 and the terminal exon of MKNK2-and increased use of an alternative promoter in LIMK1 (Fig 6). We illustrate the fraction of split reads, out of all reads, supporting these events. In SLK, the most significant gene on our list, expression of the long isoform SLK-202 is decreased, whereas the short isoform SLK-201 increases (Fig 7). The short isoform skips exon 13 predicting a putative role for loss of this exon in cancer. We compared expression of this alternative exon in normal melanocytes using RNA-seq data from Zhang et al. [53]. Exon 13 was absent in the normal cells, and largely specific to primary tumor samples. Some genes have DIR which coincides with significant DE. For example, 6 of the 7 coding isoforms of FGFR3 are suppressed in metastatic samples (S4 Fig), while the remaining isoform -205 has mildly increased expression. PAK6, with 14 isoforms, undergoes a similar alteration. In BLK, DIR of 3 isoforms is driven by an unequal increase of 2 major isoforms, rather than all 3. To address the functional consequences and biological implications of isoform switching, we matched the alternatively spliced regions in these five genes to domain annotations obtained from UniProt ( Table 4). The skipped exon in SLK encodes a section of a coiled-coil region in the C-terminal domain. SLK uses this domain to dimerize at high concentrations, and these dimers activate apoptosis [54]. MKNK2 switches to a shortened terminal exon which lacks the MAPK binding site, interfering with downstream signaling. The 11 th exon of FES encodes the SH2 domain, which is necessary to activate the kinase domain [55]. The 3 rd exon of MAP3K3 is not mapped to any domain, but it precedes the PB1 protein-interaction domain. These data indicate that the isoform changes modulate the usage of important domains in the kinases, which can ultimately affect their function and participation in signaling networks. Finally, the alternative promoter of LIMK1 shortens the first zinc-binding domain, a domain that inhibit the protein's kinase activity [56]. Comparison to DEXSeq results In a parallel approach, we analyzed the primary and metastatic sample data using DEXSeq, a method commonly used to measure differential exon usage. DEXSeq found only 11 exonic bins in 5 genes to have significant differential usage (p adj <0.05), compared to 60 genes with our method (S7 Table). Three of these genes were also highly significant with our method: MAST4, FGFR3, and SLK (S5A-S5C Fig). The remaining two, PDGFRA and LMTK3, are likely false positives due to the low number of counts for their significant exons (median~1) (S5E Fig). 
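Both the direct junction check described above and the DEXSeq comparison ultimately rest on counts of junction-spanning reads. The sketch below shows one minimal way to turn such counts into a per-sample skipping fraction and compare groups; the read counts, group sizes, and the rank-sum test are illustrative assumptions, not the pipeline used in this study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def skipping_fraction(inclusion_reads, skipping_reads):
    """Fraction of junction-spanning reads supporting exon skipping, per sample."""
    inc = np.asarray(inclusion_reads, dtype=float)
    skp = np.asarray(skipping_reads, dtype=float)
    total = inc + skp
    return np.divide(skp, total, out=np.full_like(skp, np.nan), where=total > 0)

# Hypothetical junction read counts for an exon such as SLK exon 13
primary_frac = skipping_fraction([40, 55, 38], [5, 8, 4])                 # primary tumors
metastatic_frac = skipping_fraction([20, 15, 22, 18], [30, 41, 35, 28])   # metastatic tumors

stat, p = mannwhitneyu(primary_frac, metastatic_frac, alternative="two-sided")
print("median skipping fraction (primary):", np.nanmedian(primary_frac))
print("median skipping fraction (metastatic):", np.nanmedian(metastatic_frac))
print("Mann-Whitney p =", p)
```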
Before multiple test correction, the alternate promoter of LIMK1 was significant (p = 0.0014), but not the SH2 domain of FES, the MAPK-binding region of MKNK2, nor the 3rd exon of MAP3K3, despite our confirmation with direct junction sequence alignment. We found multiple reasons for the low sensitivity of DEXSeq (see S1 Results), which led us to measure DIR using isoform counts.

Overexpression of SLK isoforms in metastatic melanoma

SLK is involved in apoptosis and in the disassembly of actin [57]. We wished to see whether overexpression of the two SLK isoforms could produce cell death in metastatic melanoma. We hypothesized that expression of the full-length isoform (SLK-202) would produce more cell death than the short isoform (SLK-201), owing to the lack of one dimerization domain (the coiled-coil region) in the shorter isoform. We also hypothesized that there would be differences in actin disassembly between SLK isoforms. In these experiments, SLK-201 and SLK-202 were cloned into p-RECEIVER-M98, an eGFP-fusion expression vector. We transiently transfected A375 metastatic melanoma cells with the negative control (i.e., Lipofectamine, no vector), eGFP-only, SLK-201-eGFP, and SLK-202-eGFP. Cells were collected at 24h, 48h, and 72h post transfection. We found no endogenous SLK-202 in A375 using RNA-seq data from the Sequence Read Archive (SRR961660; S6A Fig). We measured apoptosis by annexin V staining over the time course (Figs 7D and S7A). These data indicate that the long SLK isoform (SLK-202) induces apoptosis at a higher rate. This finding corresponded to increasing construct expression levels, indicating that the functional impact of the longer isoform could be detected only at higher expression levels and longer timepoints (S7B-S7F Fig). We also found that SLK-202 co-localizes with actin filaments more strongly than SLK-201 or the eGFP-only control (Figs 7E and S8A-S8D). At 48h, the SLK-202 transfected cells begin to lose their structure, and by 72h the cells have mostly detached. Since the N-terminus of SLK is the main contributor to cell death [57], we removed the N-terminal 373 aa of SLK-202 (Δ1-373 SLK-202).

Clustering on DIR identifies correlations with genomic subtype and tumor location

To identify similarities in metastatic samples based on isoform expression patterns, we clustered the samples (columns in Fig 8). Rather than clustering raw expression data, we determined which of the kinase isoforms was significantly upregulated or downregulated in each of the 367 metastatic samples (see Methods) relative to all primary tumor samples. This allowed us to address the simpler question of which isoforms are altered in which samples. To identify correlated patterns of upregulation or downregulation we also clustered the isoforms (rows in Fig 8). Of the 3,040 protein coding kinase isoforms, 235 had significantly altered expression in > 13% of metastatic tumor samples. Clustering this reduced dataset with the k-means elbow method identified 4 sample clusters and 4 isoform groups (S9 Fig). However, we found that using k-means with 5 isoform groups strengthened certain BP enrichment patterns. These 5x4 clusters are depicted in Fig 8.
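As an illustration of this kind of two-way clustering, the sketch below builds an elbow curve over candidate k values and then clusters samples and isoforms separately with k-means. The call matrix is a synthetic stand-in for the +1/0/-1 up/down-regulation calls described above; the shapes, probabilities, and chosen cluster counts are assumptions for demonstration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def elbow_inertias(X, k_values, seed=0):
    """Within-cluster sum of squares for a range of k (the 'elbow' curve)."""
    return [KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
            for k in k_values]

# X: samples x isoforms matrix of calls (+1 upregulated, -1 downregulated, 0 unchanged)
rng = np.random.default_rng(1)
X = rng.choice([-1, 0, 1], size=(200, 235), p=[0.1, 0.7, 0.2]).astype(float)

ks = range(2, 9)
for k, w in zip(ks, elbow_inertias(X, ks)):
    print(f"k = {k}: within-cluster SS = {w:.1f}")

# After picking k from the elbow, cluster samples and isoforms separately
sample_clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
isoform_groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X.T)
print("samples per cluster:", np.bincount(sample_clusters))
print("isoforms per group:", np.bincount(isoform_groups))
```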
For each sample cluster, we tested enrichment for batch ID, region (skin/soft tissue, lymph node, and distant metastasis), and genomic subtype. Notable enrichments in Cluster A (n = 55 samples) include the skin/soft tissue location and BRAF hotspot mutations. Cluster B (n = 69 samples) was identified as a lymph node cluster with mild enrichment in triple WT samples. Distant metastases were depleted in both the A and B clusters. Cluster C (n = 60 samples) had no region enrichment but was strongly enriched for RAS hotspot mutations (Fisher's exact test, p = 4.4e-4, odds = 2.9). Cluster D (n = 183 samples) stood out as a low expression cluster, with expression largely similar to the primary tumor samples and little upregulation of isoforms compared to other groups. Moreover, decreased expression of isoforms (shown in blue) occurred in many samples. This cluster was enriched for distant metastases. The batch ID enrichment analysis identified batch A18 in Cluster C, suggesting batch effects could have influenced our results. To address this issue, we clustered only the 199 metastatic samples (54% of all such samples) in batch A18 (S10 Fig), originally found in groups A-D. We found four clusters comparable to the four described above, and Cluster 3 was still significantly enriched for RAS hotspot mutants (p = 0.032, odds = 2.1). Clustering all samples not in A18, originally present in groups A-D, also revealed 4 clusters, and though genomic subtype was not available for most of these samples, Cluster C still had the highest enrichment for RAS mutants (p = 0.16, odds = 2.3). Thus, the RAS group enrichment appears to be independent of the batch. Cluster D in our main heatmap was enriched for batch A37, a small batch (n = 41 samples) that is considerably smaller than the cluster itself. We also compared the level of 3' bias and sample impurity in each cluster and found that Cluster B had low purity (median 42%) compared to the other three (medians of 72%, 80%, and 79%, respectively). Median 3' bias did not differ noticeably, although Cluster C had the lowest mean bias (0.517, QoRTs score), indicating higher quality samples. Taken together, these data suggest that metastatic samples have characteristic subgroups related to tumor location and genomic subtype, where isoform expression patterns may help to identify the most similar samples to test as treatment subgroups.

Isoform groups correlate with biological process annotations

We performed a similar analysis on the five isoform groups (i.e., rows), looking for kinase phylogenetic group and BP enrichments compared to the total human kinome. Group 1 was enriched for genes involved in blood vessel morphogenesis (p = 3.6e-6) and related annotations, as well as MAPK regulation. These isoforms are upregulated in Clusters A and B. Since these genes are active in the skin/soft tissue sample cluster and regional lymph nodes, the isoforms may be important in the first transition from primary tumor to metastatic melanoma. This group is also enriched for RTKs. Group 2 was strongly enriched for nRTKs and contained genes in the category of immune response, for example genes used by leukocytes such as T cells and B cells (p = 5.2e-11). These isoforms are consistently upregulated in Cluster B. Due to their highly correlated expression and the low estimated purity of the Cluster B samples (median 42%), this group likely arises from immune cells infiltrating the tumor, consistent with previous findings from Akbani et al. [11]. Cluster B is also enriched for samples taken from lymph nodes, a prime location for immune cells to interact with the tumor. Group 3 was enriched for kinases that regulate cell motility (p = 0.0081). No phylogenetic kinase group enrichments were found, although this group had weak CMGC enrichment compared to the other four groups in Fig 8.
These isoforms had the highest expression in Cluster C, containing RAS hotspot mutant samples and distant metastases. We note a strong pattern of exclusivity for Group 3 isoforms with the immune infiltrate cluster of Group 2 isoforms, suggesting a novel means of stratifying samples for clinical testing. Group 4 was enriched for kinases which positively regulate apoptosis (p = 0.010) and cell differentiation (p = 0.0045), and for STE kinases. These isoforms were upregulated in Clusters A and C. This group contains two isoforms of CDK19, a gene implicated in cancer proliferation (a third isoform, CDK19-203, lacks the seventh exon and decreases in metastatic samples). The function of these isoforms in apoptosis is not explored; on the one hand apoptotic processes may occur spontaneously in cancer due to cellular stress and DNA damage [58], on the other hand alternate splicing can modulate pro-and anti-apoptotic functions in the same gene, like BCLX [59]. Samples with high levels of immune infiltrate (i.e. Cluster B) appear to have no enrichment of these isoforms, indicating how therapeutics could be specific for one subgroup and be ineffective in another. Group 5 contained isoforms of genes enriched for regulation of RNA biosynthesis and transcription (p = 0.0059). These isoforms had correlated downregulation in several samples (Clusters B and D), although they are not universally downregulated and in fact increase in some samples. One such gene, NME1, is a known suppressor of metastasis [60]. Also in this group are two isoforms of MAPKAPK3 (-201 and -208), a gene which activates autophagy in response to stress [61] and represses transcription factor E47 [62]. A shorter isoform, -202, is increased in metastatic samples. This isoform lacks the p38 MAPK-binding site, meaning it cannot be activated by p38. This apparent isoform switching was not identified by our DIR analysis because isoform -201 increases in some metastatic samples. RPS6KA4-201 also significantly decreases, though not the gene's two secondary isoforms -202 and -205. These isoforms lack a nuclear binding site on the 3' end, suggesting it is RPS6KA4's nuclear binding that is selected against. The list of isoforms in Group 5 is given in Table 6, and the full list for each sample cluster and isoform group may be found in S8 Table. Some isoforms had divergent expression patterns depending on cluster. For example, the major isoform of BRD4, BRD4-201, was found in Group 5, indicating decreased expression in several samples. In contrast, this isoform increased in RAS-mutant metastatic samples, as did two shorter isoforms BRD4-205 (a member of Group 3) and BRD4-203 (S11 Fig). This suggests BRD4 may be a drug target specific to RAS-mutant melanoma; indeed, a recent study found that Vemurafenib-resistant melanoma was susceptible to BRD4 degradation [63]. Consistent with this observation, DE analysis revealed an 11% increase in BRD4 expression in metastatic RAS mutants, but this increase is not significant (p unadjusted = 0.452). Furthermore, we could not confirm kallisto's isoform assignments using exon junction alignment, although the reported increase in isoform 205 -a shortened isoform which includes the two bromodomains but not the C-terminal region or NET domain-may suggest an underlying switching effect. Immune infiltrate correlates with increased survival In our analysis of survival across sample clusters, Cluster 2 (n = 67) was observed to have a higher median survival compared to the other three sample clusters (Fig 9A). 
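A minimal sketch of the kind of survival comparison described above is shown below, using the lifelines package; the choice of library, the column names, and the clinical values are assumptions for illustration, not the study's actual data or software.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical clinical table: survival time (days), event indicator, sample cluster
df = pd.DataFrame({
    "time":    [300, 800, 1200, 150, 2000, 640, 90, 1500, 400, 2500],
    "event":   [1,   1,   0,    1,   0,    1,   1,  0,    1,   0],
    "cluster": [2,   2,   2,    1,   2,    3,   4,  2,    3,   1],
})

in_c2 = df["cluster"] == 2
kmf = KaplanMeierFitter()
kmf.fit(df.loc[in_c2, "time"], df.loc[in_c2, "event"], label="Cluster 2")
print("median survival, Cluster 2:", kmf.median_survival_time_)

# Log-rank test: Cluster 2 versus all other samples
res = logrank_test(df.loc[in_c2, "time"], df.loc[~in_c2, "time"],
                   event_observed_A=df.loc[in_c2, "event"],
                   event_observed_B=df.loc[~in_c2, "event"])
print("log-rank p =", res.p_value)
```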
Fittingly, Cluster 2 corresponds to samples with immune infiltration. A log-rank test comparing Cluster 2 survival against the rest of the samples showed a trend that did not reach statistical significance (p = 0.065). Clusters 1 (n = 54), 3 (n = 60) and 4 (n = 175) demonstrated no significant difference in patient survival after applying pairwise log-rank tests. We also analyzed the correlation between overall survival and HTSeq gene counts for each kinase gene. Of the 538 genes tested, WNK2 and OBSCN presented the strongest negative correlations between expression (see Methods) and patient survival (Spearman ρ = -0.26 and -0.24, respectively), while PRKACB showed the strongest positive correlation (ρ = 0.233) (Fig 9B). The unadjusted correlations were significant; however, after multiple test correction (BH procedure), none of the correlation values rose to the level of statistical significance, with WNK2 having the lowest adjusted p-value of 0.110.

Discussion

Given the rise in melanoma cases across the world, and the preliminary success of new therapeutic approaches combining kinase inhibitors and other treatments, we were encouraged to look for differential isoform expression, which has not been intensively studied, and compare it to differential expression identified using conventional approaches (i.e., using the gene locus as a proxy for average expression). We show that both differential expression and altered isoform ratios are prevalent in the human kinome in metastatic melanoma compared to primary tumor melanoma. Furthermore, these changes differ by genomic subtype and tumor location. Affected genes were enriched for several biological processes including immune response, angiogenesis, cell differentiation, chemotaxis, and cell projection organization. Our results provide insight into the regulation of melanoma progression and possible new routes for grouping therapeutic targets. Different genes were affected by differential expression (DE) and differential isoform ratios (DIR). These genes differed in both phylogenetic groups (e.g., receptor tyrosine kinases in DE vs non-receptor tyrosine kinases in DIR) and biological process enrichments. Thus, isoform analysis may reveal novel information about cancer progression that DE analysis cannot. The drivers behind these splicing events are unknown, but are likely multifactorial. For example, mutations in splicing factors can determine outcomes of alternative splicing, but so may somatic mutations or SNPs [64]. Additional determinants derive from epigenetic changes such as aberrant DNA methylation [65] and RNA modifications [66].

Isoform switching may affect protein function

We chose to examine six genes with especially significant isoform switching in greater detail. Metastatic samples showed SLK overexpression in our study, something that has been previously observed in other cancer types such as ErbB2-driven breast cancer [67]. Knocking down this gene markedly reduces cell migration in 3T3 MEF cells [68]. Invasion may therefore be the functional benefit that SLK overexpression provides to metastatic melanoma. However, while the short form of SLK (SLK-201) is overexpressed in metastatic samples, the long form (SLK-202) is underexpressed. Overexpression of SLK can cause dimerization via the C-terminal coiled-coil domain; these dimers then activate apoptosis [54]. The short form of SLK (SLK-201) skips an exon that encodes a coiled-coil region in the C-terminal domain.
Our experiment found introduction of the SLK-202 isoform to be more apoptotic at high concentrations; it is therefore possible that the decrease in the long SLK-202 isoform, seen in TCGA metastatic samples, decreases apoptotic potency and facilitates the transition toward metastasis. Thus, SLK-202 isoform expression may provide a therapeutic target. Furthermore, the transfected SLK-202 isoform localized to actin filaments along the nuclear periphery more readily than the SLK-201 isoform. Further experiments are needed to address the impact of the differential localization. MAP3K3 has been identified as an oncogene in various cancers [69][70][71]. Although we observed no differential expression of the gene (after immune-infiltrate samples were removed), we found that skipping of exon 3 was significant in metastatic samples. The functional effect of this skipping is unknown; it precedes, but is not part of, the PB1 protein-protein interaction domain. MAP3K3 plays important roles in angiogenesis, cell differentiation, and proliferation and may regulate its partners through this structural edit. In metastatic samples, MKNK2 was found to switch to a shortened terminal exon which lacks the MAPK binding site. This switching has been previously observed in glioblastoma [32] (compared to normal samples), where the short terminal exon showed pro-oncogenic activity. The authors demonstrated that use of splice switching oligos in glioblastoma reduced the presence of the short terminal isoform and inhibited the oncogenic properties, suggesting this approach might also work in melanoma. Another event we observed was in the FES gene, a non-receptor tyrosine kinase. The 11 th exon, which encodes the SH2 domain and is necessary to activate the kinase domain [55], was skipped at a significantly higher rate in metastatic samples. FES has been previously identified as a tumor suppressor in melanoma [72], but we did not observe significant DE in our analysis. We predict that the skipping of the SH2 domain effectively turns off the kinase activity without decreasing the overall gene count. This effect would be consistent with reports of wild type FES acting as a tumor suppressor [73]. DE analysis alone would have missed this important effect. Notably, FES has several known inhibitors that target the SH2 domain and thus would not be effective against the short isoform [73]. FGFR3, which has highly significant negative DE, also has a significant alternative splicing event which affects the third Ig-like domain. There was a comparatively higher level of isoform FGFR-205 (also known as FGFR3-IIIc) and less of FGFR-202 (or FGFR3-IIIb). This IIIb/c imbalance has been observed in other cancers, such as colorectal [74]. The same study found that knocking down FGFR3-IIIc inhibited cell growth and induced apoptosis, but not FGFR3-IIIb. The negative DE was unexpected given FGFR3 is often considered an oncogene, but the gene is known to limit growth in tumors of epithelial origin [75]. Hence the decreased expression of IIIb and switching to IIIc may be two separate mechanisms of altering FGFR3 activity. Finally, an isoform of LIMK1 with an abrogated N-terminal LIM domain was expressed at a significantly higher level in metastatic samples. Deleting both LIM domains was previously found to increase kinase activity 3-7 fold [76], suggesting this isoform has greater kinase activity. 
Targeting LIMK1 with small molecular inhibitors has been shown to reduce migration and invasion of malignant melanoma [56], suggesting increased activity would promote malignancy. LIMK1 also did not have significant DE in our dataset. Expression pattern of RAS hotspot mutants Our various analyses discovered that RAS mutants have an expression level pattern distinct from the other three genomic subtypes. BRAF and MEK inhibitors, while useful for treating BRAF-mutant melanoma, have no or limited effectiveness against RAS mutants [77]. BRAF mutants that gain resistance to BRAF inhibitors often acquire a secondary NRAS mutation [78], meaning any effective RAS mutant treatment may also aid in treating drug-resistant BRAF-mutants. We found that DE of kinases in RAS mutants is concentrated in CMGC kinases (as opposed to receptors as in the other three subtypes) and that DIR is concentrated in kinases involved in angiogenesis. Thus anti-angiogenics [79] are also possible treatments. Analysis of kallisto counts also identified the bromodomains of BRD4 as a possible target. A recent study found that Vemurafenib-resistant melanoma was susceptible to BRD4 degradation [63], and bromodomain inhibitors such as OTX015 and BI-2536 have already had some success in treating carcinomas [80]. However, this result was not supported by the HTSeq gene counts or exon junction analysis. Another genomic subtype, triple WT melanoma, had DE mostly affecting Ca 2+ /calmodulin-dependent protein kinase (in addition to RTKs), and we found CAMKK1 expression had a negative correlation with survival ( Fig 9B). These may also serve as a new set of drug targets for this rarer subtype. Further biological implications One interesting result from the clustering analysis was the apparent mutual exclusivity of some kinase clusters in metastatic tumors. In particular, the isoform group involved in cell motility (i.e., Group 3) only had high expression in samples lacking in immune response markers (i.e., Cluster C). It is possible samples with this expression pattern, which includes many RAS mutants, may evade immune detection, which would explain this apparent mutual exclusivity. But it is also possible the low purity of these samples obscures increased expression of Group 3. Additionally, cell differentiation and apoptotic markers were highly expressed in regional soft tissue tumors (i.e., Cluster A) and RAS mutants (Cluster C), but not lymph node tumors (i.e., Cluster B). BRAF V600E mutations are present in Clusters A and B, indicating that in addition to the driver mutation, location of the tumor and isoform content is relevant to discern tumor biology and treatment choices. We conclude that the heterogeneity of sample types displayed in Clusters A-D suggests that the complexity of tumor biology is greater than indicated by driver mutations alone, and that the isoforms in our heatmap may be useful for screening metastatic samples. Limitations The present study has limitations that may impact the interpretations of our data. For example, isoform count estimation is a computational approach to predict isoform expression levels from short read data. Other short read algorithms-using direct alignment approaches such as RSEM, Sailfish, or Cufflinks-may produce different count estimates than kallisto. The accuracy of these algorithms decreases as the number of gene isoforms increases. 
However, one study found that for genes with <15 isoforms, kallisto estimated counts still had >0.95 correlation with simulated "ground truth" counts, excluding very short transcripts [81]. Tested genes in our study had a median of 5 and mean of 6.3 coding isoforms. Nonetheless, we also analyze reads aligned to exonic junctions to verify kallisto findings. Because kallisto requires isoform transcript sequences, our method does not account for novel isoforms. Specialized tools exist for this, such as psiCLASS [82], but this was not the focus of the present study. Here we rely on a fast isoform quantification that relies on an existing genome annotation. We compared our method to a standard approach, DEXSeq, which performs local exon analysis based on the architecture of DESeq2. Our method proved more sensitive to exon splicing events and is computationally faster than DEXSeq for hundreds of samples. 3 rd -gen RNA sequencing technologies such as PacBio [22] and Oxford Nanopore [23] are anticipated to provide more accurate knowledge of isoform sequences, both annotated and novel. Sample artefacts could also affect our results. As indicated by the results presented, computational estimates of isoform counts are highly impacted by sample impurity or 3' fragment bias. We removed problem samples in our study to obtain higher confidence results. Although our quality-controlled sample set had little difference in purity between primary tumor and metastatic samples (two-sided Wilcoxon, p = 0.88), primary tumor samples still exhibited increased 3' bias compared to metastatic tumors (p = 4.4e-4). Estimates of fragment bias could be incorporated into the existing tools to reduce artefactual results. With one exception, the melanoma TCGA samples are not matched, i.e. the primary tumor and metastatic samples do not come from the same patient. However, our sample size is large enough to make meaningful comparisons between sample categories. Summary We have compared differential gene expression and differential isoform expression to address the hidden effect of differential splicing of kinases in metastatic melanoma. We demonstrate novel, plausible stratification of tumors for clinical testing, for example, immune infiltrate vs. cell migration groups. These groups are consistent with presence of a specific driver mutation (i.e., BRAF V600E ), but a mixture of samples could be found in each group. Additionally, we identified a group of isoforms with significant downregulation in metastatic tumors. These include a known suppressor of metastasis (NME1), and may provide a rich source of discovery for additional suppressors. Although we focused here on the kinome in metastatic melanoma, in future work we can expand the analysis to the entire human genome, as well as other cancer types having a rich source of expression data. Further experimental work can confirm links between isoform switching and angiogenesis or other cell processes. The alternate promoter of LIMK1 was also significant before p-value adjustment. (E-F) The 3 rd exon of MAP3K3 (bin 9) and MAPK-binding region of MKNK2 (bin 4) did not test significant with DEXSeq, even before p-value adjustment, despite testing as significant using exon junction alignment. (G-H) The 14 th bin of LMTK3 and 1 st bin of PDGFRA also tested as highly significant. However, these two exons have low expression (median~1 count) so this result is likely due to noise and is unlikely to have biological relevance. (TIF) S6 Fig. SLK isoform expression induces apoptosis. 
A) Plot of uniquely mapping sequence reads for A375 cells showing skipping of SLK exon 13. Original RNA-seq data are from the Sequence Read Archive SRR961660, https://www.refine.bio/samples/SRR961660. B) A bar graph showing annexin V staining over the 72h time course for 2 biological replicates. We see an increase in percent annexin V for both SLK isoforms at 48h and 72h compared to the eGFP-only control. All significant t-tests (*) had p-values < 0.05. All non-significant (NS) t-tests had p-values > 0.05. T-tests for the negative control were not included on the graph.

Although total BRD4 counts did not test as having significant DE between any group of primary and metastatic tumors, isoforms BRD4-203 and BRD4-205 have heightened expression in RAS-mutant metastatic samples. Exon junction analysis could not confirm these particular isoforms from sequence reads. (TIF) S1
Geomaritime-Based Marine and Fishery Economic Development in Maluku Islands

The design of national economic development should never ignore three important aspects, namely integration, sustainability and local contexts. Insufficient comprehension of these three aspects has delayed economic progress in several regions such as Maluku. This region is characterized by an archipelagic geo-profile where marine and fisheries resources are abundant but economic progress is sluggish. To catch up with the achievements shown by regions in the western part of the country, effective efforts must be made in Maluku. This research is aimed at analyzing the three aspects mentioned above as related to the acceleration of marine and fisheries economic development based on the region's maritime geo-profile. In line with this, primary and secondary data were applied in a SWOT analytical approach. Based on the analysis, it was concluded that acceleration of marine and fisheries economic development in Maluku can be carried out through both local and national policies focused on facilitating prospective economic players in making massive investment in the marine and fisheries sector. Among others, this should be done by improving the capacity of Maluku marine ports and directing them to be local economic transmitters, through more effective functions as hubs for ships carrying commodities and products for both national and international markets. This research found that, in line with this, a pre-requirement that has to be advanced by the government is detailed zoning of marine and fisheries resources, supported by a legal umbrella.

Introduction

Based on the UNCLOS (United Nations Convention on the Law of the Sea) 1982, Indonesia is categorized as an archipelagic state. The country meets all UNCLOS criteria set for an archipelagic state, including that marine area coverage is not less than 50% and that distances between islands are not more than 100 miles, with an exception (120 miles) of up to 3%. As an archipelagic country composed of 80% marine area and 20% land area, Indonesia secures a great number of advantages on one side, but also faces threats to its sovereignty and territory on the other. For Indonesia, the marine areas are expected to play important roles, including as a medium to unite the nation, a medium of transport, a medium of defense and security, a medium of maritime diplomacy, and a medium for economic development. Recognizing this, the country promoted a maritime state concept through the Djoeanda Declaration on December 13, 1957. This declaration was then enacted by Law No.
4/60 re.Bodies of Water and UNCLOS 1982.It is worth noting that the 1982 UNCLOS III establishes a comprehensive framework for the regulation and management of the ocean space.The convention consists of 320 articles and nine annexes and covers a broad spectrum of issues relating to regulation of navigation, marine protection, and scientific research and seabed.The salient features of the convention are: (a) The drawing of base lines, (b) A 12 nautical miles territorial sea, (c) Unimpeded transit passage through international straits, (d) An EEZ extending up to 200 nautical miles, (e) Continental shelf regime and rights to manage living and non-living resources of the continental shelf to a minimum of 200 nautical miles.Archipelagic baselines joining the outermost points of the outermost islands and drying reefs of the archipelago, and designate the waters within the baseline as internal waters.The baseline is also the starting point for measuring other claims such as EEZs Indonesia's strategic geo-position is associated mainly with the fact that the country represents a busy crossroad that connects two continents, namely Asia and Australia and two oceans, namely Pacific Ocean and Indian Oceans.In addition, the strategic geo-position of Indonesia also is related with its geographical factors and the social and economic condition, all of which put Indonesia in an important position in the global environment.Such a situation makes Indonesia can affect and be affected by the political and economic stabilities, both regionally and internationally. Later, Indonesia's maritime zones were then established in accordance with international maritime law.Territorial seas, contiguous zones, exclusive economic zones and continental shelves of Indonesia, if do not border with neighboring countries, were unilaterally set in accordance with existing regulations.In the case of borders with neighboring countries, they were established with neighboring countries in accordance with international maritime law.And just recently, Indonesia introduced the so called Global Maritime Fulcrum (GMF).GMF focuses on five key areas, namely maritime culture, marine resources, archipelagic connectivity, maritime diplomacy and naval development.With GMF, Indonesia wants to take advantage of being an archipelagic state / country to basically lead the global economy.The spirit of GMF certainly reflects Indonesia's interests being the world's largest archipelagic country which is, geo-strategically located at the crossroads of major power interests. Regarding vision of being a global maritime fulcrum, Hasjim Djalal [2014] proposes that a maritime country is a country that can optimize the presence of ocean.Even a country with no ocean, as long as it can mobilizes assets including science and technology, to control and utilize ocean resources, it is considered a maritime country. 
Geoffrey [2009] says that in order to be a maritime country with a power on the sea, four components should exist.These are: (1) a community that has the preference of sea (maritime community), (2) maritime resources, (3) geographical position, (4) political will.From geographical point of view, Smith [1986] suggests that this can be approached by assessment which involves three aspects and these are (1) geographical knowledge structure, (2) the application of geography in maritime management, and (3) relationship between study fields, for example between international studies and marine environmental impacts.From geography perspective, six components should exists in maritime fulcrum.These are maritime history, maritime resources, maritime social-economy, maritime culture, geoliteration, and maritime global constellation. Indonesia to some extent already possesses basic components to be a maritime country.However, not all of the potential has been explored.The development of marine and fisheries for example, has not been tackled seriously by the government or local governments (provincial and district / city).Marine and fisheries development still faces a number of challenges such as IUU fishing, marine pollution, overexploitation, resource degradation, use conflicts and others. There are a number of approach in developing fishery management.Marine management is that the relevant marine social organizations guide and restrict human marine development behavior by political, economic and public opinion means to protect marine environment, coordinate balanced development costal and inland.Cuif [2003] then applies an approach that is meant to promote sustainable, stable and coordinate implementations of human marine development activity and finally realize the target of harmonious co-exist between people and sea.Following such a philosophy, Huang and Shaw, 2004 adopted simultaneous equations in their assessment of the EKC using time-series data from Taiwan.The second method is referred to as a vector auto-regression (VAR) model, and was proposed by Sims as a simplified form of dynamic structural equations and a simpler alternative.And, considering the broad range of aspect characterizing fishery management, Giddens [2000] suggests that comprehensive and integrated approach should be applied, for example mention the importance of integrating environment, ecology and sociology. Maluku is a good representative of Indonesia in the context mentioned above.This Province is an archipelagic province with almost unlimited marine resources but where all advantages have not brought significant positive impact to the people there.Recognizing this, the paper aims at analyzing aspects related to acceleration of marine and fisheries economic development based on the region's maritime geo-profile in the province. The Methods The method used in this research is descriptive quantitative research with survey method.The survey, intended to map and identify field data and information, was designed to conform the research objectives and done in the location covered by the study. The data used in this study are primary and secondary data.The methods used in collecting data was interview with respondents that were drawn through the purposive sampling and snowball sampling techniques. 
Following the snowball sampling technique, information secured at the initial step of the research was used to develop questions for the next respondent. This was iterated until the information was sufficient for analysis. Figure 1 shows the snowball sampling process diagrammatically.

Fig 1. Snowball Sampling Method

To improve the validity of the data, validity tests were also performed, following the procedure described by Sugiyono [2004]. From this test, instrument accuracy was measured. Validity was calculated by correlating each item score of the respondents' answers with the total score; the total score is the sum of all items. The value of this correlation was then compared with the critical value at the significance levels of 0.05 and 0.01. Validity is presented as a coefficient, namely the validity coefficient; the error test on the measurement was based on the criterion r ≥ 0.30 [Azwar 2009], and a statement is considered valid if its validity coefficient is > 0.300 [Kaplan and Saccuzo 1991]. The formula is the Pearson product-moment correlation:

$r = \dfrac{n\sum XY - (\sum X)(\sum Y)}{\sqrt{\left[n\sum X^{2}-(\sum X)^{2}\right]\left[n\sum Y^{2}-(\sum Y)^{2}\right]}}$

where X = item (variable) score, Y = total variable score, and n = number of respondents.
Furthermore, the reliability test determines whether the questionnaire can be considered a consistent indicator of the variable; a questionnaire is considered reliable if a respondent's answers are consistent. Cronbach's alpha is used when the scores are on scales (e.g., 1-4, 1-5) or score ranges (e.g., 0-20, 0-50). The formula for Cronbach's alpha is

$\alpha = \dfrac{k}{k-1}\left(1-\dfrac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{t}^{2}}\right)$

where k is the number of items, $\sigma_{i}^{2}$ is the variance of item i, and $\sigma_{t}^{2}$ is the variance of the total score. The indicators of reliability are categorized as follows: 1. if the calculated alpha or r equals 0.8-1.0, reliability is good; 2. if the calculated alpha or r equals 0.6-0.799, reliability is acceptable; 3. if the calculated alpha or r is less than 0.6, reliability is not good [Sekaran 2000].
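As a compact illustration of these two checks, the sketch below computes the uncorrected item-total Pearson correlation and Cronbach's alpha for a small matrix of Likert-type responses. The responses are synthetic and the matrix shape is only an example, not the study's actual questionnaire data.

```python
import numpy as np

def item_total_validity(item_scores):
    """Pearson correlation of each item with the total score (item included in the total)."""
    items = np.asarray(item_scores, dtype=float)      # respondents x items
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1] for j in range(items.shape[1])])

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic Likert responses: 12 respondents x 5 questionnaire items
rng = np.random.default_rng(0)
base = rng.integers(2, 5, size=(12, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(12, 5)), 1, 5)

r = item_total_validity(scores)
print("valid items (r > 0.300):", np.where(r > 0.300)[0])
print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))  # > 0.6 acceptable, > 0.8 good
```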
The respondents represent local government and community institutions and other relevant stakeholders, namely academia and the private sector. The analysis tool used is SWOT. Following the SWOT approach, a number of internal factors (strengths and weaknesses) and external factors (opportunities and threats) were identified. Based on value measurements of the strengths, weaknesses, opportunities and threats, an analysis was performed to determine strategies [Saaty, 1987]. Applying the SWOT analysis, four kinds of strategies were identified: (1) S-O strategies, i.e., those that seek opportunities and take maximum advantage of the strength points; (2) W-O strategies, i.e., those applied to overcome weaknesses so that opportunities can be exploited; (3) S-T strategies, i.e., those used to minimize the risk arising from threats; (4) W-T strategies, which are totally defensive strategies, wherein damage due to weaknesses against threats from the external environment is minimized.

Figure 2. SWOT Quadrant

Supporting the Aggressive Strategy (Quadrant I): This relates to a very favorable situation, in which both opportunities and strengths are present. In this case, the strategy to be implemented is to support a policy of aggressive growth (growth-oriented strategy). Supporting the Diversification Strategy (Quadrant II): In this condition a variety of threats exists; however, internally there are considerable strengths. The strategy relevant to this condition is to use strength to optimize long-term opportunity by implementing a diversification strategy. Supporting the Turn-Around Strategy (Quadrant III): In this case there are many opportunities, but there is a significant weakness. For this situation, the appropriate strategy is to handle the problems related to the existing weakness so that the opportunities can be optimized. Supporting the Defensive Strategy (Quadrant IV): This is the most unfavorable situation, where both threats and weaknesses exist. For this situation, a defensive strategy is the most appropriate one.

Result and Discussion
a. Geo-maritime setting of Maluku as associated with marine and fishery development

Maluku Province, with its capital city Ambon, is astronomically located between 2°30'-8°30' latitude and 124°00'-135°30' E longitude (Figure 2.1). Geographically, the province is bounded by North Maluku and the Seram Sea in the north, by the province of Papua in the east, by Southeast Sulawesi and Central Sulawesi in the west, and by the Democratic Republic of Timor Leste, Australia, the Indian Ocean and the Arafura Sea in the south. The total area of 646,295 km2 is composed of territorial waters and a land area formed of 1,412 islands (analysis, 2005). The territorial waters dominate, making up about 90% of the province; the ratio between land area and sea area is 1:9. The largest islands are Seram (18,625 km2), Buru (9,000 km2), Yamdena (5,085 km2) and Wetar (3,624 km2). Maluku Province has very open access for interaction with the surrounding provinces and is also very open to international trade lanes, given its position as a trade link between North and South. The climate of the Maluku islands is a tropical, monsoonal climate, because the area is surrounded by vast seas; the climate is therefore strongly influenced by the ocean. The Maluku region recognizes two seasons, the west (or north) monsoon and the southeast (or east) monsoon, punctuated by two transition periods. The west monsoon in the Moluccas lasts from December to March, with April as the transition to the southeast season. The southeast season lasts on average six months, from May to October, and the transition to the west season falls in November. Seasonal conditions are not homogeneous, in the sense that the prevailing season differs across the area.

Figure 3. Maluku Island

The mangrove ecosystem plays a very important role in supporting the balance of the coastal ecosystem; it functions as a barrier against coastal erosion and as spawning, nursery and feeding grounds for a wide variety of marine life. Development of mariculture and tourism businesses in the surrounding area may lead to destruction of the mangrove forests, resulting in loss of coastal protection and of habitat for coastal marine life. Sea-based resources are being exploited to meet the ever-growing demand; as a matter of fact, the seas have become the last reservoir of resources. Sea-based resources can be divided into at least four categories: (a) hydrocarbons such as oil and gas; (b) food (fish, plankton, salt, seaweed); (c) metals (manganese, copper, gold, coal, tin, etc.); (d) other resources (sand, gravel, calcium and poly-metallic sulphides).

b. Validity and Reliability Tests

From the total of 20 items of external and internal factors in the SWOT analysis, the validity coefficients are > 0.300. Referring to Kaplan and Saccuzo [1991], this means that the factors are valid and can proceed to further analysis. Meanwhile, from the same 20 items, the α value was found to be 0.812, which is greater than 0.6. Referring to Malhotra [2007], this means that the factors are reliable.
c. SWOT Analysis

As suggested in the background, the analysis of geomaritime-based marine and fishery economic development in the Maluku Islands was carried out accordingly. In the SWOT analysis, the following strength, weakness, opportunity and threat factors were identified during the field work. Strength factors were: energy and mineral mining of the seabed, which has a very high economic value (S1); the potential of fishery resources (S2); islands that lie on national and international transport paths (S3); the potential for marine tourism (S4); and marine growth centers for primary development (S5). Weakness factors were: human resources remain low (W1); low investment in the fisheries and marine sector (W2); lack of conservation (W3); conflicts of interest (W4); and natural resource management (W5). Table 1 shows the values of weight, ranking and the respective scores of these strength and weakness factors.

Table 1 shows that the highest scores among the strength factors are the existence of marine tourism economic potential (S4), 0.52, and the abundance of fishery resources (S2), 0.48. Production of economically important fish in the pelagic group is dominated by the following species: (1) Katsuwonus pelamis (skipjack tuna), a large pelagic fish of important economic value whose distribution covers almost all waters of Maluku Province; (2) Euthynnus affinis, a pelagic fish of important economic value; and (3) Rastrelliger spp., a small schooling pelagic fish that is also of economic importance. The fact that there are many islands that can function as paths for national and international transport (S3) scored 0.30; good connectivity between regions in Indonesia will facilitate the movement of people, goods, services and capital. Marine growth centers for primary development (S5) scored 0.30, the potential for marine tourism 0.30, and energy and mineral mining of the seabed, which has a high economic value (S1), 0.30. Indonesia must continue to provide data, both textual and geospatial, in the form of geological, oceanographic, hydrographic and biodiversity maps, as well as on the wealth contained in the waters of Indonesia, especially its seas. In the meantime, Table 2 shows the values of weight, ranking and the respective scores of the opportunity and threat factors.

Table 2 shows that the highest score among the opportunity factors is the geostrategic position comprising three Indonesian archipelagic sea lanes (O1), 0.48. The Indonesian archipelago, with a geostrategic position that includes three Indonesian archipelagic sea lanes (ALKI) and five choke-point regions (the Strait of Malacca, the Singapore Strait, the Sunda Strait, the Lombok Strait, and the Ombai-Wetar Strait), must respect freedom of navigation, which requires the support of a strong defense and security system. Various forms of threat may occur at sea, such as shipping lanes that can be passed by foreign nuclear submarines, vulnerability to acts of armed violence at sea, arms smuggling, slavery at sea, smuggling, human trafficking, destruction of marine resources, theft of underwater cultural heritage, and theft of marine resources. Hence the defense and security doctrine of a "Minimum Essential Force" should be developed, so that the doctrine and posture of defense and security accord with the area of sovereignty and the sovereign rights of Indonesia.
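Bringing the internal (Table 1) and external (Table 2) scores together, the quadrant position in Figure 2 can be computed from the weighted scores. The sketch below is a minimal illustration: only a few factor scores are quoted in the text, so most weights and ratings here are placeholders, and the axis construction (strengths minus weaknesses, opportunities minus threats) is the standard IFAS/EFAS convention assumed here rather than a detail stated in the paper.

```python
def weighted_score(factors):
    """Sum of weight x rating over a list of (label, weight, rating) tuples."""
    return sum(w * r for _, w, r in factors)

# Illustrative entries only; full weights and ratings are given in Tables 1 and 2
strengths     = [("S2 fishery resources", 0.12, 4), ("S4 marine tourism", 0.13, 4)]
weaknesses    = [("W1 human resources", 0.10, 2), ("W2 low investment", 0.10, 2)]
opportunities = [("O1 geostrategic sea lanes", 0.12, 4)]
threats       = [("T1 IUU fishing and smuggling", 0.10, 2)]

x = weighted_score(strengths) - weighted_score(weaknesses)       # internal axis
y = weighted_score(opportunities) - weighted_score(threats)      # external axis

if x >= 0 and y >= 0:
    quadrant = "I (aggressive / growth-oriented strategy)"
elif x >= 0:
    quadrant = "II (diversification strategy)"
elif y >= 0:
    quadrant = "III (turn-around strategy)"
else:
    quadrant = "IV (defensive strategy)"
print(f"internal = {x:.2f}, external = {y:.2f} -> Quadrant {quadrant}")
```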
There are security challenges, both traditional and non-traditional, around the small, outermost and isolated islands. The Indonesian maritime economy should be built not only on the wealth of living and non-living natural resources, but should also develop in the fields of port logistics services for commercial ships and yachts and marine tourism. Authority has been given to the provincial government to manage maritime resources and the small and outermost islands within a radius of 12 nautical miles, while districts and cities have rights to marine products within 4 nautical miles, with measurable involvement of local governments and communities in maritime surveillance. Special attention is needed in the design of relations between the central government, local governments and communities, and in the design of national and regional institutions. The geomaritime development strategy of the Maluku Islands is, in the short term, more focused on the marine and fisheries sector, which is known as a sector that can have an impact on production activities in other sectors (Output Multiplier / OM) and on the improvement of people's income (Income Multiplier / IM). This adds to the belief that the marine and fisheries sector can support the economy of Maluku Province. The effort needed to solve these problems is the involvement of all stakeholders, communities, governments and businesses, in order to accelerate the development of the marine and fisheries sector in the province. Maritime connectivity should be developed to eliminate social and economic inequalities and to serve the various interests shared by local governments and the central government, such as governance, security, trade, education, health, and communication. From the SWOT analysis several threats can be seen, among them trafficking and smuggling. The sea is the main medium for the illegal movement of people and goods because larger shipments can be carried, covert transhipment is possible at sea, and maritime borders are more porous than land and air borders; such trafficking operates through its own network of human intelligence and transnational organization [Karsten, 2011]. There are also threats that damage fisheries resources, so fishery and marine resource conservation should be carried out to promote sustainable development. National marine policy should also refer to six basic principles, namely (1) the archipelagic outlook; (2) sustainable development; (3) the blue economy; (4) integrated and transparent management; (5) participation; and (6) equality and equity. Development in the Maluku Islands should capitalize on marine resources so that they complement and reinforce each other synergistically; the ocean is a very large development asset. However, the complexity of the maritime sector is a characteristic and a fact that must be faced by the stakeholders.

Conclusions

Through the SWOT analysis it can be concluded that the accelerated development of the marine and fisheries economy in Maluku can be achieved through an aggressive strategy, via both local and national policies focused on facilitating prospective economic actors in making massive investments in the marine and fisheries sector. Among other things, this should be done by increasing the capacity of the Maluku sea ports and directing them to become transmitters of the local economy, through more effective functioning as hubs for ships carrying commodities and products to national and international markets. This study found that, in line with this, a pre-requisite that must be provided by the government is detailed zoning of marine and fishery resources.
0.05 and 0.01. The formula is the Pearson Product Moment correlation:

r = \frac{n\sum XY - (\sum X)(\sum Y)}{\sqrt{\left[n\sum X^{2} - (\sum X)^{2}\right]\left[n\sum Y^{2} - (\sum Y)^{2}\right]}}

where X = variable (item) score, Y = total variable score, and n = number of respondents. This analysis was carried out by correlating the item score with the total score; the total score is the sum of all items. Validity is presented as a coefficient, namely the validity coefficient. The error test on the research measurement was based on the criterion r ≥ 0.30 [Azwar, 2009]. A statement is valid if the validity coefficient is > 0.300 [Kaplan and Saccuzo, 1991].

National marine policy should also refer to six basic principles, namely (1) the archipelagic outlook; (2) sustainable development; (3) the blue economy; (4) integrated and transparent management; (5) participation; and (6) equality and equity. Development in the Maluku Islands capitalizes on marine resources that complement and mutually reinforce each other synergistically. The ocean is becoming a very large development capital; however, the complexity of the maritime sector is a characteristic and a fact that must be faced by the stakeholders.

Figure 4. Capture Fisheries of Maluku Island. Source: Faculty of Fisheries and Marine Science, Padjadjaran University.
Table 1. Matrix of Internal Factor Strategy Analysis.
Table 2. Matrix of External Factor Strategy Analysis.
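As an illustration of the item-validity test described above, here is a minimal Python sketch that correlates each item score with the total score and applies the r ≥ 0.30 criterion; the respondent data are hypothetical and the routine is not the authors' original calculation.

```python
import numpy as np

def item_validity(item_scores, cutoff=0.30):
    """Pearson product-moment correlation of each item with the total score.

    item_scores: array of shape (n_respondents, n_items).
    Returns one (r, valid) pair per item, where valid means r meets the
    cut-off used in the study (r >= 0.30).
    """
    totals = item_scores.sum(axis=1)           # Y: total score = sum of all items
    results = []
    for j in range(item_scores.shape[1]):
        x = item_scores[:, j]                  # X: score of one item
        r = np.corrcoef(x, totals)[0, 1]       # Pearson r between item and total
        results.append((round(float(r), 3), bool(r >= cutoff)))
    return results

# Hypothetical responses from 6 respondents on 3 questionnaire items.
scores = np.array([[4, 5, 2],
                   [3, 4, 1],
                   [5, 5, 3],
                   [2, 3, 5],
                   [4, 4, 2],
                   [5, 5, 1]])
print(item_validity(scores))
```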
2018-12-12T09:26:34.549Z
2017-12-27T00:00:00.000
{ "year": 2017, "sha1": "315f43ab6d841d57f77aa4914232d4174a2d1850", "oa_license": "CCBYNC", "oa_url": "https://jurnal.ugm.ac.id/ijg/article/download/27668/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "315f43ab6d841d57f77aa4914232d4174a2d1850", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
225292297
pes2o/s2orc
v3-fos-license
Motives of Training and Sport Routine of Highly Qualified Athletes of the 5-a-Side Blind Football National Sport Team of Russia
This article presents the results of a study on the motivation of highly qualified athletes in the Paralympic sport of five-a-side blind football, carried out during an international friendly tournament in Silvi Marina (Italy) in June 2018. When physical and mental stress reach critical values, the level of motivation becomes one of the main factors determining whether results are achieved. The study of the sports motivation of qualified Paralympic football players of the national team of the Russian Federation began with the definition of a list of motives for playing five-a-side football (sport of the blind). Analysis and generalization of literary sources made it possible to form an extended circle of motives, and interviewing and questioning of current athletes and coaches allowed the list of motives of highly qualified athletes included in the study questionnaire to be determined. The study involved active athletes of the youth and main squads of the Russian national team. The football players represented three regions: Moscow, Moscow Region, and the Republic of Mari El. In total, 13 respondents took part in the survey. The data obtained were statistically processed using the method of average values (calculations were performed using the standard Microsoft Excel for Windows software package).

Introduction
The development of adaptive sports in Russia is uneven at the territorial level. Often several cities, regions or republics cultivate only a single sport; therefore, athletes from two or three regions form the country's national team. The problems of developing Paralympic futsal (blind sports) remain without due attention, and only a few regions are actively involved in resolving them. One of these problems is the methodological content of the sports training programs for the sports reserve in five-a-side blind football. The program should include the best practices in training the national team of the country and be applied in nature. The normative and methodological documentation for this Paralympic discipline is uninformative or completely absent today, which means that there are no guidelines for the development of sports training programs either in the country as a whole or in the regions separately. The methodological material currently in use is formal and borrowed from other sports, often intended for able-bodied athletes. The study draws the attention of trainers and specialists to the need to apply a scientific and methodological approach to managing the training process, not only in the preparation of high-class players of the national team but also of the sports reserve at earlier stages of preparation.

Motives of training and sport routine of highly qualified athletes of the five-a-side blind football national sport team of Russia
In recent decades, the sport of the blind has been developing worldwide at a rapid pace. In the Russian Federation, according to the All-Russian Register of Sports, the sport of the blind includes 203 sports disciplines in 9 sports included in the program of the Paralympic Games (tandem cycling, goalball, judo, athletics, swimming, futsal, skiing, ski racing, biathlon) [1]. The number of sporting events, both international and national, is increasing annually, and the number of participants is growing.
With close cooperation between the federations and the state and public organizations of the disabled, significant work is underway to develop various sports for the visually impaired. Experienced coaches train disabled athletes (hereinafter referred to as athletes) for the national teams of Russia and the Russian Paralympic team. Together with the Russian Ministry of Sports, national and international competitions of various levels are organized and held. Particular attention is paid to working with children from sponsored boarding schools, who undergo rehabilitation in clubs and sections [2]. Moreover, according to the President of the Blind Sports Federation, Lidia Pavlovna Abramova, there is a tendency in Russia toward uneven regional development of the disciplines of the sport of the blind. Five-a-side blind football has not escaped this trend, despite the fact that, along with athletics, swimming and ski racing, it is among the most popular sports for blind and visually impaired people [3,4]. Thus, while in Moscow, the Moscow Region, the Nizhny Novgorod Region, the Republic of Dagestan, the Republic of Mari El, the Republic of Tatarstan and the Khabarovsk Territory much attention is paid to the development of 5 × 5 (B1) indoor football (blind sports) (hereinafter referred to as five-a-side blind football), in other regions this is not observed. Today, the problems associated with the training of qualified coaching personnel, the lack of a special methodology for training athletes, and the insufficient provision of scientific and methodological literature for the preparation of a sports reserve remain unresolved [5,6]. In this regard, for a more effective development of five-a-side blind football, it is necessary to resolve the problems associated with the insufficient development of a scientifically based system for training coaches and athletes in this sport. It also requires refinement and improvement of the methodological content of sports training programs, which should be based on many years of experience in training highly qualified Paralympic futsal players, players of the national team of the Russian Federation, who have demonstrated high results at international competitions in recent years. The sport of the highest achievements is associated with high social significance, public assessment of successes and failures, publicity, and interaction with the media. In stressful situations of competitive activity, under equal training conditions, when physical and mental stress reaches a critical value, the level of motivation and personal characteristics are crucial in achieving the result [7]. A highly qualified athlete enters into complex interactions and relationships with the chosen sport, which in turn places special, specific requirements on physical qualities, behavioral habits, personal characteristics, and sports motivation. Therefore, it is important for a high-class athlete, along with full compliance with the requirements of the sport, to have exceptional sports motivation, which will allow him to realize his potential, achieve high sports results, and become one of the best athletes in his country. In parallel with this study, we carried out work on studying the main motives for playing five-a-side blind football among qualified Italian football players [8]. The prerequisite for this work was the thesis, based on the scientific and methodological literature and coaching experience, that not all athletes who are gifted by nature achieve significant success.
Therefore, determining the motivational features of highly qualified athletes can help the trainer not only in planning sports training with the optimal amount of training and competition load, but also in creating the pedagogical conditions for implementing this training program. Despite the great attention paid to sports motivation by scientists and experts in the field of physical culture and sports, an analysis of the domestic Russian scientific literature showed that the motivation of high-class athletes has not been studied enough. Moreover, the motives for the sports activity of athletes in team types of adaptive sports had not been studied at all before.

Research methods and organization
The study of the sports motivation of qualified Paralympic football players of the national team of the Russian Federation began with the definition of a list of motives for playing five-a-side blind football. Analysis and generalization of literary sources allowed us to form an expanded circle of motives [2,10,14,15], and interviewing and questioning of active athletes and coaches belonging to the country's youth and main Paralympic futsal teams allowed us to determine the list of motives of highly qualified athletes included in the questionnaire for this study (Table 1). Respondents were asked to indicate the degree (in points) of importance of the proposed motives on a 10-point scale (1 point minimum, 10 points maximum). Depending on the degree of significance of the motive, expressed in points, the answers were divided into groups: 9-10 points, "extremely important"; 7-8 points, "very important"; 5-6 points, "quite important"; 3-4 points, "not very important"; 1-2 points, "absolutely not important". The questionnaire was administered in June 2018, during the period in which the international friendly 5 × 5 (B1) football tournament (sport of the blind) was held in Silvi Marina (Italy). The study involved active athletes of the youth and main squads of the Russian national team. The football players represented three regions: Moscow, Moscow Region and the Republic of Mari El. In total, 13 respondents took part in the survey. The data obtained were statistically processed using the method of average values (calculations were performed using the standard Microsoft Excel for Windows software package).

Table 1 (questionnaire, excerpt): "Dear colleagues! The research group of the state-financed institution of the Republic of Mari El "Sports-adaptive school of Paralympic reserve" (Yoshkar-Ola) and the Russian State Social University (Moscow) addresses you. Could you please answer the questionnaire? Please give some information about yourself: your age is _________. Place of residence (country, city) __________________________. Below is a list of motives of qualified athletes for practicing five-a-side blind football; evaluate the importance of each of them on a 10-point scale."

Results and discussion
Questioning of the Russian highly qualified Paralympic futsal players showed that four motives are not significant and are classified in the category "absolutely not important". Within this category, the motive "To quit bad habits, break with bad company, move away from the street" had an average of 2.923. In the group of "absolutely not important" motives, the values of the mode (Mo) were 1 point and of the medians (Me) 2-3 points; the standard error (m) of 0.40 to 0.62 indicates the unanimity of the athletes' opinion and the regularity with which these motives fall into the "absolutely not important" category.
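To make the reported statistics concrete, the following minimal Python sketch computes the mean, mode (Mo), median (Me), standard deviation (σ) and standard error (m) for one motive rated on the 10-point scale; the ratings shown are hypothetical, not the study data.

```python
import math
import statistics as st

def motive_summary(points):
    """Descriptive statistics for one motive rated on a 10-point scale."""
    n = len(points)
    sd = st.stdev(points)                          # sample standard deviation (sigma)
    return {
        "mean": round(st.mean(points), 3),
        "mode": st.mode(points),                   # Mo
        "median": st.median(points),               # Me
        "std_dev": round(sd, 2),
        "std_error": round(sd / math.sqrt(n), 2),  # m = sigma / sqrt(n)
    }

# Hypothetical ratings from 13 respondents for one motive.
ratings = [9, 10, 9, 9, 10, 8, 9, 10, 9, 9, 10, 9, 9]
print(motive_summary(ratings))
```

Skewness (As) and kurtosis (Ex), used in the study to judge how close each motive's distribution is to symmetric, could be added with a statistics library such as SciPy if needed.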
The motive group "Not very important" (3-4 points) is the largest, comprising 15 motives with average values ranging from 3.462 ("Approval and support from important people for me: relatives, friends, other close people") to 4.923 ("I have been engaged in this activity for a long time; I got used to it and can't do anything else"). The group is characterized by similar values of the analytical indicators: the mode ranges from 1 to 5 points and the median from 2 to 5 points, the standard deviation (σ) does not exceed 3.26 points, and the error is not more than 0.9 points. The homogeneity of the motives of this group is confirmed by the indicators of excess kurtosis (Ex) and asymmetry (As), which are close to a symmetric distribution. The motive group "Quite important" (5-6 points) consists of 10 motives, from 5.154 ("To be more attractive to the opposite sex") to 6.692 ("Desire to become a master of sports (master of sports of international class)"). The group is characterized by a symmetrical distribution and close values of the mean, mode and median. The values of the standard deviation and standard error also do not stand out from the general trend. Everything speaks to the homogeneity of the motives in question and the agreement of the respondents regarding them. The "Very important" category (7-8 points) included motives with average scores of 7.154 to 8.769 (listed in order of increasing average value): "The opportunity to express yourself, your abilities, skills, personal qualities", "The desire to improve my abilities, there is no limit to perfection", "It's nice to feel a sense of accomplishment in front of teammates", "Desire to become the champion of the country, Europe, the world and Paralympic games", "Develops character, mental and physical qualities", "Improvement of personal qualities such as endurance, will, mutual assistance, patience", "The opportunity to join the national team and represent my country at international competitions", "High prestige of victories in major competitions" and "Your motive is achievement of success which is constantly supported by intermediate achievements: a goal, a victory, a medal". Despite a slight divergence of the motives in terms of analytical indicators in each individual case, the general characteristic of their homogeneity and regularity of attribution to this group remains. The most significant and relevant motives for the respondents, players of the Russian national team (9-10 points), were the motive "It's nice to experience the joy of the victory", with an average value of 9.0, and the motive "Sport of the highest achievements as a way of material and financial support for myself and my family", with 9.462. In both cases, the median and mode correspond to the average score, the standard deviation is close to unity, and the error was no more than 0.3 points. The distribution indicators Ex and As are close to those of a normal distribution (Table 2).

Conclusions
Summarizing the results, the homogeneity and shared attitude of the Russian national team players toward the motives for playing five-a-side blind football proposed in the questionnaire should be emphasized. The values of the full set of analytical indicators reinforce the conclusion about the regularity of ranking and classifying each motive in the corresponding category of significance. In this regard, the relevance of developing and improving the methodology for the sports training of qualified 5 × 5 (B1) football players (sport of the blind) is beyond doubt.
Summing up the research, attention should be paid to the homogeneity and collective opinion regarding the group of motives in each significance category. Analytical calculations confirm this thesis. The survey results suggest that highly qualified Paralympic blind football players, possessing a significant body of competitive experience, mainly international, are aware of the considerable requirements of the sport in question and appreciate the importance of the correct way of preparing an athlete. In accordance with this, the sports training of highly qualified Paralympic blind football players should be determined by its scientific and methodological content and be based on the international best practices of the best teams in organizing the sports training process. Such an approach will allow not only high-class athletes to realize their potential and achieve the highest results, but also less qualified players to improve their skills and become candidates for joining the national team of the country in the future.
2020-09-10T10:17:12.056Z
2020-09-09T00:00:00.000
{ "year": 2020, "sha1": "db9b2ccf3aa51c71e06cb5ce49577302606d4b4a", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/70863", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1f113fd0b4f6f4715f973c8b2478c723cfb84651", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
257409761
pes2o/s2orc
v3-fos-license
Reaching unreachables: Obstacles and successes of microbial cultivation and their reasons In terms of the number and diversity of living units, the prokaryotic empire is the most represented form of life on Earth, and yet it is still to a significant degree shrouded in darkness. This microbial “dark matter” hides a great deal of potential in terms of phylogenetically or metabolically diverse microorganisms, and thus it is important to acquire them in pure culture. However, do we know what microorganisms really need for their growth, and what the obstacles are to the cultivation of previously unidentified taxa? Here we review common and sometimes unexpected requirements of environmental microorganisms, especially soil-harbored bacteria, needed for their replication and cultivation. These requirements include resuscitation stimuli, physical and chemical factors aiding cultivation, growth factors, and co-cultivation in a laboratory and natural microbial neighborhood. Introduction The planet we know today is largely the result of the microbial activity in the biosphere. Earth's smallest and simplest organisms created the conditions for the development of the vast number of life forms we all know. The microscopic world is even vaster, and its diversity is stunning, but it is very difficult to reach. Even though its existence has been acknowledged for several centuries, it has been very challenging to study its roles. A crucial advance in the study of "the unreachables" arose in the days of Robert Koch at the end of the 19th century. He established a causative relationship between a microbe and its impact on a host (disease). Koch's postulates demanded the presence of a microorganism in pure culture, isolated from the host, to confirm the link between the pathogen and the disease. From this point on, microbes were no longer considered scientific curiosities, but rather modelers of our bodies and Earth's ecosystems (Turnbaugh et al., 2007;Graham et al., 2016;Gilbert et al., 2018). Much more efforts have been taken over the following decades to study microorganisms: these progressed from the description of and fight against the most critical human and plant pathogens, which dramatically improved our quality of life, to the later investigations on the community composition of different environments, the most advanced of which used marker gene or metagenome sequencing (Lane et al., 1985;Lynch et al., 2012). In recent years, sequencing technologies have addressed many environmental and human health-associated issues, such as the analysis of microbial responses to contamination (Hemme et al., 2010), the discovery of novel taxa to be used for bioremediation, the discovery of novel producers of antibiotics (Ling et al., 2015), or revealing the co-occurrence of antibiotic resistance genes in different environments (Li et al., 2015), to name a few. Microorganisms live in virtually any environment, including those considered extreme due to their high temperature, pH, salinity, or concentration of pollutants (Mirzaie et al., 2015;Mehetre et al., 2018;Panda et al., 2018;Power et al., 2018;Maza et al., 2019). The physiological and biochemical potential of microbes living within these extreme environments is enormous. 
Thriving at the limits of life, extremophilic and extremotolerant microorganisms can provide enzymes such as the widely used Taq polymerase isolated from Thermus aquaticus (Brock, 1967;Brock and Freeze, 1969); or uncommon metabolites, such as previously unknown lipids (Schneider et al., 2019), unusual polyunsaturated fatty acids (Řezanka et al., 2019), antioxidants, pigments (Asker et al., 2012), bioactive natural compounds and other secondary metabolites with a wide range of applications Manivasagan et al., 2014). Microbes can also offer improved bioremediation possibilities (Pascoal et al., 2020), can assimilate unusual substrates including toxic compounds, or resist and detoxify several antibiotics (Rettedal et al., 2014;McLain et al., 2016). In order to fully describe these microorganisms and reveal their vast potential, it is necessary to obtain them in pure culture. Moreover, cultivation provides context to the metagenomic data (Nichols, 2007) and helps us verify metagenome-based conclusions on microbial interactions (microbe-microbe, microbe-plant, microbeenvironment). However, bringing environmental microbes to pure culture under standard laboratory conditions has proven to be a very challenging task. Cultivation can be labor-intensive, tiring, timeconsuming, and may not ensure success; but it can be rewarding if all the factors required for microbial growth are included ( Figure 1). Here we discuss some generalities that elucidate the phenomenon of unculturability, with special attention paid to soil, being a habitat that harbors the greatest diversity of microorganisms, to build a foundation upon which to review some of the recent strategies to better reach "the unreachables. " Why do you not grow? If it is alive, no microorganism is unreachable: we just do not know how to recreate their natural environment in order to obtain a pure culture (Watve et al., 2000;Stewart, 2012). With this in mind, the key step toward successful cultivation would be to replicate essential aspects of the microorganism's natural existence as thoroughly as possible ( Figure 1). Some of the environmental variables are easily discovered and can be readily incorporated into cultivation methodologies, but many other factors that influence growth are much more obscure, and including them in cultivation strategies is not as straightforward. The environment in which microorganisms exist is usually different from the one we create for them in the laboratory. Microorganisms live under what Koch (1971) called a "feast and famine existence. " As a consequence, the growth dynamic observed under nutrient-rich laboratory conditions does not necessarily exist in nature, where environmental changes are common and poor nutritional conditions need to be withstood for longer periods of time (Koch, 2001;Pinto et al., 2015). Microorganisms can be categorized by their resource intake characteristics either as oligotrophs or copiotrophs (Meyer, 1994;Fierer et al., 2010). The main distinguishing parameters between these categories, as Ho et al. (2017) states, are their growth kinetics, substrate affinity, and efficiency at substrate utilization. Copiotrophs have higher Michaelis-Menten kinetics and maximal growth rate. Conversely, oligotrophs are slow-growing but have higher substrate utilization efficiency, and thus higher biomass yields per substrate molecule utilized. Oligotrophs thrive in environments with low nutrient flows, but not in substrate-rich/ diverse environments. 
Copiotrophs, on the other hand, can utilize highly concentrated substrates rapidly and react promptly to substrate changes; they nevertheless lack the necessary regulatory mechanism of starvation, and are thus generally unable to grow in nutrient-poor sites (Ho et al., 2017). The proportion of copiotrophs to oligotrophs in the environment, as well as under laboratory conditions, is governed by a dynamic process called succession (Fierer et al., 2010). Microbial communities change over time after they colonize a certain environment. For heterotrophic bacteria, organic carbon can be constantly supplied, i.e., exogenous succession, or present all at once at the initial colonization point, i.e., endogenous succession (Fierer et al., 2010). In the initial stage of endogenous succession, when nutrients are plentiful, copiotrophs are more abundant in the community; oligotrophs become dominant when highly concentrated substrates are depleted (Song et al., 2016). Both the changing environmental conditions in nature and an inappropriate choice of growth conditions in the laboratory hinder the ability of microorganisms to replicate and could thus render them dormant and seemingly unculturable. Do not wake up until it is beautiful outside The low number of microbes cultivated in the laboratory compared with the total number of microorganisms observed under the microscope hinted at the existence of other states in which microorganisms may exist in nature, apart from being alive (replicating) or "dead" (non-replicating). This discrepancy, known as the "great plate count anomaly, " is a large difference, by several orders of magnitude, between the viable plate counts and the total direct microscopic counts (Staley and Konopka, 1985). This phenomenon reveals our failure to isolate all cells from a particular environment in pure cultures. Just as cells wait in a quiescent state for environmental conditions to be favorable again and start replicating (Kaprelyants et al., 1994), they can be waiting for these optimal conditions when deposited in the laboratory environment. Grandly said, microbes can be unreachable because they are "sleeping" (Xu et al., 1982). The term "sleeping cells" encompasses several dormancy or quiescence phenomena that can cause unculturability under laboratory conditions. Dormancy is "any rest period or reversible interruption of the phenotypic development of an organism" (Sussman and Halvorson, 1966), or simply a state of metabolic inactivity as defined by Kell et al. (1998): cells exhibit negligible metabolic activity but can later transit to a growing state. This inactivity can be caused by the advent of unfavorable conditions, for example, the famine period in the dual feast-famine existence. Several dormancy phenomena have been identified, which suggests the existence of a "dormancy continuum, " where some states of dormancy can be deeper than others (Ayrapetyan et al., 2015). The most Frontiers in Microbiology 03 frontiersin.org well-known state of dormancy is sporulation (Morrison and Rettger, 1930;Keep et al., 2006), in which some bacterial and fungal cells form spores as a survival strategy and outlast deleterious conditions. Spores then germinate when environmental conditions become favorable again. Another dormancy-related phenomenon is that of "persistent cells, " first coined by Bigger (1944). This phenotype was already described in a study by Hobby et al. (1942), who observed that after exposing an infection-causing community to penicillin, 1% of the cells persisted. 
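The copiotroph-oligotroph contrast described above can be made concrete with Monod (Michaelis-Menten-type) growth kinetics; the following sketch uses illustrative parameter values only (assumptions, not measured constants) to show why oligotrophs outgrow copiotrophs at low substrate concentrations and vice versa.

```python
def monod_growth_rate(mu_max, ks, substrate):
    """Monod specific growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * substrate / (ks + substrate)

# Illustrative parameters only: copiotrophs with high mu_max but low substrate
# affinity (high Ks), oligotrophs with low mu_max but high affinity (low Ks).
copiotroph = dict(mu_max=0.60, ks=50.0)   # 1/h, mg C/L
oligotroph = dict(mu_max=0.05, ks=0.5)    # 1/h, mg C/L

for s in (0.1, 1.0, 100.0):               # substrate concentrations in mg C/L
    print(f"S={s:>6}: copiotroph mu={monod_growth_rate(substrate=s, **copiotroph):.4f} 1/h, "
          f"oligotroph mu={monod_growth_rate(substrate=s, **oligotroph):.4f} 1/h")
```

With these hypothetical numbers the oligotroph grows faster below roughly 1 mg C/L, while the copiotroph dominates once the substrate is plentiful, which is one reason an inappropriately rich medium can select against the very organisms one is trying to isolate.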
Persistent cells are non-growing phenotypic variants, completely dormant cells or cells selectively inactivating genes, frequently occurring in bacterial and fungal biofilms as small subpopulations (Harriott, 2019). They usually appear during the stationary phase or, rarely, in the exponential phase, and exhibit high tolerance to antibiotics (Wood et al., 2013). They avoid the antibiotic's effects without undergoing genetic changes, so they play a significant role in population survival and biofilm re-creation (Lewis, 2010). In environmental biofilms, they create a subpopulation that supports biofilm survival against stress conditions such as starvation or other factors causing dormancy (Balaban, 2011; Carvalho et al., 2018). Another common dormancy phenomenon is the viable but non-culturable (VBNC) state, believed to be widespread throughout gram-negative bacteria (Giagnoni et al., 2018). VBNC is a survival strategy that is similar to sporulation but present in non-sporulating cells (Mukamolova et al., 2003). It can be triggered by deleterious environmental changes, such as oxygen or substrate concentration changes or pH changes (Du et al., 2007). The inoculation of cells from their environment into artificial media can potentially trigger such a state. For example, when cultivating oligotrophs, the use of a nutrient-rich medium can lead to cellular death; this may be a result of a depletion of energy for balanced growth or of osmotic shock caused by the sudden intake of non-metabolized complex substrates (Ho et al., 2017). In the VBNC state, cells do not replicate but remain viable after being exposed to stressful conditions (Xu et al., 1982). VBNC cells are also different from metabolically active and dividing cells, since they perform respiration and gene expression at low rates (Shleeva et al., 2004; Li et al., 2014). They are also able to change their adhesion properties and virulence potential (Rahman et al., 1994; Du et al., 2007). Furthermore, their lower metabolic rate, strengthened cell wall and higher peptidoglycan cross-linking confer on them better physical and chemical resistance compared with normally dividing cells. When activity rates are reduced, VBNC cells also reduce their size (Biosca et al., 1996) and increase their surface-to-volume ratio, and, as a consequence, their nutrient intake increases (Baker et al., 1983). This size reduction was observed in Burkholderia pseudomallei and Vibrio cholerae cells when changing from rods during exponential growth to cocci in the VBNC state (Inglis and Sagripanti, 2006; Senoh et al., 2010). Dormancy can be one of the many reasons for unculturability, which biases the picture of the community observed via culturing methods.

Figure: Different activity states of cells in the environment. 1: Active community or population. 2: In the presence of substances such as pollutants and antibiotics, a portion of the population dies and some cells can persist; these cells can then divide again when the substance is removed. 3: Cells in the viable but non-culturable (VBNC) state. If a cell stochastically awakens in growth-permissive conditions (panel 2), the population starts replicating; if not, the cells die off (panel 4). 4: Cells in the VBNC state can resuscitate if environmental conditions become growth-permissive again (panel 1); this represents resuscitation mediated by environmental cues. Created with BioRender.com.
Fortunately, dormant cells are not totally unculturable but can be more challenging to culture because not only must their growth conditions be elucidated but also their resuscitation mechanisms. Two different mechanisms are thought to resuscitate microorganisms from dormant stages: either they depend on some environmental queue to do so or they do not, the latter situation being called the scout hypothesis (Epstein, 2009;Buerger et al., 2012). This stochastic reactivation of growth is the consequence of phenotypic variation within the dormant population (Sturm and Dworkin, 2015), which resembles the idea of the "dormancy continuum" previously mentioned. In both cases, knowing which factors are present in the environments where microorganisms dwell can teach us what is necessary for their effective culturing in the laboratory ( Figure 2). If they wake up stochastically, they still need environment-resembling conditions where they can thrive after awakening. If they need environmental stimuli, then these would need to be included in in vitro cultivations for microorganisms to resuscitate and grow ( Figure 2). The stimuli needed to resuscitate microorganisms from dormancy include physical and chemical stimuli, which can be provided by the environment or by organisms to which yet-unculturable microbes are associated (Zhang et al., 2021). In this sense, the conditions needed to support growth in the laboratory medium can overlap with those to resuscitate microbes from dormancy but both phenomena correspond to different physiological processes, namely the exit from a reduced metabolic existence, after which comes the ability to replicate. Zhang et al. (2021) reviewed the factors that play a role in the resuscitation of VBNC organisms such as the addition of metabolites to minimize oxidative stress, quorum sensing autoinducers or temperature changes. In the present manuscript, the focus will be on those factors aiding the growth of microorganisms in the laboratory environment and what cultivation implies for modern microbiology. A helping hand from the environment -Physical and chemical factors Temperature, pH, osmotic pressure, and oxygen and nutrient concentrations are ever-changing factors in the environment (Puspita et al., 2012). These changing conditions are stress factors that shape the composition of microbial communities as well as the environments in which they live. Soil pH has a major impact since it influences soil chemistry, including the availability of organic matter, redox conditions, and oxygen availability . Energyyielding metabolisms such as microbial respiration (Jin and Kirk, 2018) and the hydroxylated lipid membrane composition (Wang et al., 2016) also respond strongly to pH changes. The impact of pH on soil chemistry even shapes the assembly of microbial communities on a global scale (Feng et al., 2014;Tripathi et al., 2018). In this regard, according to a cross-continental phylogenetic survey of over eighty soils representing a wide range of ecosystems, soil pH was significantly correlated with the overall bacterial community composition (Lauber et al., 2009). The pH has been shown to significantly influence the community structure of other environments such as lakes (Ren et al., 2015), permafrost (Ren et al., 2018), and animal microbiomes (Sylvain et al., 2016). Even small changes in this variable can thwart growth on an artificial medium since some microorganisms have a very narrow zone of pH tolerance (Rousk et al., 2010). Adamberg et al. 
(2003) used a pH-auxostat to study the growth rate decrease of different lactic acid bacterial strains. A pH decrease from 6 to 4.3 was enough to slow down the bacterial growth rate, and ATP production was also lowered. However, microbial growth is not only affected by drastic changes in pH disabling microbial growth, but also by suboptimal pH, at which cell growth is detectable but the growth rate is significantly decreased, as was shown in the cultivation study of Bacillus termoamylovorans when pH changes by ~1.5 from the optimal pH for its growth caused a significant reduction in the growth rate and thus caused a reduction in energy yield per glucose molecule consumed (Combet-Blanc et al., 1995). However, there are cases when the microbes themselves, intentionally and unintentionally, are able to adjust the pH of their near environment, even by excreting basic metabolites or enzymes, and thus shape the microbial community and subsequently determine the interactions between individual species of the consortium (Ratzke and Gore, 2018). Oxygen concentration also shapes the composition of entire microbial niches: whether it is oxygen-requiring algae, microaerophilic or facultatively anaerobic purple non-sulfur photoheterotrophs, anaerobic green-sulfur bacteria, or any chemotrophs, the development of individual subpopulations is impacted based on their relationship to oxygen. Not just the simple dichotomy of aerobic and anaerobic conditions is important, but also small, specific changes in oxygen concentration matter. For instance, Coxiella burnetii, the intracellular pathogenic agent of Q-fever, infects mammalian cells at a microaerobic concentration of O 2 ~ 3%. Omsland et al. (2009) successfully cultivated an axenic culture of Coxiella burnetii on an improved acidified citrate cysteine medium under an oxygen tension of 2.5%-5%. Because of its ability to grow at lower oxygen levels, the hitherto uncultured Coxiella burnetii was able to utilize up to 17 different substrates and form visible colonies in the absence of host cells. Recently, C. burnetii was cultured in a modular hypoxic chamber that maintains the required O 2 concentration (2.5%) without constant airflow, which greatly reduces the evaporation of the medium (Miller et al., 2020). Oxygen concentration also induces oxidative stress caused by reactive oxygen species. Generally, the ideal oxygen conditions depend on oxidative stress sensitivity and the need for a reduced form of a nutrient (Vallejo Esquerra et al., 2017). Since reactive oxygen species often have a lethal effect on cells, it is desirable to reduce their concentration to a minimum. Oxidative stress during cultivation can be reduced by procedures such as autoclaving the agar and the phosphates separately (Tanaka et al., 2014;Kato et al., 2018Kato et al., , 2020 or by adding catalase or pyruvate to media (Bogosian et al., 2000;Tanaka et al., 2014). Another decisive factor that enhances cultivation success is the choice of substrates and notably their concentration. Differing carbon concentrations create niches that are occupied by different bacteria (Eichorst et al., 2011;Wu et al., 2020). In environments prone to drastic environmental changes such as soil or water, selective pressure favors cells with a low metabolic cost existence (Mukamolova et al., 2003). Diluted, low-carbon media favor slow-growers and increase the overall diversity, thus increasing the chances of culturing unknown taxa. 
Low-carbon media have successfully increased the culturing of microorganisms coming from a wide range of environments, such as sea sponges (Karimi et al., 2019; Gutleben et al., 2020), aquatic environments (Imazaki and Kobori, 2010; Sun et al., 2019), or soils (Janssen et al., 2002; Molina-Menor et al., 2021). Aquatic environments offer the advantage of using the water directly from the source as part of the cultivation media. Applying this strategy, Kapinusova et al. (2022) isolated over 100 bacterial species, including several novel species of Alphaproteobacteria, Betaproteobacteria, Flavobacteriia, and even a member of a novel genus of Thermoleophilia. A similar strategy combined with a prolonged incubation time was used for the culturomics of the world-renowned thermal springs of Karlovy Vary and led to the acquisition of several thermotolerant strains of the Bacillota phylum and the isolation of novel microorganisms of the Bacilli, Gammaproteobacteria, and Actinomycetia classes. The dilution-to-extinction technique, based on the cultivation of soil oligotrophic microorganisms on media containing 100-fold diluted nutrients, resulted in the isolation of a wide spectrum of the most abundant soil representatives, and also of members of two previously undescribed actinobacterial lineages (Bartelme et al., 2020). The combination of the above-mentioned factors into one modified cultivation procedure, namely an adjusted N2/CO2 atmosphere (80:20), low substrate concentrations, a temperature corresponding to the original environment, etc., led to the successful isolation of members belonging to the OP5 phylum (Mori et al., 2008), first described by 16S rRNA gene analysis in a hot spring in Yellowstone National Park (Hugenholtz et al., 1998). Since many unreachables are slow-growers, prolonged incubation times can lead to their successful cultivation. Prolonged cultivations, usually coupled with culturing diluted cell suspensions, have proved to be useful in many studies (Eilers et al., 2001; Connon and Giovannoni, 2002; Rappé et al., 2002; Kakumanu and Williams, 2012; Adam et al., 2018; Bender et al., 2020).

Figure 2: The list of factors affecting microorganisms in their environment (inner circle), and strategic approaches reflecting these factors in the cultivation (outer circle). Created with BioRender.com.

In a study by Davis et al. (2005), autochthonous soil cells, as well as non-native cells from constructed consortia, were counted on six different media at 7-day intervals. Cell counts increased even after 12 weeks of incubation. Another successful example of prolonged cultivation, and an important microbiological milestone, was the isolation of the previously uncultured archaeon Candidatus Prometheoarchaeum syntrophicum MK-D1 (Imachi et al., 2020). This extremely slow-growing Asgard archaeon, related to the Lokiarchaeota, was isolated from a 2,533 m deep-water sediment in the Nankai Trough, Japan. Aiming to achieve deep-sea microbial cultivation, Imachi et al. (2020) set up a methane-fed continuous bioreactor in which the enrichment cultivation ran for 2,000 days, resulting in the isolation of this archaeon from a symbiotic culture. The growth of some organisms from cold and oligotrophic environments, such as those isolated from Antarctica, can only be seen in culture after prolonged incubation times (Pulschen et al., 2017; Tahon and Willems, 2017).
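As a rough illustration of the arithmetic behind culturing diluted cell suspensions, the sketch below estimates how many ten-fold dilution steps bring a suspension down to roughly one cell per inoculated well; the starting density and inoculum volume are hypothetical, not taken from the studies cited above.

```python
import math

def dilution_steps(cells_per_ml, inoculum_ml, target_cells=1.0, fold=10):
    """Number of serial dilution steps (each 'fold'-fold) needed so that the
    inoculated volume carries roughly 'target_cells' cells on average."""
    cells_in_inoculum = cells_per_ml * inoculum_ml
    if cells_in_inoculum <= target_cells:
        return 0
    return math.ceil(math.log(cells_in_inoculum / target_cells, fold))

# Hypothetical soil suspension: 1e8 cells/mL, 0.2 mL inoculated per well.
steps = dilution_steps(1e8, 0.2)
print(steps, "ten-fold dilutions ->",
      1e8 * 0.2 / 10**steps, "cells per well on average")
```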
These organisms form very small colonies which often have to be observed under a microscope (Pulschen et al., 2017). Longer incubations in a Petri dish or batch liquid medium can be problematic because the composition of the medium tends to change over time, either because of the action of the organism's metabolism or other processes, such as water evaporation. Even though microbial species with apparently long cultivation times can have these incubations shortened upon subculturing (Buerger et al., 2012), their initial isolation from the natural environment could fail if they are cultured together with a faster-growing species. Slow-growing microorganisms can be disadvantaged mainly when microorganisms from complex consortia are attempted to be cultured together. Physically separating or sorting the microorganisms before their culturing is a helping strategy to overcome this problem and it is discussed further in the text. Periodically varying conditions exist in nature, from the feast and famine cycles (Koch, 1971) to alternating oxic and anoxic periods (Dorofeev et al., 2019) and seasonality (Steiner et al., 2020), all of which can affect microbial communities. Besides culturing in continuous cultures (open systems) or batch cultures (closed systems), cyclic cultivation can be useful for microorganisms with a cyclic type of metabolism. This metabolism is divided into two phases: first, energy and carbon sources are accumulated, which are then used in the second phase to biosynthesize biomass (Dorofeev et al., 2014). Any of the above-mentioned culture parameters (e.g., temperature, oxygen, or substrate concentration) can be the cycling factor in the cultivation strategy (Dorofeev et al., 2014). Some growth-influencing factors can be more enigmatic. One such factor is acoustic vibration, which is useful as a cultivation enhancement in several biotechnological studies (Bochu et al., 2003;Avhad and Rathod, 2015;Huang et al., 2017). By causing (i) cavitation and repairable damage in microbial cells, (ii) loosening of microbial aggregates in liquid cultures, and (iii) an increase in cell membrane permeability, ultrasonic low-intensity waves (∼20 kHz) can increase the substrate intake in microbial cells and subsequently enhance microbial proliferation (Huang et al., 2017(Huang et al., , 2021, and thus can help in the cultivation of the unreachables. Similarly, all the aforementioned culturing parameters can be combined in a high-throughput fashion to describe as much of the community composition as possible using cultivation, with each condition used being "a different aspect of the community's picture. " This approach is referred to as culturomics (Greub, 2012). Bacteria obtained in culture are massively characterized using MALDI TOF-MS, or 16S rRNA gene sequencing (Strejcek et al., 2018;Nowrotek et al., 2019). A helping hand from the surroundings -Carrier particles Many prokaryotes prefer to live attached to surfaces rather than in a dispersed, single-celled planktonic state (Mills, 2003;Flemming and Wingender, 2010;Hemkemeyer et al., 2018). In soils, different particle size fractions (PSFs) have a different impact on the concentration, chemical composition, and availability of organic matter (Christensen, 1992;Hemkemeyer et al., 2018). 
Organic matter is associated with fine-sized particles such as silt and clay; nevertheless, the sand fraction contains most of the free particulate organic matter (POM; Christensen, 2001), and therefore represents the fraction with the highest availability of substrates. The reported reduction in diversity among larger-sized fractions can be caused by low nutrient availability, protozoan grazing, and competition with fungi (Sessitsch et al., 2001). Hence, Hemkemeyer et al. (2018) observed the suitability of different PSFs and their associated POM to harbor microbial communities differing in their structure, functional potential, and sensitivity to environmental conditions. Genetic fingerprinting showed very strong preferences of the observed bacterial communities (up to 56% OTUs) for specific PSFs, while the archaeal populations did not exhibit significant preferences. Members of Bacteroidota and Alphaproteobacteria preferred the sand-sized fraction with POM, while Actinomycetota and Betaproteobacteria preferred fine silt, Planctomycetales clay, and Gemmatimonadales coarse silt (Hemkemeyer et al., 2018). If cells prefer living in close contact with surfaces, it can result in it being difficult for them to grow in liquid media. Surfaces composed of different materials such as glass, steel, or synthetic polymeric substances such as polyurethane foams can enhance the cultivation of biofilm-forming bacteria from different natural environments (Yasumoto-Hirose et al., 2006;Gich et al., 2012;Dellagnezze et al., 2016). Liquid media provide many advantages compared to solid media: they guarantee a homogenous distribution of nutrients and oxygen, while also facilitating the manipulation of cultures. Aiming to combine the benefits of liquid media while meeting the requirements of microorganisms that live attached to surfaces, liquid media can be improved by adding a small amount of gelling agents such as gellan gum, xanthan gum, or carrageenan (Das et al., 2015), glass beads (Nguyen et al., 2005;Droce et al., 2013), or sand (Suman et al., 2019). Adding these supplementary solid agents can help the microorganisms to attach to the surface but still live and divide in the liquid or semiliquid medium. A helping hand from your neighbors -Growth factors Trace elements from the environment, apart from the carbon source, are necessary to guarantee growth in vitro. To give a simple example, genera of the slow-growing Acidobacteriota living in manganese-enriched environments benefit from the addition of this Frontiers in Microbiology 07 frontiersin.org element into their growth medium . Complex matrices, such as soil, harbor many phylogenetically diverse microorganisms (Bahram et al., 2018) that not only participate in important biogeochemical cycles (Louca et al., 2019), but also create conditions that enable the growth of other microorganisms by sharing metabolites and essential growth substances (Schink, 2002). These molecules include those that play a role in quorum sensing, biofilm community cooperation, or in the mutualism between plants and plant-growth promoting organisms (Jacoby et al., 2017), such as rhizobacteria and endophytes (Papik et al., 2020). If a metabolite is available in the environment, microorganisms can lose the metabolic capability of producing it and thus become metabolically dependent on their neighborhood (Pande and Kost, 2017). 
The absence of neighbors in pure culture, and consequently the absence of the necessary metabolites, is then one of the reasons behind unculturability (Pande and Kost, 2017). Bacteria living in certain environments, such as endophytes, benefit from the use of highly specialized growth medium containing the environment's original metabolites (Gerna et al., 2022). With the above said, some bacteria can only grow in a pure medium when in co-culture with another community member, also called a helper strain, which can be a phylogenetically different bacterium or even a different organism such as an amoeba (Boilattabi et al., 2021). Co-culturing can be achieved either by direct culturing of the helper strain together with the bacterium of interest or by using spent supernatants as a proxy for the helper strain (Stewart, 2012). Spent supernatants are the media where the helper strain grew, so the supernatants contain the metabolites that are potentially essential for other members of the community. Microbes can also be cultured together with the host from their natural environments (Knobloch et al., 2019;Lopez Marin et al., 2021). High-throughput co-culture is also now possible with devices such as microscale microbial incubators (Ge et al., 2016), micro-petri dishes (Ingham et al., 2007), microfluidic devices (Frimat et al., 2011;Burmeister et al., 2019), or agarose-based microwell chips , where hundreds of single cells can grow in parallel in individual compartments, sharing metabolites and necessary substances for growth. The latter approach has proved very helpful in culturing bacteria directly related to human health, such as antibiotic-resistant pathogens from the human gut (Versluis et al., 2019). Metabolites from associated bacteria can provide nutrients or trigger other stimuli necessary for growth. As was previously mentioned, when water and nutrients are on the wane and the surrounding conditions are unfavorable, some cells can enter dormancy. Dormant cells can be resuscitated by different resuscitation stimuli (Pinto et al., 2015). There can be many sources of such stimuli, but they often include substances such as amino acids and peptides (Nichols et al., 2008;Pinto et al., 2011), metabolites such as N-acyl homoserine lactones (Batchelor et al., 1997), or resuscitation promoting factors (Mukamolova et al., 2006;Pinto et al., 2013;Lopez Marin et al., 2021). For example, in a study by Bruns et al. (2002), the signaling molecules cAMP and N-(butyryl)-DL-homoserine lactone (BHL) increased total bacterial counts in highly diluted inocula from aquatic environments by several orders of magnitude. Thanks to this effort, the previously uncultured bacterial clone G100, Citreicella manganoxidans, belonging to the Rhodobacteraceae family, was cultured (Bruns et al., 2002;Wirth and Whitman, 2018). Less ambitious but still hopeful results were provided by the follow-up studies of Bruns, where the addition of cAMP led to a 10% increase in MPN values (Bruns et al., 2003). Yet, in several studies where signaling compounds were used for increasing cultivation yields, the influence of cAMP on culturability has been disproven (Pernthaler et al., 2003;Sangwan et al., 2005). The resuscitation promoting factor (Rpf) produced by Micrococcus luteus promotes bacterial resuscitation and growth in the same producing organism (Mukamolova et al., 2006), but can influence taxa distributed along several other phyla, such as Pseudomonadota and Bacteroidota (Su et al., 2018;Lopez Marin et al., 2021;Su et al., 2021). 
This small protein (16-17 kDa) with a lysozyme-like structure (Cohen-Gonsaud et al., 2005) promotes bacterial cell growth even at picomolar concentrations (Mukamolova et al., 1998;Sexton et al., 2015). Rpf-like encoding genes are distributed among other prokaryotic genomes, especially in G + C rich gram-positive Actinomycetota (Nikitushkin et al., 2016), but Rpf-like proteins extend to other bacterial phyla, such as Bacillota and Pseudomonadota . The addition of Rpf during cultivation has resulted in the isolation of novel bacteria, such as organisms of the genera Rhodococcus and Arthrobacter, or of the family Alcaligenaceae (Su et al., 2013(Su et al., , 2015(Su et al., , 2018(Su et al., , 2021. Lopez Marin (Lopez Marin et al., 2021) isolated 51 novel bacterial species belonging mainly to the phyla Actinomycetota, Pseudomonadota, and Bacteroidota on reasoner's 2A (R2A) agar and an agar made from the soil's water-soluble fraction after supplementing Micrococcus luteus Rpf-containing supernatant to soils. Some of these species were members of novel genera, such as Pedomonas mirosovicensis of the family Sphingosinicellaceae, or Solicola gregarius of the family Nocardioidaceae (Lopez Marin et al., 2022, 2023. Spent supernatants containing growth factors have also aided the cultivation of Chloroflexota strains (Xian et al., 2020) or Leucobacter, the growth of which was supported through the action of zincmethylphyrins and coproporphyrins produced by Sphingopyxis sp. (Bhuiyan et al., 2016). Do you want to stay in your neighborhood? The identification of specific substances promoting cell growth is not an easy task. To bypass the search for crucial growth factors, microorganisms can be co-cultured with growth-promoting microorganisms or can be cultivated in situ in the environments they come from Bollmann et al. (2007) and Remenár et al. (2015). In situ cultivation allows for the isolation of microorganisms that are more adapted to the original environment than those originating from the same habitat but obtained on standard agar media (Jung et al., 2016). Several innovative devices have been envisioned to deal with in situ cultivation. In an early attempt, Kaeberlein et al. (2002) developed a diffusion chamber that allowed the nutrients from the natural environment to migrate to the site where bacteria were inoculated. Seawater solidified with agar was sandwiched between two polycarbonate membranes, which allowed the flow of nutrients from the natural environment to the agar while at the same time isolating the inoculum from the natural environment (Kaeberlein et al., 2002). Diffusion chambers have since increased the diversity of culturable bacteria (Bollmann et al., 2007), including those that are difficult to culture, such as members of the phylum Verrucomicrobiota (Pascual Frontiers in Microbiology 08 frontiersin.org et al., 2017) or bacteria highly resistant to heavy metals (Remenár et al., 2015). A similar device to the diffusion chamber is the soil substrate membrane system (SSMS), which allows the growth of colonies over a membrane (made of materials such as polycarbonate), through which the nutrients and growth factors of the natural environment permeate and reach these colonies (Ferrari et al., 2005). Using the SSMS, Ferrari et al. (2005) isolated previously uncultured members of the genera Aminomonas, Nocardia, Pseudomonas, and Enterobacter. 
This membrane system has also been used to recover hydrocarbondegrading bacteria from diesel-spiked polar soils (van Dorst et al., 2016) and was proven to recover rarer bacterial taxa from ice-free polar desert compared to conventional cultivation approaches (Pudasaini et al., 2017). Later modifications of the diffusion chamber have been designed to culture microorganisms in the natural environment but using liquid media instead. One such early device was the hollow-fiber membrane chamber developed by Aoi et al. (2009). It is composed of hollow polyvinylidene tubes where microbes are inoculated and grown. The tubes are porous, so they allow the transport of molecules from the natural environment to the inside of the tube. In comparison with standard petri dish methods, the hollow-fiber membrane chamber technique yielded a higher ratio of novel phylotypes, mostly of Pseudomonadota, Actinomycetota, Bacteroidota, and Spirochaetota, and also resulted in an overall higher diversity of the recovered isolates (Aoi et al., 2009). Another liquid medium-based diffusion chamber is a bioreactor separated from the surrounding environment by a polycarbonate membrane (Chaudhary et al., 2019;Chaudhary and Kim, 2019). With this device, 35 previously uncultured bacteria belonging to the phyla Pseudomonadota, Bacillota, Bacteroidota, and Actinomycetota were isolated; the largest number of novel isolates was obtained when soil extract was used for the preparation of the medium (Chaudhary et al., 2019). Diffusion chambers have been manufactured in 3D printers, which increases their customization possibilities for their use in different applications (Wilson et al., 2019). Diffusion chamber devices have been subject to further modifications. One such example is the so-called microbial trap, which consists of two semipermeable membranes with agar or gellan gum "sandwiched" between them (Gavrish et al., 2008). Filamentous Actinomycetota can access the medium from the outside through the semipermeable membranes. A similar trap was designed by Jung et al. (2013), with the difference that the trap's access size can be modified. This latter trap has been used to culture various microorganisms from extreme environments, such as saline lakes (Jung et al., 2013) and hot springs (Jung et al., 2018). Yet another modification to the microbial trap uses sub-micrometer constrictions, where microorganisms compete to reach a chamber with nutrients going through a thin opening that allows only one bacterium to access and form a pure culture (Tandogan et al., 2014). Both groups of devices, diffusion chambers and microbial traps, have been shown to help reduce cultivation bias by culturing bacterial representatives which metagenomics approaches identified as the main representatives in a specific community (Pathak et al., 2020). A successful high-throughput modification of the diffusion chamber technique is a system of multiple diffusion chambers called the isolation chip (iChip), first coined by Nichols et al. (2010). It consists of an assembly of three flat plates, a central one, and two symmetrical external plates. The external polyoxymethylene plates are provided with a set of 384 holes, since every chamber in the central plate is designed to capture, ideally, just one cell. The inoculated central plate is covered, as with Bollman's device (Bollmann et al., 2007), with a standard polycarbonate membrane, which permits the flow of nutrients from the environment and at the same time keeps the cells inside the chambers. 
The external plates prevent the cells from migrating in and out, and also keep them literally trapped inside their chambers. This chip can then be placed in the natural environment to serve as a cultivation chamber in situ (Berdy et al., 2017). Among others, the Antarctic bacterium Aequorivita sp., possessing antimicrobial and anthelmintic activity, was isolated using the iChip system (Esposito et al., 2018; Liu et al., 2021). The iChip has also aided in the cultivation of antibiotic-producing bacteria, such as the bacterium Eleftheria terrae, which produces the antibiotic teixobactin (Ling et al., 2015). Devices similar to the iChip have been used recently to culture fastidious bacteria. The diffusion sandwich system, a device based on the iChip, led to the successful culturing of Pseudomonas soli, which can produce xantholysin congeners (Pascual et al., 2014), and of the gellan gum-degrading bacterium Luteolibacter gellanilyticus (Pascual et al., 2017). Acuna et al. (2020) used microwell chambers, devices similar to the iChip in design, to culture rhizobacterial populations. Rhizosphere microorganisms were also cultured in situ using the Rhizochip, an acrylic device with holes, in which microorganisms are randomly and not evenly inoculated, and which is placed into a plant rhizosphere (Gurusinghe et al., 2019). All these examples show that when the unreachables stay in their environments, we are more likely to reach them in cultures.

Want to be sorted or isolated before cultivation?

Because of the enormous number of microorganisms awaiting cultivation, it is natural to assume that automation and high-throughput culturability will be more and more common. Organisms in a community can be individually sorted and cultured under a broad range of conditions. Among these sorting approaches is the preselection of cells by their size, shape, or any other characteristic. This results in the division of the total microbial community into several subpopulations consisting of similar microorganisms. Such a separation requires equipment such as optical tweezers, flow cytometry coupled with sorting cell assays, or the integration of both methods (Tewari Kumar et al., 2020). In 2002, Zengler and his team presented a method involving microdroplets of solidified agarose for encapsulating single bacterial cells. The encapsulated cells were then grown in a column with low-nutrient media, and thus were able to grow "together but apart" (Zengler et al., 2002). This high-throughput cultivation method resulted in the growth and successful isolation of newly identified Planctomycetales and Alphaproteobacteria (Zengler et al., 2002). An advantage of this microdroplet cultivation is the broad range of environments to which the technology can be applied. Later, in 2005, Zengler presented an improved version of the method, Diversa's high-throughput cultivation using microcapsules, by which it is possible to obtain more than 10,000 bacterial and fungal isolates from a matrix (Zengler et al., 2005). More recently, alginate microbeads have been successfully used for the high-throughput culturing of bacteria that usually resist cultivation, such as Verrucomicrobiota and Epsilonproteobacteria (Ji et al., 2012), and also to cultivate anaerobes (Börner et al., 2013). Analogously to the co-culture strategy, bacteria grow in the presence of other members of the community, just separated from each other in individual capsules or drops.
Encapsulated microorganisms can then be sorted, for example by using fluorescence-activated cell sorting, according to their phenotype of interest (Eun et al., 2011) or other distinguishing properties such as the presence or absence of growth in each droplet (Zang et al., 2013;Ota et al., 2019), their growth rate (Akselband et al., 2006;Ota et al., 2019), chemotactic motility , or their metabolic activity (Espina, 2020). Focusing on slow-growing microorganisms after sorting can result in the cultivation of rare taxa (Watterson et al., 2020). Jian and coworkers developed a microbial microdroplet culture system, where cells are cultured in water-in-oil droplets placed in Teflon tubes. This system uses up to 200 droplets with a volume of 2 μL, in which microbes are cultured in a high-throughput fashion (Jian et al., 2020). The droplets can be manipulated to meet the needs of different experimental designs. Microbe-harboring beads or droplets (or, in general, sorted cells) can also be cultured in their natural environments, which can be achieved by encapsulating the beads inside an extra polysulfonate membrane to isolate the encapsulated cells from the environment (Ben-Dov et al., 2009) or, more recently, using devices such as the Microbe Domestication Pod (Alkayyali et al., 2021). The pod, which holds agarose microbeads containing encapsulated cells, is placed in the environment, allowing the encapsulated microorganisms to be cultured individually but guaranteeing cell-to-cell communication and the presence of important environmental necessities. Sorted droplets can also be placed in microwell slides in order to facilitate downstream cultivation and analysis (Bai et al., 2014). The sorting of cells in compartments can also be exploited to research cell-to-cell interactions among encapsulated bacteria (Ohan et al., 2019) and biofilm formation or growth (Chang et al., 2015;. The elucidated interactions can cast light upon each cell's needs for growth, and thus on its effective cultivation. Devices such as the SlipChip, composed of two conjoined plates, allow the duplication of a microbial colony so that half of it can be further preserved or cultured, while the other half can be used for destructive analyses (Ma et al., 2014). Another way to sort the unreachables is to separate them while growing on a petri dish. Cultures can be sprayed onto medium plates instead of being spread with a hockey stick. This procedure effectively compartmentalizes microorganisms in droplets, hence the aggregation of cells and interspecies competition, once they land on the medium, is significantly reduced (Huang et al., 2021). Gao and co-workers developed a microbe observation and cultivation array (MOCA) that allows the recovery of microbes on a small scale and does not require any complex equipment (Gao et al., 2013). MOCA involves a petri dish with arrays of oil-covered droplets of cells. The oil covering provides a separation between cells and thus enables the cultivation of multiple separated droplets of cells (Gao et al., 2013). Several marine microorganisms were isolated using this technique, including Pseudoalteromonas spp. and previously uncultured members of the genera Shewanella and Colwellia (Gao et al., 2013). Compared to conventional approaches, MOCA offers an easy system for compact, parallel cultivation and multiple variations of different media on a relatively small scale. 
"Streaking pen" developed by Jiang and his group is a robust, high-throughput method based on a simple streaking and picking strategy to achieve single-cell cultivation on microfluidic streak plates. Using this technique, a previously unknown fluoranthenedegrading Blastococcus species was isolated (Jiang et al., 2016), and so were novel species of bacteria from a marine sediment (Xu B. et al., 2018;Hu et al., 2020). This method has also been used to culture termite-associated bacteria of the genera Burkholderia, Micrococcus, and Dysgonomonas . In general, cell sorting enables the design of complex experiments using just a few plates, and thus represents a great experimental simplification that allows for a better examination of individual subpopulations and, as a result, increases the chances of culturing novel taxa. Let us seek information about the cultivation of the unreachables in the (meta)genome Successful cultivation of just a few novel taxa while adding "vital" molecules to the media, trying different media and cultivation conditions, or the combination of all the above, is a lengthy and material-consuming way to find the requirements for microbial growth of specific taxa, given the vast diversity of the unreachables. Nowadays, the metagenome has become a promising source of information on cultivation needs, since it reveals "who is there and what their roles are" (Remenár et al., 2015;Nowrotek et al., 2019). In other words, why try dozens of media or condition combinations, when each cell's growth requirements can be found in its genome? As was mentioned earlier, a common phenomenon in a community is the loss of the ability to metabolize certain compounds if these are provided by other organisms (Pande and Kost, 2017). Such a gene loss can be ultimately seen in the genome (Carini et al., 2013). Reconstruction of the metabolic pathways through genomic information reveals the bacterium's deficiencies or needs, which can be provided in the medium (Liu et al., 2022). For example, the nutritional requirements of Pelagibacter ubique, most likely the most abundant bacterium on Earth, were determined in part by its absence of genes for assimilatory sulfate reduction and its need for reduced sulfur compounds for growth (Carini et al., 2013). Karnachuk et al. (2020) isolated a thermophilic spirochete thanks to information from a metagenome-assembled genome which suggested the presence of 12 alpha-amylase hydrolases. This bacterium was then cultured using a medium composed mainly of starch (Karnachuk et al., 2020). Metagenomic data can also be used to create co-occurrence network approaches based on network inference techniques in order to model the abundance or roles of specific community members in an environment (Faust and Raes, 2012). These relationships can be exploited in co-culture approaches, which can represent these relationships, e.g., by using spent media from other culturable bacteria in the community (Xian et al., 2020). Information contained in an RNA sequence (metatranscriptome) can be even more useful because it reflects the necessary genes being expressed in a given environment and time. For instance, the metatranscriptome of the leech Hirudo verbana was characterized, revealing the expression of Frontiers in Microbiology 10 frontiersin.org genes coding for sulfated-mucin desulfatases and sialidases (Bomar et al., 2011). A medium with added mucin then allowed the cultivation of a Rickenella-like leech symbiont in vitro (Bomar et al., 2011). 
A metagenome is a very complex collection of information, so discerning specific, individual genomes out of this mixture is often a difficult task, and techniques that provide a link between identity and function can help to discern which specific organisms carry which metabolic activity. One such technique is stable isotope probing (SIP), a method that links certain metabolic capabilities to individual community members. Upon probing with stable isotopes, the metagenome of these community members can be separated and sequenced to reveal their identity (Uhlik et al., 2013). SIP in tandem with metagenomics has helped culture different bacteria with, for example, biodegradative functions. The bacterium Polaromonas naphthalenivorans was isolated in a pure culture after its role in the degradation of naphthalene was determined by SIP (Jeon et al., 2003, 2004). A similar approach was followed for isolating novel phenanthrene- and biphenyl-degrading Ralstonia populations (Li et al., 2019), novel isoprene-degrading bacteria belonging to different genera (Larke-Mejía et al., 2019), or hydrocarbon-degrading bacteria from the sea basin (Mishamandani et al., 2014; Gutierrez et al., 2015) or oil spills (Gutierrez et al., 2013). In these examples, the stable isotope-labeled, or "heavy," molecule used for the biodegradation analysis was also included in the cultivation efforts, but "heavy" genomes could also highlight other requirements that the degrading bacteria may need. Finally, metabolic needs can be elucidated by single-cell genomics (Wurch et al., 2016), which can be boosted with the cell-sorting approaches described earlier. Single-cell genomic information has enabled metabolic reconstruction and aided the isolation of difficult-to-culture organisms such as symbiotic archaea (Wurch et al., 2016). Additionally, Cross et al. (2019), using single-cell genomic data, developed a method to capture specific microorganisms using antibody engineering. These antibodies are designed based on membrane-associated proteins, whose sequences can be found in the genome. The antibodies are labeled with a fluorescent dye, and thus the cells to which the antibody binds can be sorted by flow cytometry and cultivated in different media (Cross et al., 2019).

Conclusion and future perspectives

In recent years, a large number of microbes have been cultured employing the procedures discussed herein. Over the last two decades, in particular, a great deal of effort has been spent to improve culturing work, and many new taxa have been described; in fact, more bacteria have been cultured and described in the first 20 years of the 21st century than in all previous years of microbiological research combined (Figure 3; Parte et al., 2020). The high-throughput sequencing revolution that enabled the analysis of the metagenome has great potential to aid the cultivation progress. There is an unavoidable synergy between culture-independent and culture-dependent knowledge: as our knowledge of metagenomes increases, so does our knowledge of what microbes need to grow. The majority of Earth's environments still harbor mainly hitherto uncultured microorganisms (Lloyd et al., 2018). Just as an ebb primarily uncovers areas close to the shore, and maybe never reveals the perpetually hidden abyss, so the phylogenetically distant cells, or "phylogenetically divergent non-cultured cells" as described by Lloyd et al. (2018), may remain undiscovered.
These unreachables are the real "dark matter" of the microbial world and keep on shaping our planet right under everyone's noses. But in theory, nothing is impossible to culture, and what we do not successfully culture today can be brought to culture tomorrow. Just like the Yellowstone National Park's Obsidian Pool gave us a hint of the then so-called OP5 or OP10 phylum (Hugenholtz et al., 1998), whose members were isolated more than a decade later (Mori et al., 2008; Lee et al., 2011; Tamaki et al., 2011), other environments will reveal their secret inhabitants via culture-independent, omics-based approaches, after which culturing will be applied to turn them into tangible objects of study. But not just simple, low-scale culturing; automated, high-throughput culturomics will be needed. Sorting technologies such as those based on microfluidic systems could already be coupled with machine learning systems (Srikanth et al., 2021) so that growth needs can be elucidated and a high number of microorganisms can be cultured in the shortest time possible. As with many other big questions that still afflict us, it seems that machines and algorithms are coming to the rescue. Many microbes will thus be unreachable no more, and that time is fast approaching. As mentioned in the previous section, there may be an interest in culturing a specific organism from the environment, and techniques have been proposed to tackle this challenge (Cross et al., 2019). Throughout the course of microbiological research, several taxa have been categorized as "most wanted" because of the important roles they play, such as in the human microbiome (Almeida et al., 2016) or other environments in the biosphere (Steele et al., 2011). At the same time, some microorganisms exist as obligate symbionts: their genomes have been reduced because of the loss of functional genes, and these lost functions can be guaranteed by the host (Moran and Bennett, 2014). Entire bacterial phyla such as the Candidate phyla radiation are thought to be composed mostly of symbionts (Castelle and Banfield, 2018). Should we force them to try to exist by themselves in a pure culture, despite their loss of basic structural features such as cell wall components and their extremely small genomes (<200 kbp), with maybe no possibility of growing away from their host, or should we rather make the regulations of what is required to propose new prokaryotic species more flexible? Cultivation is made difficult not only by the intricacies needed for the growth of microorganisms in the laboratory but also by placing unrealistic requirements on their study through culturing. There are calls to reform the one species-one publication formula (Rosselló-Móra and Amann, 2015) and, due to the diversity of bacteria in the environment, it is not difficult to imagine that it may be impossible to describe all bacterial species using the polyphasic approach employed today for circumscribing new species, even if all microbes were culturable. Recent estimates suggest that the number of different bacterial taxa in the biosphere (established with a 16S rRNA gene similarity cutoff of 97%) is 2.2-4.3 million (Louca et al., 2019). New bacterial descriptions are also constrained by journal capabilities (Tamames and Rosselló-Móra, 2012). In order to give an identity to the mass of uncultured microorganisms, the availability of a pure culture is maybe no longer necessary.
High-quality genome sequences are being proposed as nomenclatural types (instead of viable axenic cultures in culture collections), and a new classification system, the SeqCode, is being developed to exist (at least temporarily) parallel to the International Code of Nomenclature of Prokaryotes (ICNP) (Whitman et al., 2022). The requirement of the ICNP for a pure axenic culture as the only possible type material for naming new microbial species has been criticized as self-limiting, hindering microbiological research and raising the costs associated with naming new taxa. If the "dream of a phylogenetic system" was materialized upon the bases of genomics (Woese, 1992), the development of a reliable system based on genomics must be pursued and supported. These recent developments in prokaryotic systematics will not negatively affect the importance of cultivation, because microbiology is a science whose reach extends far beyond taxonomy and the basic knowledge of microbes. It is expected that, by 2024, the economic value of the global microbial market will exceed USD 675.2 billion (Estevinho et al., 2020). These figures are reached by deploying organisms in high-value biotechnological industries, which produce the goods mentioned in the introduction. The "dark matter of life" conceals not only the answer to "who is there," but also to "what are they doing." This second question may be the most relevant one for the advancement of technology. The future of cultivation is one that begins with its strengths: the ability to select and culture microorganisms relevant for their functions and technological potential. But we must be open-minded enough to not limit our horizons to just apparent and obvious applications: a world of possibilities can be opened with each microorganism isolated and studied.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Figure 3. The number of validly published species within the last 70 years (Parte et al., 2020).

Glossary

bacterial persisters: microorganisms that survive exposure to a given antibiotic/action that limits their cellular division, and have the capacity to replicate once it is removed (Zhang, 2014)
community: a multi-species group of organisms, living together in a shared environment and interacting with each other (Konopka, 2009)
copiotrophs: organisms adapted to utilize available resources promptly when available; usually associated with nutrient-rich environments (Koch, 2001); they have higher Michaelis-Menten kinetics and maximal growth rates
co-culture: biological systems (cultivation strategy) where two or more different microbial populations coexist with some degree of contact between them (Goers et al., 2014; Rosero-Chasoy et al., 2021)
culturomics: high-throughput, cultivation-dependent methods describing an environment's microbial community (Lagier et al., 2018)
dormancy: a survival strategy characterized by a reduction in metabolic activity, usually undetectable under laboratory conditions
iChip: an isolation chip consisting of a customizable set of chambers, where environmental cells are kept separately and subsequently cultivated in situ (Nichols et al., 2010)
in situ incubation: a cultivation method leading to the facilitated growth of cells that are difficult to cultivate ex situ, usually performed in the environment that the cell originated from (Nichols et al., 2010; Epstein, 2013)
metabolomics: metabolic profiling that links genotype and phenotype based on the targeting of small molecules (peptides, amino acids, nucleic acids, etc.; Zhang et al., 2012)
metagenomics: the study of the collective genomes of all microorganisms found in a given site/sample (Handelsman et al., 1998)
metatranscriptomics: culture-independent microbial profiling based on gene expression (Filiatrault, 2011)
microbial succession: change in the composition of microbial communities over time after the colonization of a new environment (Fierer et al., 2010)
microbiome: an entire habitat, including the microorganisms, their genomes, and the surrounding environmental conditions (Marchesi and Ravel, 2015)
mixotrophs: organisms relying on both heterotrophy and autotrophy (Crane and Grover, 2010)
oligotrophs: organisms capable of growing in low-nutrient environments/media (0.5-15 mg of C/L) and, conversely, unable to grow on substrate-rich media immediately after removal from their natural environment (Cho and Giovannoni, 2004)
microbial population: a collection of cells of one species living in the same environment and interacting with each other (Thompson, 2020; Behera et al., 2022)
resuscitation-promoting factors: factors enabling the cell division or resuscitation of dormant cells (Hett et al., 2008), usually referring to a protein or proteins of various gram-positive bacteria (genera Mycobacterium and Micrococcus; Mukamolova et al., 2002, 2006)
single-cell sorting: a sorting approach based on the compartmentalization of a heterogeneous mixture of particles/cells of different types (one or more) into different volumes (Seeger et al., 1991; Grover et al., 2001)
viable but non-culturable (VBNC): a cellular survival strategy (Giagnoni et al., 2018) in which cells retain indicators of metabolic activity while being incapable of sustaining cellular division on media that normally support the growth of the microorganism (Rice et al., 2000)
Morphometric evaluation of great vein of Galen and its clinical implications The Galenic venous system plays a vital role in the drainage of blood from deeper parts of the brain. This venous system is contributed by many major veins. These veins are located closer to the pineal gland making the surgical approach in this region difficult. Any accidental injury or occlusion of the vein of Galen could lead to devasting results. Thus, studying the dimensions of the vein of Galen is more important. Hence, we aimed to evaluate the morphometry and trajectory to the vein of Galen. About 100 computed tomographic venography records were evaluated and the length, diameter of vein of Galen, angle between straight sinus and vein of Galen and distance from internal occipital protuberance and roof of fourth ventricle to vein of Galen were studied. The mean length and diameter of vein of Galen were 9.8±2.7 and 4.08±1.04 respectively. The mean angle between straight sinus and vein of Galen was 64.2°. The mean distance between external occipital protuberance and roof of fourth ventricle to vein of Galen were 52±6.9 and 33.3±4.5 respectively. No significant morphometric differences were observed between the age groups as well as between the sexs. The results obtained from this study may be helpful for the neurosurgeons in better understanding of the anatomy of the Galenic venous system and to adopt a safe surgical approach to improve the efficacy of the surgeries of the pineal gland and also in the region of vein of Galen. posed by the authors, still the removal of the pineal tumor without damaging the vein of Galen and its tributaries is a nightmare for the neurosurgeons. So, a thorough knowledge about the anatomy of the vein of Galen along with its tributaries and the pineal region is essential for the neurosurgeons to avoid complications [4]. The vein of Galen malformation, though it is a rare condition accounting only about 1% of intracranial venous system malformations, has proven to be fatal if left untreated [5]. This malformation may result in high cardiac output due to hyperdynamic circulation leading to death within weeks of life. The treatment options of vein of Galen malformations are continuously evolving with the advancement in the intervention radiological techniques [6]. The amount of literature available for vein of Galen dimensions are significantly less. The aim of the present study is to determine the morphometry of the vein of Galen and trajectory to the vein of Galen from nearby landmarks using computed tomographic venography (CTV). Thus the findings of the present study would serve as the much needed data to the neurosurgical approach to the vein of Galen and also provides reference points that can be used during neurosurgical interventions. Materials and Methods The current study was a descriptive study with data collected retrospectively. The study was done in the Department of Radiology and Anatomy in Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry between 2020-2022. The normal CTV studies of patients evaluated for various conditions of the brain were taken from the Department of Radiology through Picture Archiving and Communication System (PACS), from the year 2017-2020. Approval from both the Departmental Postgraduate Research Monitoring Committee and the Institute Human Ethics Committee was obtained. 
Inclusion criteria

All cerebral CTV studies done during 2017-2020, in the age group of 18 to 60 years, which were reported as normal by a neuro-radiologist, were included in the study.

Exclusion criteria

• Poor quality of CTV due to motion artifacts or inadequate contrast opacification
• Pathologies affecting the normal venous anatomy, such as venous thrombosis or tumor invading the vein of Galen

CTV records of patients satisfying the inclusion criteria were used in the study. The CTV studies of the brain were retrieved from PACS, Department of Radiology, after obtaining permission. The CTV studies from 2017-2020 were analysed. The CTV data sets were loaded on a Siemens Syngo Via server workstation, the venographic images obtained in the axial plane were reconstructed by multiplanar reformation, and the morphometric measurements were taken in the appropriate plane. The measurements, namely the length and width of the vein of Galen and the angle formed between the vein of Galen and the straight sinus, were taken. The trajectory to the vein of Galen, the distance between the vein of Galen and the external occipital protuberance (EOP), and the distance between the vein of Galen and the roof of the fourth ventricle were noted. All measurements were done by a single investigator thrice, and their mean was taken as final. The first consecutive 25 CTV images evaluated by the principal investigator were measured again after a month, and the intraobserver variability was checked. Another 25 CTV images were selected randomly and measured independently by another observer, and the interobserver variability was checked. The demographic data, such as age and sex, were noted. The records were divided into five groups based on age as ≤20, 21-30, 31-40, 41-50, and 51-60 years, and the data were evaluated.

Statistical analysis

Continuous data were analysed for normality of distribution using the Shapiro-Wilk test. An independent Student's t-test was done to compare sex differences in the above-mentioned parameters. One-way ANOVA was done to compare the variables between different age groups. A post-hoc Tukey test was done to compare each pair of age groups. A P-value<0.05 was considered significant. Analyses were performed using IBM SPSS Statistics for Windows, Version 19.0 (IBM Co., Armonk, NY, USA).

Dimensions of vein of Galen

The vein of Galen was visualized in all 100 CTV records. The mean length of the vein of Galen was 9.8±2.7 mm with a maximum length of 15.9 mm and a minimum length of 3.5 mm, whereas the mean diameter was 4.08±1.04 mm with a maximum diameter of 6.8 mm and a minimum diameter of 2 mm. The CTV image showing the measurement of the length and diameter of the vein of Galen is given in Fig. 1. The mean angle between the straight sinus and the vein of Galen was 64.2°, with a maximum angle of 99° and a minimum angle of 24°. The CTV image showing the measurement of the angle between the vein of Galen and the straight sinus is given in Fig. 2. The dimensions of the vein of Galen were compared between the sexes and the age groups. However, there was no significant difference in dimensions between the sexes or the age groups. The overall dimensions and the comparison of dimensions of the vein of Galen between sexes and age groups are given in Table 1.

Trajectory to vein of Galen

For the trajectory to the vein of Galen, the mean distances from the two reference points, the EOP and the roof of the fourth ventricle, were 52±6.9 and 33.3±4.5 mm, respectively. The measurements are shown in Fig. 3. There was no significant difference between the sexes or the age groups. The overall trajectory measurements of the vein of Galen are given in Table 2.
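The statistical workflow described above was run in SPSS; the following Python sketch reproduces the same sequence of tests (Shapiro-Wilk, independent t-test, one-way ANOVA, post-hoc Tukey) on a hypothetical per-record table, purely as an illustration. The file name and column names are assumptions, not part of the study.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-patient table with one measured parameter and demographics.
df = pd.read_csv("vein_of_galen_ctv.csv")   # assumed columns: length_mm, sex, age_group

# Normality of the continuous variable (Shapiro-Wilk).
print(stats.shapiro(df["length_mm"]))

# Sex comparison (independent two-sample t-test).
male = df.loc[df["sex"] == "M", "length_mm"]
female = df.loc[df["sex"] == "F", "length_mm"]
print(stats.ttest_ind(male, female))

# Comparison across age groups (one-way ANOVA), followed by post-hoc Tukey HSD.
groups = [g["length_mm"].values for _, g in df.groupby("age_group")]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(df["length_mm"], df["age_group"], alpha=0.05))
```

The same calls would be repeated for each parameter (length, diameter, angle, and the two trajectory distances).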
Discussion

In the present study, about 100 CTV records were studied and the dimensions of the vein of Galen were noted. Though many authors across the globe have tried studying the dimensions of the vein of Galen, there is a paucity of available literature, especially of radiological studies. The various studies showing the dimensions of the vein of Galen are summarised in Table 3. In a cadaveric study by Chaynes [4], the mean length of the vein of Galen was 10.33 mm. According to the author, the vein of Galen was found to be shorter if it formed above the splenium of the corpus callosum and longer if it formed below the splenium of the corpus callosum [4]. Ono et al. [1] measured the mean size of the vein of Galen at the point of its termination, which was about 7.3 mm. In other cadaveric studies, by Browder et al. [7] and Yamamoto and Kageyama [8], the mean lengths were 15 mm and 12 mm, respectively. The mean length of the vein of Galen in the present study using CTV was found to be 9.8±2.7 mm. In a study using magnetic resonance venography (MRV), the mean diameter of the vein of Galen was 4.39 mm using three-dimensional spoiled gradient recalled (SPGR) echo MRV, and it was 4.04 mm using two-dimensional time-of-flight (TOF) MRV [9]. The great vein receives many tributaries as it courses, increasing its width along the course. The mean diameter of the vein of Galen in the present study using CTV was found to be 4.08±1.04 mm. In our study, we measured the angle between the vein of Galen and the straight sinus using CTV. The mean angle was 64.2°. In a cadaveric study, Chaynes [4] measured the angle between the vein of Galen and the straight sinus using a goniometer in 25 cadavers. The angle ranged from 16° to 117°, with a mean of 75.25°. Further, the author classified the angle as acute, right, or obtuse based on its range. The angle was acute in the range of 16°-62°. This acute angle was noticed by the author in 9 cases, when the flow of the vein of Galen was along the curve of the splenium of the corpus callosum. The angle was considered a right angle when it ranged from 75° to 104°. This type of angulation was seen in 15 cases, where the vein of Galen was not in close proximity to the splenium of the corpus callosum. An obtuse angle was noticed when the vein did not follow the splenium of the corpus callosum but rather had a horizontal course. This angle was about 117°, seen in a single case. The angle also varied depending on the location of the apex of the tentorium [4]. The tentorial apex may be located either above or underneath the corpus callosum, making the angle acute or flat, respectively [1]. In another cadaveric study, by Ghali et al. [10], the minimum angle was 60° and the maximum angle was 80°. Further, in this study we used two different reference points from which the distance to the vein of Galen was measured. The first reference point used was the EOP. The mean distance from the EOP was 52±6.9 mm. The second reference point used was the roof of the fourth ventricle. The mean distance from the roof of the fourth ventricle was 33.3±4.5 mm. These distances give an idea of the approximate location of the vein of Galen. This would help neurosurgeons in planning the approach to the vein of Galen during various neurosurgical procedures, such as the removal of a pineal tumour.
The various surgical approaches used for the removal of pineal tumours are the supracerebellar-infratentorial, occipital interhemispheric transtentorial, posterior transcallosal, posterior transventricular, and combined supra- and infratentorial approaches. The approach is selected based on the location of the pineal tumour [1,3,4]. The various surgical approaches to pineal gland tumours are given in Table 4. Though many approaches are described in the literature, the supracerebellar-infratentorial approach is the most effective, safe, and commonly used, because this method provides complete removal of the tumour with low morbidity and a good histopathological diagnosis [11]. The occipital interhemispheric transtentorial approach is found suitable for vascular malformations [11]. This occipital transtentorial approach gives good visibility of the tentorial notch, paving the way for dissection of large tumours. Even though a lot of approaches have been suggested, iatrogenic injury to the deep veins and their tributaries is inevitable, leading to disturbances in consciousness, visual impairment, hemiplegia, and death. Thus, it is imperative to know the skull base vascular anatomy before any surgical approach [12]. An accessory straight sinus called the falcine sinus appears at the falx cerebri around the fifth month of gestational life (Fig. 6). This falcine sinus appears transiently above the straight sinus, connecting the superior sagittal sinus and the vein of Galen [13]. The remnant of this falcine sinus appears as a bulbous prominence. Widjaja and Griffiths [13] reported two such bulbous prominences in their study using MRV. These prominences are normal variants in the absence of a fistula, provided the patient is asymptomatic [13]. This underlines the importance of differentiating these bulbous prominences in the absence of a fistula, as they mimic an aneurysm of the great vein of Galen.

Vein of Galen malformation and aneurysm

The vein of Galen is developmentally the remnant of the caudal portion of the median prosencephalic vein of Markowski. This median prosencephalic vein usually regresses, but in the case of a malformation, instead of regression there is continuous enlargement of the median prosencephalic vein, leading to the aneurysmal formation seen in the vein of Galen malformation. Endovascular embolization using liquid embolic agents such as Onyx or N-butyl-cyanoacrylate, along with multidisciplinary approaches such as shunting for hydrocephalus after embolization, has improved the prognosis [14]. There was an increased incidence of post-procedure haemorrhage with the transvenous approach when compared with the transarterial approach. This is because of the connections of the bulbous enlargements in malformations to the superficial subependymal veins through choroidal veins, or to the deeper veins through underdeveloped internal cerebral veins. Stereotactic radiotherapy can be used in older people after staged endovascular embolization [5]. An aneurysm in the vein of Galen has more or less the same presentation as a vein of Galen malformation. This makes it necessary to differentiate between the vein of Galen malformation and aneurysm. Aneurysmal dilatation of the vein of Galen occurs in a normally developed vein of Galen as part of a dural arteriovenous fistula. There is dilatation in the vein due to blockage or stenosis of the outflow [14]. With the advancement in neuroradiological techniques, diagnosis has become faster and easier.
Multimodality management involves surgical removal and endovascular embolization [15]. Thus, the findings of the present study give neurosurgeons an approximate idea of the location of the vein of Galen and its dimensions, which may be helpful when approaching malformations or aneurysms during endovascular procedures. In conclusion, the anatomy in and around the vein of Galen is very complex, so knowledge of the morphometry of, and trajectory to, the vein of Galen is of utmost importance for neurosurgeons in choosing the most suitable surgical approach to clinical conditions arising in and around the vein of Galen.
Mild Pulmonary Hypertension Is Associated With Increased Mortality: A Systematic Review and Meta‐Analysis Background Recent studies have demonstrated a continuum in clinical risk related to mean pulmonary artery pressure that begins at >19 mm Hg, which is below the traditional threshold used to define pulmonary hypertension (PH) of 25 mm Hg. Because of the implications on patient diagnosis and prognosis, the generalizability and validity of these data need further confirmation. Methods and Results Databases were searched from inception through January 31, 2018, to identify studies comparing all‐cause mortality between patients with mildly elevated mean pulmonary artery pressure near but <25 mm Hg versus the referent group. The meta‐analysis included 15 nonrandomized studies and 16 482 patients (7451 [45.2%] with measured or calculated mean pulmonary artery pressure of 19–24 mm Hg by right heart catheterization [n=6037] and echocardiography [n=1414] [mild PH]). The mean duration of follow‐up was 5.2 years. Compared with the referent group, mild PH was associated with an increased risk of mortality (risk ratio, 1.52; 95% confidence interval, 1.32–1.74; P<0.001; I2=47%). Secondary analysis using risk‐adjusted time‐to‐event estimates showed a similar result (hazard ratio, 1.19; 95% confidence interval, 1.09–1.31; P<0.001; I2=42%). The findings were consistent between subgroups of right heart catheterization and echocardiography studies (P interaction>0.05). There was evidence of publication bias; however, this did not influence the risk estimate (Duval and Tweedie's trim and fill adjusted risk ratio, 1.34; 95% confidence interval, 1.15–1.56). Conclusions The risk of mortality is increased in patients with mild PH, defined as measured or calculated mean pulmonary artery pressure >19 mm Hg. These data emphasize a need for diagnosing patients with mild PH with consideration to enrollment in PH clinical studies investigating pharmacological and nonpharmacological interventions to attenuate clinical risk and improve outcomes. T here are accumulating data suggesting that the spectrum of clinical risk related to mean pulmonary artery pressure (mPAP) is wider than described originally. 1 Findings from 2 large right heart catheterization (RHC) registries 2,3 and numerous other smaller clinical studies 4,5 have demonstrated that mPAP of >19 mm Hg (previously termed "borderline pulmonary hypertension [PH]") is an independent risk factor for increased mortality. This observation indicates that the traditional mPAP threshold for defining PH of ≥25 mm Hg may be insufficient. This, in turn, has important potential implications for diagnosing and prognosticating patients with PH. Yet, the acceptance of a lower threshold of mPAP to identify patient populations at risk remains controversial, and formal assessment of published literature to determine a more precise estimate of risk across patient populations is lacking. Furthermore, the clinical relevance of mildly elevated PA pressure, measured by echocardiography, remains unclear. 6 Hence, the primary objective of this study was to perform a systematic review and meta-analysis of RHC and echocardiography studies to determine the association between mildly elevated PA pressures and mortality. Data Sources We searched PubMed (MEDLINE), CINHAL, EMBASE, Web of Science, and Cochrane Central Register of Controlled Trials from inception through January 31, 2018, for Englishlanguage, peer-reviewed publications. 
The following keywords and Medical Subject Heading terms were used: "hypertension, pulmonary (Medical Subject Heading)," "pulmonary artery hypertension," "pulmonary arterial hypertension," "pulmonary artery pressure," "pulmonary arterial pressure," "pulmonary artery systolic pressure," "right ventricular systolic pressure," "mPAP," "mortality (Medical Subject Heading)," and "death (Medical Subject Heading)." Reference lists of systematic reviews, meta-analyses, and original studies identified by the electronic search were reviewed to find other potentially eligible studies. The authors declare that all supporting data are available within the article and references.

Study Selection

Studies were included in the meta-analysis if they: (1) included a clearly defined or identifiable study group with mildly elevated PA pressure and (2) provided the number of events and/or risk estimates for mortality in the mild PH versus referent group. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Meta-Analysis of Observational Studies in Epidemiology (MOOSE) checklists for the protocol of our meta-analysis.7,8

Data Extraction and Quality Assessment

Three physician reviewers (D.K., S.L., and G.C.) independently evaluated study eligibility and quality, and performed data extraction using standardized data collection sheets. Disagreements were resolved by consensus. Study quality was evaluated using the Newcastle-Ottawa Scale, which assigns a star for 3 areas of study quality: selection (4 criteria), comparability (1 criterion), and outcome (3 criteria).9 A study can be awarded a maximum of 1 star for each numbered criterion within the selection and outcome categories. A maximum of 2 stars can be given for comparability (Table 1).

Exposure

The exposure was mild PH. Mild PH was defined as a lower limit mPAP of 19 to 21.5 mm Hg and an upper limit mPAP of ≈25 mm Hg, except for some studies in which a few patients (n=11 patients) with mPAP >25 mm Hg were also included in the mild PH group because of unavailability of mortality data separately for these patients (Table 2).

Clinical Perspective

What Is New?
• Mildly elevated mean pulmonary artery pressure of ≈19 to 24 mm Hg, which is below the traditional threshold of >25 mm Hg used to define pulmonary hypertension (PH), is associated with an increased risk of all-cause mortality.
• The association between mildly elevated mean pulmonary artery pressure and increased mortality is consistent when PA pressure is measured by right heart catheterization or estimated by echocardiography.

What Are the Clinical Implications?
• Our data support efforts to update the current definition of PH and affirm the reproducibility of mean pulmonary artery pressure >19 mm Hg as an appropriate and clinically accessible level distinguishing patients with PH from patients without PH.
• Acknowledging this would identify previously undiagnosed patients with PH and provide a framework in clinical practice by which to initiate careful monitoring and efforts to modify risk factors and improve outcomes.

For echocardiography studies that reported only the tricuspid regurgitation velocity or gradient (n=3), pulmonary artery systolic pressure (PASP) was calculated as the tricuspid regurgitation gradient plus an assumed right atrial pressure of 5 mm Hg. This approach has been used in prior studies.17,20-22 mPAP was calculated using the following formula: mPAP = (0.61 × PASP) + 2 mm Hg.23

Outcomes

The primary outcome of interest was all-cause mortality.
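As a minimal sketch of the conversions described above, the following Python functions implement the assumed right atrial pressure of 5 mm Hg and the mPAP formula. The simplified Bernoulli step (gradient = 4 × velocity²) used in the worked example is standard echocardiographic practice rather than something stated in this article.

```python
def pasp_from_tr(tr_gradient_mmhg: float, rap_mmhg: float = 5.0) -> float:
    """PASP estimated as the tricuspid regurgitation gradient plus an assumed
    right atrial pressure (5 mm Hg in this meta-analysis)."""
    return tr_gradient_mmhg + rap_mmhg


def mpap_from_pasp(pasp_mmhg: float) -> float:
    """Empirical conversion used here: mPAP = (0.61 x PASP) + 2 mm Hg."""
    return 0.61 * pasp_mmhg + 2.0


# Worked example: TR velocity 2.5 m/s -> gradient = 4 * v^2 = 25 mm Hg (simplified Bernoulli).
tr_velocity = 2.5
gradient = 4 * tr_velocity ** 2
pasp = pasp_from_tr(gradient)          # ~30 mm Hg
print(pasp, mpap_from_pasp(pasp))      # ~20.3 mm Hg, i.e., within the 19-24 mm Hg "mild PH" range
```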
Statistical Analysis

Random-effects models of DerSimonian and Laird were used to calculate pooled risk ratios (RR) and corresponding 95% confidence intervals (CI) for mortality. For studies that reported risk-adjusted time-to-event estimates (adjusted hazard ratio [HR]) and the corresponding 95% CI, we performed secondary analyses using the generic inverse variance method to estimate pooled HR using a random-effects model. Secondary analyses were also performed using fixed-effect models. Heterogeneity was assessed using the Higgins I2 statistic, with values <25% and >75% considered indicative of low and high heterogeneity, respectively. Pooled estimates were calculated for all studies as well as for the a priori defined subgroups of echocardiography and RHC studies. Publication bias was assessed visually by asymmetry in funnel plots and formally using Egger's regression test and the Begg-Mazumdar rank correlation test. To assess the impact of publication bias on the risk estimate, we used Duval and Tweedie's trim and fill as well as cumulative meta-analysis (after sorting the included studies from largest to smallest size).24,25 All tests were 2 tailed, with P<0.05 considered statistically significant. Analyses were performed using Review Manager Version 5.3 (The Nordic Cochrane Center, The Cochrane Collaboration, 2014, Copenhagen, Denmark) and Comprehensive Meta-Analysis Version 3.0 (Biostat, Englewood, NJ).

Results

The database search yielded 11 184 articles. After removing duplicates, 7920 articles were screened at the title/abstract level, and 7880 were excluded for various reasons (eg, systematic reviews, case reports, studies that included only patients with known PH, and studies with no data on mortality). Forty full-text articles were assessed for eligibility (Figure 1). Fifteen (8 RHC and 7 echocardiography) studies were included in the meta-analysis.2-5,10-20 The characteristics of the included studies are shown in Table 2. Of the 15 studies, 12 were retrospective, 1 was prospective, 1 was ambispective, and 1 was a post hoc analysis of a randomized controlled trial. Compared with the referent group, mild PH was associated with an increased risk of mortality (RR, 1.52; 95% CI, 1.32-1.74; P<0.001; I2=47%) (Figures 2 and 3). Secondary analysis using risk-adjusted time-to-event estimates showed results consistent with the direction of the primary finding (random-effects model: HR, 1.19; 95% CI, 1.09-1.31; P<0.001; I2=42%; fixed-effect model: HR, 1.17; 95% CI, 1.11-1.23; P<0.001; I2=42%) (Figures 4 and 5). The findings were consistent between RHC and echocardiography studies (P interaction >0.05). The results were unchanged in a sensitivity analysis excluding 2 studies that accounted for >70% of the patients (RR, 1.64; 95% CI, 1.36-1.97; P<0.001; I2=47%; and HR, 1.22; 95% CI, 1.07-1.39; P=0.004; I2=45%).3,20 There was evidence of publication bias on the basis of asymmetry in the funnel plot as well as the results of Egger's regression test and the Begg-Mazumdar rank correlation test (Figure 6). However, the presence of publication bias did not influence the risk estimate and overall result (Duval and Tweedie's trim and fill adjusted RR, 1.34; 95% CI, 1.15-1.56) (Figure 7). Similarly, cumulative meta-analysis demonstrated that the RR had stabilized with the inclusion of the larger studies and did not shift significantly with the addition of smaller studies, suggesting that the inclusion of smaller studies did not introduce bias (Figure 8).
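For readers unfamiliar with the pooling procedure, this is a minimal sketch of a DerSimonian-Laird random-effects model and the Higgins I2 statistic. The study-level log risk ratios and standard errors in the example are made-up placeholders, not the values analyzed in this meta-analysis, and the published analyses were run in Review Manager and Comprehensive Meta-Analysis rather than with this code.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Pool log risk ratios with a DerSimonian-Laird random-effects model.
    Returns the pooled RR, its 95% CI, and the I2 heterogeneity statistic (%)."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                                        # fixed-effect weights
    q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w))**2)   # Cochran's Q
    k = len(log_rr)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = 1.0 / (se**2 + tau2)                          # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    rr = np.exp(pooled)
    ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
    return rr, ci, i2

# Hypothetical study-level estimates, for illustration only.
rr, ci, i2 = dersimonian_laird(log_rr=[0.45, 0.30, 0.60, 0.25],
                               se=[0.15, 0.20, 0.25, 0.10])
print(rr, ci, i2)
```

The generic inverse variance pooling of adjusted hazard ratios follows the same weighting scheme, applied to log HRs and their standard errors instead of log RRs.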
Discussion

Data from this study address controversy on the mPAP level required to capture clinical risk in patients referred for RHC or echocardiography using a meta-analysis, which is the optimal research tool for determining aggregate risk across study populations.26 We report that mildly elevated PA pressures, estimated by either echocardiography or invasive catheterization, are associated with a 19% increased risk of mortality over 5 years. Despite analyzing both community-based and referral populations with varying underlying comorbidities, there was only mild statistical heterogeneity of risk across the included studies. This suggests a truly generalizable association between mild PH and mortality. These data support the ongoing efforts to identify optimal cutoffs for hemodynamic parameters in PH, including pulmonary arterial hypertension, to identify at-risk populations earlier.27 A notable finding of this study is the consistent association between mild PH and increased mortality when PA pressure was measured by either RHC or echocardiography. Although RHC is the only test available to diagnose PH, echocardiography is the preferred screening test for at-risk patients.6 Current guidelines recommend further evaluation of patients for PH if the tricuspid regurgitation velocity is >2.9 m/s (corresponding to PASP >≈40 mm Hg).6 However, our findings demonstrate that PA pressure assessed by echocardiography, at levels considered currently to be below the range of clinical significance (tricuspid regurgitation velocity ≈2.5 m/s, PASP >35 mm Hg), across both community-based and referral populations, is, in fact, a valid predictor of mortality. Thus, our data suggest that when PA pressure can be measured by echocardiography, this is an effective screening tool for detecting patients at risk for mild PH who may benefit from further evaluation, close follow-up, and risk reduction interventions.17,20 Nonetheless, a recent study in patients with PH (mPAP >25 mm Hg) demonstrated modest correlation and poor agreement between echocardiography-derived and RHC-measured PASP and mPAP.28 Fisher et al29 showed that approximately half of the cases of PASP overestimation are related solely to right atrial pressure overestimation by echocardiography. Thus, although we used an assumed right atrial pressure of 5 mm Hg for calculating PASP in echocardiography studies that reported only tricuspid regurgitation velocity or gradient, concern for overestimation of PA pressure by echocardiography remains.

Figure 2. Association between mild pulmonary hypertension (PH) and mortality (random-effects model). Studies included in this analysis are Valerio et al,10 Heresi et al,11 Kovacs et al,5 Suzuki et al,12 Maron et al,3 Takahashi et al,13 Douschan et al,4 Assad et al,2 Abramson et al,14 Kjaergaard et al,15 Shalaby et al,16 Lam et al,17 Damy et al,18 Cabrita et al,19 and Choudhary et al.20

Our meta-analysis, together with findings from the individual studies analyzed, provides comprehensive evidence in support of defining PH in contemporary and clinically relevant terms. Specifically, these data affirm the reproducibility across selected and unselected populations of mPAP >19 mm Hg as an appropriate and clinically accessible level distinguishing patients with PH from patients without PH. Acknowledging this would identify previously undiagnosed patients with PH and provide a framework in clinical practice by which to initiate careful monitoring and efforts to modify risk factors.30
These patients should also be considered for enrollment in ongoing and future PH clinical trials investigating pharmacological and nonpharmacological (eg, exercise program and weight loss) interventions to attenuate risk and improve outcomes in this population. Indeed, randomized controlled trials examining the efficacy and safety of established pulmonary artery hypertension therapies in patients with mild mPAP are already underway.31

Limitations

First, this is a meta-analysis of nonrandomized studies and has all the limitations of observational data, including selection bias and unmeasured confounding variables. Second, restricted by the nature and characteristics of this meta-analysis, we did not have access to patient-level data. Therefore, we are unable to characterize covariates that modulate risk within the mild PH group, such as elevated pulmonary vascular resistance. This is a particularly important limitation, because the addition of pulmonary vascular resistance to the analysis of cardiopulmonary hemodynamic data is needed to exclude mildly increased PA pressure that is physiologic in the setting of increased cardiac output or cor pulmonale from a diagnosis of mild PH. Similarly, although we performed analysis using adjusted HRs, the precise impact of comorbidities cannot be readily assessed across studies in the absence of patient-level data. Third, the possibility that differences in cardiopulmonary comorbidities, and not PA pressure, were responsible for differences in clinical outcome in this group cannot be excluded. However, pooled HRs derived from risk-adjusted time-to-event estimates demonstrated a 19% increased risk of mortality in patients with mild PH compared with the referent group. Fourth, the association of mild PH and mortality in the subgroups of patients with cardiac, pulmonary, hematologic, or connective tissue disease could not be determined because of unavailability of mortality data in these different subgroups. Last, the mortality estimates are based on an mPAP cutoff of ≈19 to 20 mm Hg that was selected from prior published data, although a lower cutoff may yield slightly different results.

Figure 5. Pooled hazard ratio for mortality using risk-adjusted time-to-event estimates (fixed-effect model). Studies included in this analysis are Heresi et al,11 Suzuki et al,12 Maron et al,3 Takahashi et al,13 Assad et al,2 Shalaby et al,16 Cabrita et al,19 and Choudhary et al.20 CI indicates confidence interval; PH, pulmonary hypertension; IV, inverse variance; RHC, right heart catheterization.

Conclusion

In a meta-analysis of 15 nonrandomized studies, mild PH was associated with an increased risk of all-cause mortality compared with the referent group. This finding was consistent in the subgroups of RHC and echocardiography studies. These data add to the growing body of literature on the hemodynamic range and prognostic significance of mild PH. Overall, our data affirm efforts to update the current PH definition and provide definitive data in support of future clinical trials to improve outcome in this vulnerable patient subgroup.
Design of MIMO/Smart Antenna Arrays Using Different Array Modules for Handheld Device

In this paper, an eight-element MIMO smart antenna system consisting of two different array modules for a handheld device is proposed. The first module is a six-element array operating in the N78 (3.3-3.8 GHz) band for 5G, which achieves MIMO functions for receiving and beam scanning for transmitting. The second module is a two-element antenna array, which operates in the LTE/WWAN/N78 (0.7-0.91 GHz, 1.63-2.61 GHz, 3.3-3.8 GHz) bands. To take full advantage of the existing antenna resources in the mobile device, the six elements in the first module are combined with the two elements in the second module to form an 8-element array in the overlapping N78 band. Good isolations and envelope correlation coefficients are achieved in the receiving mode by loading L-shaped slots for the combined module. The distribution of excitations for the combined array in the transmitting mode is optimized by the method of maximum power transmission efficiency to direct the beam to the desired direction with the maximum possible gain, and is realized by an in-house designed beamforming controller. The impacts of the environments on the antenna array performance are investigated.

INTRODUCTION

Massive MIMO systems in future mobile devices are required to integrate 2G/3G/4G and 5G band antennas with different structures in a limited internal space, which raises a challenge to antenna engineers [1]. There have been reports on the design of multi-frequency band antennas, which integrate multiple antenna structures in one space [2-11]. In [3], an integrated design with MIMO antenna systems for 4G and 5G applications is proposed. The design contains a two-element slot-based MIMO antenna system for 4G and a connected antenna array-based two-element MIMO antenna system for a potential 5G band. A proof-of-concept solution for co-designed millimeter-wave and LTE antennas in a metal-rimmed handset is introduced in [5]. The design shows that the two different antennas can be accommodated in a shared volume and integrated into the same structure, in which the mm-wave antenna does not hinder the low-band performance. However, the antenna modules of the above-mentioned designs do not share antenna elements with each other. A dual-polarized hybrid 8-element array for 5G application in the smartphone is presented in [6]. The proposed hybrid antenna array is composed of two different 4-element arrays and can achieve MIMO performance in the 2.6 GHz band (2550-2650 MHz). A compact building block composed of a slot antenna and a loop antenna is studied in [7], where the slot antenna and loop antenna share a rectangular clearance, which improves the compactness of the building block. Four such building blocks are used to implement a compact 8-port MIMO array operating in the 3.5 GHz band (3.4-3.6 GHz) for 5G metal-rimmed smartphone applications. As a typical application of multi-antenna technology, beamforming has been widely considered in 5G communication research [12-14]. By controlling the excitations of the transmitting antenna elements, beamforming technology can direct the electromagnetic field energy in the desired direction, thus improving the spectrum efficiency and achieving better signal coverage and higher antenna gain. Several beamforming structures have been reported in [15-20].
An 8-element MIMO antenna array operating in GSM1900 (1.88-1.92 GHz) and LTE2300 (2.3-2.4 GHz) bands is proposed for handheld devices in [19]. The antenna array consists of eight planar inverted-F elements printed on an FR4 substrate, exhibiting good beam directivity at different scanning angles. In [20], a millimeter-wave endfire 5G beam steerable array and a low-frequency antenna are integrated in a mobile terminal. The low-frequency antenna can be made transparent to the millimeter-wave array by using grating strips between the low-frequency and high-frequency antennas. The working frequency of the millimeter-wave antenna is 22-31 GHz, and the array can scan ±50 degrees with end-fire radiation, which leads to good coverage. The above beam scanning arrays are all composed of identical elements. For the purpose of leveraging the existing antenna assets, an antenna array design that uses different antenna elements to realize the functions of MIMO and beamforming simultaneously is a useful solution, but it has rarely been reported. Due to the limitation of space in handheld devices, there will inevitably be strong mutual coupling between antenna elements. How to combine different antenna modules in a compact space to realize MIMO and beamforming technology simultaneously is still a challenge. In an attempt to address the above challenge, this paper proposes a novel design and combines a 2-element planar antenna array module operating in LTE/WWAN/N78 bands with a 6-element inverted-F antenna array module operating in the N78 band for 5G, to form a larger 8-element array integrated in a handheld device. In the overlapping N78 band, the combined 8-element array functions as a MIMO system when it is in the receiving mode and as a smart antenna system when it is in the transmitting mode. The configuration of the combined array is first optimized to ensure that the MIMO performance is achieved. The method of maximum power transmission efficiency (MMPTE) [21,22] is then used to optimize the distribution of excitations for the combined array to achieve beam steering in the N78 band with the highest possible gain. The optimized distribution of excitations is realized through an in-house-developed digital beamforming controller. A prototype antenna array is fabricated and measured, and both the simulated and measured results meet the performance requirements for the MIMO and the beam steering functions.

Figure 1 shows the detailed dimensions of the two antenna elements respectively for the two different antenna array modules. The antenna elements are built on an FR4 (dielectric constant 4.4, loss tangent 0.02) substrate with the size of 149 mm × 76 mm × 1 mm. The combined 8-element antenna array is composed of two array modules. The first array module consists of six identical folded cubic inverted-F antennas, which are evenly placed on the two long edges of the substrate, as illustrated in Fig. 4. The element is etched on a 5 mm × 5 mm × 3 mm FR4 substrate surface, and its detailed dimensions are shown in Fig. 1(a). The element has two branches to generate two resonances to cover the N78 band. The high resonant frequency is mainly controlled by branch 1, and the low resonant frequency is mainly controlled by branch 2. The resonant frequency is controlled by adjusting the length of the two branches. In order to increase the antenna bandwidth and improve antenna matching, the ground of the substrate is partially removed, as shown in Fig. 4(b).
The second module consists of two identical planar antenna elements, which are placed at the top and bottom of the substrate, on the same side as the ground of the substrate. The detailed dimensions of this element are shown in Fig. 1(b). Fig. 3(c) shows that the resonance at 2.1 GHz is mainly generated by the PIFA (branch 1). Fig. 3(d) indicates that the E-shaped monopole (branch 2) generates the resonance at 2.4 GHz. Fig. 3(e) shows that the resonance at 3.45 GHz is generated by the combination of branch 3 and branch 4.

Antenna Array Design and MIMO Performance

Figure 4 shows the front and back views of the antenna array. The first module (antenna elements 1-6) is symmetrically placed on the two long edges of the substrate, and the second module (antenna elements 7 and 8) is placed on the top and bottom of the substrate. Since the proposed antenna array is symmetric, only the simulated and measured results of antenna elements 1, 2, 3, and 7 will be depicted. Due to the compact arrangement of elements 1, 2, and 3, with a total length of only 31.5 mm, the isolations among antenna elements do not meet the design requirements. In order to enhance the isolation between the elements, an L-shaped isolation slot is introduced between elements 2 and 3. Fig. 5 shows how the L-shaped slot improves the isolations. In fact, the coupling between elements 2 and 3 is mainly caused by the conduction current on the ground, and therefore a high isolation can be achieved by introducing a slot on the ground. After the isolation slot is loaded, the S-parameters of the elements are shown in Fig. 6 (labeled as Ref 1), and the bandwidths of elements 2 and 3 do not meet the design requirements. We now use a microstrip line to connect the grounding points of elements 1, 2, and 3 to make the three elements electrically connected and therefore electrically larger. As a result, the bandwidth of each element is significantly improved, as shown in Fig. 6 (labeled as Proposed). The simulated and measured S-parameters of the antenna elements are shown in Fig. 7. The mutual couplings between antenna elements are shown in Fig. 8(a) and are all lower than −10 dB in the operating frequency bands. The envelope correlation coefficient (ECC) is an important performance index for a MIMO system. It can be seen from Fig. 8(b) that the ECCs of the MIMO antenna system are less than 0.1, while the ECCs in a MIMO system are generally required to be less than 0.5. As shown in Fig. 8(c), the simulated total efficiency in the operating frequency bands is more than 50%. The measured and simulated gain patterns for antenna elements 1, 2, 3, and 7 at 3.45 GHz are shown in Fig. 9. All radiation patterns were measured in a microwave anechoic chamber. It can be seen that most element patterns tend to be omnidirectional, and the maximum gain is less than 0 dBi. To enhance the antenna gain and improve the system performance, we may take advantage of the existing MIMO antenna array and resort to the beamforming technique.

Design Method

In order to realize the beamforming function in the transmitting mode based on the existing MIMO antenna array designed in the receiving mode, it is necessary to find the optimal distribution of excitations (ODE) for the fixed antenna array configuration so as to achieve the highest possible gain in a desired direction. In the following, the method of maximum power transmission efficiency (MMPTE) [23][24][25][26][27][28] will be adopted to determine the ODE. The basic working principle of MMPTE is illustrated in Fig. 10.
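As an aside on the MIMO figures of merit above, the port-to-port ECC can be estimated directly from measured S-parameters using the standard S-parameter approximation (strictly valid only for lossless antennas). The sketch below is illustrative only: the function is generic, and the numerical values are hypothetical placeholders, not the measured data of Fig. 8.

```python
import numpy as np

def ecc_from_sparams(s11, s21, s12, s22):
    """Envelope correlation coefficient between two antenna ports,
    computed from complex S-parameters (lossless-antenna approximation)."""
    num = np.abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - np.abs(s11) ** 2 - np.abs(s21) ** 2) *
           (1 - np.abs(s12) ** 2 - np.abs(s22) ** 2))
    return num / den

# Hypothetical values at 3.45 GHz (|S21| below -10 dB, i.e. < 0.316 in magnitude)
s11 = 0.18 * np.exp(1j * 0.4)
s22 = 0.15 * np.exp(1j * 2.0)
s21 = s12 = 0.20 * np.exp(-1j * 1.1)
print(f"ECC = {ecc_from_sparams(s11, s21, s12, s22):.3f}")  # well below the 0.5 criterion
```

A value computed this way sits far below the 0.5 threshold whenever the isolation and matching targets quoted above are met. With the MIMO behaviour in place, the MMPTE formulation described next supplies the transmit excitations.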
The eight-element array under design is set as the transmitting antenna, and a test antenna is introduced as the receiving antenna and placed in the desired direction in which the antenna gain must be maximized. The whole transmission system can be considered as a 9-port network and can be characterized by the scattering matrix as follows:

[b_t; b_r] = [S_tt, S_tr; S_rt, S_rr][a_t; a_r].

Assuming that the test antenna is matched, we have [a_r] = 0. If the PTE of the system reaches the maximum, the ODE for the transmitting array satisfies the eigenvalue equation of the matrix [A] = [S_rt]^T [S_rt]: [a_t], the ODE for the transmitting array, is the eigenvector corresponding to the unique non-zero eigenvalue (the maximum PTE) and can be realized through the beamforming controller.

Digital Beamforming Controller

The in-house designed digital beamforming controller for the 8-element array has 8 channels, and each channel is composed of a single-channel TR chip AWR9621 and a power supply. The attenuation and phase shift of each channel are controlled by the micro-controller unit (MCU), which is connected to the computer via a USB interface. The block diagram of the beamforming controller is shown in Fig. 11(a). The RF signal from the input is distributed into eight channels through power splitters according to the ODE obtained from MMPTE, as illustrated in Fig. 11. To reduce the trouble in calibration, the lengths of the lines from the input to the eight TR chips are kept the same so that the phase shifts and attenuations from the input to the eight TR chips are identical. The red two-way arrows in Fig. 11 mark bidirectional signal paths.

Figure 12 shows photos of the 8-element antenna array. In order to compare the beamforming performance of the antenna array, we choose some elements from the eight elements to form a subarray, with the rest terminated in matching loads. The beamforming performances in the positive x-, y-, and z-directions for different subarrays operating at 3.45 GHz are demonstrated in Table 1. One can see that the gain of the array in all directions increases as the number of array elements increases. By comparing the gain of the first module and that of the combination of the first and second modules, it can be seen that the addition of the second module can effectively improve the radiation performance. Table 2 shows the ODE calculated from MMPTE. Fig. 13 shows the 3D radiation patterns and the simulated and measured 2D radiation patterns of the 8-element antenna array operating at 3.45 GHz. The realized peak gains in the x-, y-, and z-directions are 4.6 dBi, 4.4 dBi, and 3.7 dBi, respectively, which are significantly higher than those (less than 0 dBi) radiated from a single antenna element.

THE INFLUENCE OF ENVIRONMENT ON THE RADIATION PERFORMANCE OF ANTENNA

The influence of the environments on the handset antenna performance is substantial and cannot be ignored in practice. For this reason, a practical handset antenna design must take the environments into account, such as the PCB, LCD, battery, and phone case. This raises a challenge for the antenna designer, as the simulation of the handset antenna with all environments in place is an impossible task. Since the MMPTE only involves the terminal parameters of the system, it can get around the challenge by resorting to measurement. To illustrate the process, we will use MMPTE to investigate the influences of the cellphone case, user's hand, and human head on the antenna radiation performances at the working frequency 3.45 GHz.
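Whether the scattering parameters come from a simulation or from the network-analyzer measurements described below, the MMPTE step itself reduces to a small eigenvector computation. The sketch below is a generic illustration, not the authors' implementation: the S-parameter values are random placeholders, and the Hermitian transpose is assumed when forming [A] from complex data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder forward-transmission block between the 8 transmit ports and the
# single test-antenna port (1 x 8 complex row vector); in practice this comes
# from simulation or from the network-analyzer measurement described below.
S_rt = 0.1 * (rng.normal(size=(1, 8)) + 1j * rng.normal(size=(1, 8)))

# A = S_rt^H S_rt (Hermitian transpose assumed for complex S-parameters).
A = S_rt.conj().T @ S_rt

# ODE = eigenvector of A with the largest eigenvalue; with a single receiving
# port A has rank one, so that eigenvalue is the unique non-zero one and
# equals the maximum power transmission efficiency.
eigvals, eigvecs = np.linalg.eigh(A)
pte_max, a_t = eigvals[-1].real, eigvecs[:, -1]

print(f"max PTE = {pte_max:.4f}")
print("excitation amplitudes:", np.round(np.abs(a_t), 3))
print("excitation phases/deg:", np.round(np.degrees(np.angle(a_t)), 1))
```

The amplitudes and phases printed at the end are the quantities that the per-channel attenuators and phase shifters of the beamforming controller would be programmed with.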
As shown in Fig. 14, a horn antenna is used as the test receiving antenna, and the 8-element antenna array placed inside the cellphone case is used as the transmitting antenna. The antenna array with the phone case is held in a hand model in the vicinity of a head model. Instead of using simulation, the scattering parameters of the transmission system are determined by the network analyzer connected to the horn antenna and the array antenna. The MMPTE is then used to calculate the ODE. We first study the influence of the cellphone case. The cellphone case includes a screen, a battery and a back cover. The screen has a dielectric constant of 4.82 and a loss tangent of 0.0054. The back cover is plastic with a dielectric constant of 2.2 and a loss tangent of 0.005. The ODE obtained from MMPTE is listed in Table 3. Fig. 15 shows the radiation pattern of the antenna array directed to the +z-axis with a maximum gain of 3.7 dBi. It can be seen that the antenna array still maintains a good beam performance, and its gain remains unchanged compared with that of the antenna array in free space. It is noted that the metal shield behind the LCD screen effectively blocks the radiation in the −z direction, thus reducing the back lobe of the beam. Now we consider the influence of the cellphone case held by the user's hand, which corresponds to a reading position. The simulation model and a real hand model are shown in Fig. 16. Table 4 shows the ODE from MMPTE for the beam directed to the +z axis, where the scattering parameters are obtained by measurement with the antenna array loaded with the cellphone case and the user's hand. As can be seen from the antenna radiation patterns in Fig. 17, the antenna array still maintains good beam performance with a maximum gain of 2.1 dBi. The gain is reduced by 1.6 dB compared to the antenna array in free space due to the influence of the hand model. Finally we examine the influence of the cellphone case held by the user's hand against a human head, which corresponds to a talking position. The simulation model and physical model are shown in Fig. 18. Table 5 shows the ODE from MMPTE based on the measured scattering parameters for the beam directed to the +z axis in the talking position. As can be seen from Fig. 19, the antenna array still has a good beamforming performance with a maximum gain of 1.8 dBi. The simulation results are basically consistent with the measured results. The gain loss is 1.9 dB compared with that in free space. Table 6 shows the comparison between the proposed MIMO antenna array and other similar designs reported for mobile terminals. Only the proposed design uses different antenna elements to realize the beamforming function.

CONCLUSION

An antenna array design that uses different antenna elements to realize the MIMO function for receiving and the beamforming function for transmitting simultaneously has not been reported before. Such an approach helps leverage the existing antenna assets in the mobile device so that the existing antenna resources can be utilized more efficiently. In this paper, an antenna array system consisting of different array modules for a handheld device has been proposed and investigated. The antenna array system is composed of two antenna modules. The first module is a 6-element antenna array operating in the N78 (3.3-3.8 GHz) band for 5G, and the second module is a low-profile 2-element antenna array operating in LTE/WWAN/N78 (0.7-0.91 GHz, 1.63-2.61 GHz, 3.3-3.8 GHz) bands.
The first module combined with the second module forms an 8-element antenna array, which not only achieves MIMO performance for receiving in the overlapping N78 band, but also achieves beamforming for transmitting. The MIMO function is first achieved by properly designing the antenna elements and the array configuration. Once the elements and configuration are fixed, the beamforming function is achieved by optimizing the feeding schemes with MMPTE so that the MIMO function is kept intact. The ODE determined by MMPTE is then realized by an in-house designed digital beamforming controller to direct the beam to a desired direction. The influences of environments, including the mobile phone case, user's hand, and human head, on the radiation performance of the antenna array are also studied. Since a handset antenna surrounded by a complex environment is too complicated to be handled by a computer, the ODE for the antenna array is determined by MMPTE in terms of the scattering parameters obtained from measurement, demonstrating a remarkable capability of MMPTE to solve the antenna design problem in complicated environments.
Noncommutativity from Canonical and Noncanonical Structures

Using arbitrary symplectic structures and parametrization invariant actions, we develop a formalism, based on Dirac's quantization procedure, that allows us to consider theories with both space-space as well as space-time noncommutativity. Because the formalism has as a starting point an action, the procedure admits quantizing the theory either by obtaining the quantum evolution equations or by using the path integral techniques. For both approaches we only need to select a complete basis of commutative observables. We show that for certain choices of the potentials that generate a given symplectic structure, the phase of the quantum transition function between the admissible bases corresponds to a linear canonical transformation, by means of which the actions associated to each of these bases may be related and hence lead to equivalent quantizations. There are however other potentials that result in actions which can not be related to the previous ones by canonical transformations, and for which the fixed end-points, in terms of the admissible bases, can only be realized by means of a Darboux map. In such cases the original arbitrary symplectic structure is reduced to its canonical form and therefore each of these actions results in a different quantum theory. One interesting feature of the formalism here discussed is that it can be introduced both at the level of particle systems as well as of field theory.

Introduction

In recent years, space-time noncommutativity has become the subject of increasing interest: in field theory, stimulated by some results in low energy string theory, and in quantum mechanics, because it is in the context of this formalism that space-time noncommutativity is more naturally understood in terms of space and time operators acting on a Hilbert space, and also because quantum mechanics, viewed as a minisuperspace reduction of field theory, could reasonably be expected to provide further insight into how quantum mechanical noncommutativity reflects itself in field theory. Some of the more relevant work related to the approach here considered may be found in [1], [2], [3], [4], [5], [6], [7]. An interesting idea that allows us to consider in a full setting the space-time noncommutativity in the context of particle mechanics is to use the concept of parametrization invariance [5], [7]. In this way the time is taken as an extra canonical variable of the system and it is then easy to introduce a non-canonical structure in this extended phase-space. The usual way to study the parametrization invariance of a system is by using the Dirac method of canonical analysis. Because not all the momenta are independent due to the invariance under parametrizations, this approach requires that a constraint on the system be introduced. For a parametrized particle, this constraint is at the classical level the Hamilton-Jacobi equation and at the quantum level the Schrödinger equation. So the Dirac method associates to the symmetry of parametrizations the classical or quantum evolution equations [8]. Here we want to generalize the above mentioned procedure in order to be able to consider noncommutative theories at the quantum level resulting both from canonical and non-canonical structures. The noncommutativity will then appear as a consequence of the existence of second class constraints, and the implementation of these constraints in terms of Dirac brackets.
The interesting point of the procedure is that on the one hand we get the classical and quantum evolution equations for the noncommutative systems, and on the other hand we also obtain a classical action that can be quantized using the path integral formalism. Furthermore, the analysis is not restricted to noncommutative theories with constant deformation parameters, since the procedure naturally incorporates arbitrary canonical potentials. Another interesting property of the method is that it can be naturally extended to field theory. Our starting point is to consider a parametrization invariant system. This means that if the system is not naturally invariant under parametrizations we promote the original parameters of the theory, for example the time in the case of particle dynamics, to the level of canonical variables. The second step is to perform the canonical analysis of this theory. One point that we must be careful with is that, since we add new variables to the system, we have to introduce constraints associated to the parametrization invariance symmetry of the theory in order that the number of degrees of freedom is preserved. The third step is to introduce an arbitrary canonical potential that allows us to realize the required noncommutativity. The next step is to show that under the Dirac brackets the first class constraint (or constraints) generates the symmetry. This means that we will probably need to modify the constraints. At this point, if we have several constraints, we need to check that the algebra of these first class constraints closes. Once we finish this procedure we obtain the quantum evolution equations for our system. Alternatively, we can introduce the canonical potential in the action and select an appropriate basis in order to quantize the system using the path integral formalism. For certain choices of the potentials that generate a given symplectic structure, the phase of the quantum transition function between the admissible bases corresponds to a linear canonical transformation, by means of which the actions associated to each of these bases may be related and hence lead to equivalent quantizations. We must stress that, in contradistinction to the case when time plays the role of a parameter, the canonical transformation here is implemented in an extended phase space, where the time and its conjugate momentum are included. With the purpose of examining all the above mentioned facets of the space-time noncommutativity, our presentation has been structured as follows: In Section 2 we consider the canonical formalism of parametrization invariant systems. In Section 3 we introduce an arbitrary symplectic structure in the action, and after the canonical analysis we construct the Dirac brackets associated to the theory and also obtain the action for the reduced system. In Section 4, we quantize the theory using different bases, and using both path integral methods and the quantum evolution equations. We conclude the paper with some remarks and possible extensions.

Parametrization invariant systems

We begin here by reviewing the essentials of the canonical analysis of parametrized systems following the approach in [8]. To this end, consider the action (2.1) for a particle in an N-dimensional configuration space, in an arbitrary potential, with i = 1, . . . , N. In this action the time t plays the role of a parameter in the theory. To study the non-commutativity of the space and time it is more convenient to consider the time as another coordinate of our theory, i.e.
we extend our configuration space with one extra dimension t = q^0. To do this, we parametrize the action by introducing a new parameter τ and assume that the coordinates q^i(t) are scalars under this parametrization. The action (2.1) then takes the form (2.3), in which t = q^0 now plays the role of a new coordinate in the theory. Making the identifications q̇^i ≡ dq^i/dτ and q̇^0 ≡ dt/dτ, we can rewrite (2.3) in the form (2.4). In Hamiltonian form the action (2.4) reads

S = ∫ dτ [ p_0 q̇^0 + p_i q̇^i − λϕ ],   (2.5)

where ϕ = p_0 + H ≈ 0 is the first class primary constraint associated to the symmetry under parametrizations (which needs to be included in (2.5) in order to account for the fact that, by introducing a new variable in the theory, restrictions must be added to the physical evolution of the system indicating that the N + 1 new coordinates are not all independent), H is the canonical Hamiltonian of the action (2.1), and λ(τ) is a Lagrange multiplier. The action (2.5) is invariant up to a total derivative under the transformations (2.6) generated by the constraint ϕ, where the variation of the Lagrange multiplier is imposed in such a way that, when the action is varied, it vanishes up to a boundary term. Following Dirac [11], we propose that at the quantum level the physical states of the theory are invariant under the above transformations, i.e.,

e^{iεϕ} |ψ⟩_P = |ψ⟩_P,   (2.7)

so that in infinitesimal form we get

ϕ̂ |ψ⟩_P = (p̂_0 + Ĥ) |ψ⟩_P = 0.   (2.8)

We thus see that the constraint leads to a supplementary condition on the physical states, and is another way to reduce the quantum theory to its physical sector without imposing a gauge condition. Now if we consider the configuration representation with basis |q^0, q^i⟩, equation (2.8) yields

iℏ ∂ψ(t, q^i)/∂t = Ĥ ψ(t, q^i),   (2.9)

where we have identified t = q^0. We therefore obtain the Schrödinger equation as a result of imposing at the quantum level the classical invariance under parametrizations of the theory. In the following section we shall apply the same procedure to the case of arbitrary symplectic structures.

Non-commutativity and Dirac Brackets

Let z^a = (q^0, q^i, p_0, p_i), with a = 1, ..., 2N + 2, denote the 2N + 2 phase-space variables of a parametrized system in the Hamiltonian formulation. In this case we don't have a second order action to begin with as in (2.1). We can however consider a general first order action, equivalent to (2.5), given by

S = ∫ dτ [ A_a(z) ż^a − λ ϕ(z) ],   (3.1)

where A_a(z) is a vector potential which we shall use to generate an arbitrary symplectic structure associated to the Poisson brackets in the Hamiltonian formulation. Applying Dirac's method for constrained systems, we have from (3.1) that the corresponding canonical Hamiltonian is given by

H_c = λ ϕ(z),   (3.2)

and the canonical momenta lead to the set of primary constraints χ_a = p_{z^a} − A_a(z) ≈ 0 (3.3). Consequently, the total Hamiltonian for this theory is H_T = λϕ(z) + µ^a χ_a (3.4). Moreover, from the evolution of the constraints we obtain the consistency conditions (3.5), which involve the antisymmetric matrix

ω_ab = ∂A_b/∂z^a − ∂A_a/∂z^b.   (3.6)

This antisymmetric matrix will play the role of the symplectic structure of the theory. Assuming further that ω_ab is invertible, so that all the Lagrange multipliers µ^a in (3.5) can be determined, it then follows from (3.6) that the constraints χ_a are second class. Note that in the case where the symplectic structure is degenerate, at least one of the χ_a's will be first class, but in this case the number of degrees of freedom of the generalized theory will not correspond to the degrees of freedom of the original theory. Hence in what follows we will assume that all the constraints χ_a are second class.
Now, in order to impose these constraints as strong conditions when quantizing, we construct the associated Dirac brackets, which are given by

{F, G}* = {F, G} − {F, χ_a} ω^{ab} {χ_b, G},   (3.8)

where ω^{ab} is the inverse matrix of ω_ab. Computing the Dirac brackets of the coordinates with the above expression we obtain {z^a, z^b}* = ω^{ab}(z). Thus, quantizing a theory constrained by symmetries under parametrization results in the noncommutativity of the quantum operators corresponding to the phase space coordinates:

[ẑ^a, ẑ^b] = iℏ ω^{ab}(ẑ).   (3.9)

The simplest case corresponds to the usual Heisenberg algebra of ordinary Quantum Mechanics, for which the inverse matrix of the canonical symplectic structure takes the form (3.10), with the only non-vanishing brackets being those between each coordinate and its conjugate momentum.

Non-commutative Quantum Mechanics

In the previous section we have considered a general procedure for quantizing a theory with an arbitrary symplectic structure. One interesting feature of this formalism is that, by including time as a canonical variable, it allows us to consider also noncommutativity between the time and the spatial coordinates. Now, given such a symplectic structure we can quantize either by using Dirac's procedure, where the first class constraints act as operators on the physical states, imposing supplementary conditions on them, and the Dirac brackets of the second class constraints are replaced by commutators, or, alternatively, we can also quantize by first evaluating the generating potentials of the symplectic structure and then applying path integral methods in order to derive the Feynman propagators. It should be noted, however, that for a given symplectic structure the solution for the potentials A_a is not unique, although all the possible resulting actions and resulting classical theories are related by canonical transformations. Furthermore, in the Dirac quantization the commutators (3.9) of the generators of the extended Heisenberg algebra define the possible complete sets of commuting observables of the theory and the correlative admissible bases (labeled by the eigenvalues of these sets). For each of these admissible bases, we obtain a realization of the Heisenberg algebra and of the subsidiary condition (2.8) and, correspondingly in the path integral formalism, the Feynman propagators derived from the transition functions in each of these bases. This means that in the path integral calculation of a transition function, the only admissible actions are those for which the fixed end-points in a variational principle are the same as the dynamical variables labeling the basis used for the evaluation of the transition function. Note finally that there are also actions originating from solutions of (3.6) for which no fixed end-points corresponding to one of the admissible bases in the Dirac quantization exist. Such end-points can, however, be defined using a Darboux map. This map involves introducing new dynamical variables in terms of linear combinations of the original ones and, consequently, implies a change in the initial symplectic structure to a canonical one. Compatible, although non-equivalent, path integral and Dirac quantizations result from promoting to the rank of operators these new variables, which will satisfy the Heisenberg algebra of ordinary quantum mechanics. So in these cases the deformation of the symplectic structure at the classical level is reflected at the quantum level in a deformed Hamiltonian while the standard Heisenberg algebra of the usual quantum mechanics is preserved.
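To make the construction concrete, a minimal symbolic sketch is given below for the 1+1-dimensional case taken up in the next section. The choice of potential A_a is an illustrative assumption (it is not the paper's solution (4.9)); the ordering of the extended phase-space variables and the placement of θ are likewise assumptions, chosen so that inverting ω_ab reproduces deformed brackets of the type {t, x}* = θ alongside the canonical ones.

```python
import sympy as sp

t, x, p_t, p_x, theta = sp.symbols('t x p_t p_x theta', real=True)
z = [t, x, p_t, p_x]                       # assumed extended phase-space ordering z^a

# Illustrative choice of symplectic potential A_a(z); any potential whose
# curl reproduces the desired two-form would do.
A = [p_t, p_x, sp.Integer(0), theta * p_t]

# omega_ab = dA_b/dz^a - dA_a/dz^b  (the antisymmetric matrix of eq. (3.6))
omega = sp.Matrix(4, 4, lambda a, b: sp.diff(A[b], z[a]) - sp.diff(A[a], z[b]))

# Its inverse gives the Dirac brackets {z^a, z^b}* = omega^{ab}
omega_inv = omega.inv()
for a in range(4):
    for b in range(a + 1, 4):
        val = sp.simplify(omega_inv[a, b])
        if val != 0:
            print(f"{{{z[a]}, {z[b]}}}* = {val}")
# expected output: {t, x}* = theta, {t, p_t}* = 1, {x, p_x}* = 1
```

Any other potential differing from this one by a gradient generates the same two-form and hence the same brackets, which is precisely the freedom in the choice of A_a exploited in the following section.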
To further illustrate the above observations, we next consider some examples of quantum noncommutativity schemes in the context of both the Dirac and path integral formalisms. For analytical simplicity we assume a 1+1 space-time; the generalization to higher dimensions is fairly straightforward. Quantizing according to Dirac's prescription by using (3.9) leads to the commutators (4.2), while the remaining generators of the extended Heisenberg algebra are just multiplicative quantities.

4.1.1. Basis |x, p_t⟩.

Also, projecting on (4.3) with ⟨x, p_t| and substituting (4.5) into (4.4), with a Hamiltonian of the form H = p_x²/2m + V(t, x), one obtains (4.6). One interesting feature of the Dirac quantization resulting from the use of this basis is that for a t-independent potential, equation (4.6) becomes (4.7). For such a time independent Hamiltonian, (4.7) may be interpreted as an eigenvalue equation, with −p_t the energy eigenvalues of the system and ψ(x, p_t) the corresponding eigenvectors. Note that the energy spectrum of the resulting theory does not have any corrections from the noncommutativity of the space-time. A similar result was obtained by Balachandran et al. [4] by means of a very different approach. Now, in order to obtain the equivalent quantization by means of path integrals, we need to compute the transition function ⟨x(τ_2), p_t(τ_2)|x(τ_1), p_t(τ_1)⟩. For this purpose we first need to derive the appropriate action function, which according to our previous observations has to have as fixed end-points the variables x, p_t. This, as implied by (3.6), requires in turn deriving the proper generating potentials A_a(z) for the symplectic structure (4.1) by solving the equations (4.8). It is not difficult to verify that the needed solution is (4.9). Inserting (4.9) in the action (3.1) results in (4.10), which indeed has the appropriate variational fixed end-points x, p_t. With (4.10) we can now compute the propagator (4.11), where we have introduced a canonical gauge fixing condition χ = χ(τ, t, p_t, x, p_x). This gauge must, first, be a good canonical gauge in Dirac's sense, i.e. the Dirac bracket {ϕ, χ}* must be invertible, and second, the gauge must be consistent with the boundary conditions. Because we are fixing (x, p_t) at the end points, it is not possible to use the usual gauge t = f(τ); we will use instead the gauge condition (4.12). The Dirac bracket between this gauge condition and the constraint is given by (4.13). This gauge is a good canonical gauge for p_x ≠ 0, in which case the path integral has two different branches, one corresponding to p_x > 0 and the other to negative p_x. It can also be seen that this term leads to corrections of first order in θ which are, however, proportional to the time dependence of the potential. Consequently, if we assume that the potential is time independent, these corrections cancel and we can then integrate (4.11) over t to obtain (4.14). Note now that the only dependence on θ in the above expression appears multiplying ṗ_t, but taking into account that this term is zero due to the delta functional in the path integral, we do not get noncommutative corrections to the propagator. This is in agreement with our previous results derived by using the Dirac quantization.

4.1.2. Basis |t, p_x⟩.
Let us next consider the basis {t, p_x}, in which the operators x̂ and p̂_t are realized by (4.15). In the Dirac quantization, a realization of the supplementary condition (2.8) in this basis results from projecting with ⟨t, p_x| and substituting (4.15) into the first class constraint (4.4); we thus get (4.16). Note that, contrary to what we had in the case of the basis {|x, p_t⟩} where the supplementary condition was independent of time, here we have a time evolution equation. However, because of the time derivative in the potential in (4.16), we may lose the usual probability amplitude interpretation for ψ(t, p_x) for time derivatives of order higher than one, regardless of whether or not the potential has an explicit dependence on time. It is conceivable, nonetheless, that for certain forms of the potential a probabilistic interpretation may be recovered by modifying the product in the algebra of the wave functions or by redefining hermiticity, in analogy to what occurs in the Feshbach-Villars formulation of the Klein-Gordon equation. It is natural to ask how (4.16) is related to (4.7) for a time independent potential. For this purpose note (4.17); using it together with the transition function between the two bases (see e.g. [13] for details of a procedure used to derive a similar transition function), we get (4.23). Finally, substituting this result in (4.16) we obtain an integro-differential equation. On the other hand, if ψ(x, p_t) is a solution of (4.7), then (4.23) is a solution of (4.16). Indeed, acting with −iℏ∂_t + p_x²/2m + V(t, iℏ(∂_{p_x} − θ∂_t)) on (4.23) and making use of (4.17) and (4.21), we get (4.24). Now, if ψ(x, p_t) satisfies (4.7), the right side of (4.24) is zero; hence ψ(t, p_x) as given by (4.23) satisfies (4.16). Q.E.D. Let us now turn to the path integral quantization for this case and the calculation of the propagator ⟨t(τ_2), p_x(τ_2)|t(τ_1), p_x(τ_1)⟩. The appropriate solution to the equations (4.8) for which t, p_x are the fixed end points of the action is (4.25). Inserting this solution into the action (3.1) we then obtain (4.26). Observe that the action (4.26) and the action (4.10) are indeed related by a linear canonical transformation. The propagator for the admissible basis {|t, p_x⟩} is then (4.27), and for the boundary conditions that we are considering, the usual gauge is a good gauge condition. Assuming now that the Hamiltonian is independent of t, we can easily integrate (4.27) over the variables t and p_t; using the gauge condition (4.28) and the constraint, we get (4.29), where the parametrization in the action has been eliminated. Note that in the limit θ = 0 both (4.16) and (4.29) reduce to the usual Quantum Mechanics. The same is true for a free particle, as is immediately evident from (4.16), and it also follows for (4.29) since in this case the Hamiltonian is independent of x, so by integrating over this variable the term with θ disappears.

4.1.3. Basis |p_t, p_x⟩.

To conclude our analysis of the Dirac and path integral quantization realized on the three admissible bases for the extended Heisenberg algebra (4.2) that we are studying in this section, consider now the representation of the operators (t̂, x̂) in |p_t, p_x⟩. For this basis we have (4.30). It is interesting to note that in this representation we have introduced an extra parameter a, which can translate the noncommutativity from the coordinate operator to the time operator. (Observe that this characteristic is also present when we impose noncommutativity of the space, so we can also translate the noncommutativity parameter from one coordinate to another.)
For this representation the constraint equation (4.3) takes the form (4.31). Note that in this case, when the potential is time independent, (4.31) reduces to

[ p_t + p_x²/2m + V(iℏ∂_{p_x} + (1 + a)θ p_t) ] ψ(p_t, p_x) = 0,   (4.32)

and we do have noncommutative corrections, except when we choose the parameter a = 0, or for the case of a free particle. For the path integral formulation in this basis, an appropriate action (having p_t, p_x as fixed end-points) is given by (4.33), from which we can obtain results equivalent to those derived from the analysis of the constraint equation (4.31). Contrary to the actions S_1 and S_2, which are unique solutions of (3.6) for their corresponding fixed end-points, there are several canonically equivalent admissible actions with fixed points p_t, p_x. Thus, for example, S_4 = ∫_{τ_1}^{τ_2} dτ (−t ṗ_t − θ p_x ṗ_t − x ṗ_x) can be obtained from S_3 by subtracting the total derivative of F_2 = aθ p_t p_x from the integrand in S_3. Other canonically equivalent actions follow from S_3 and S_4 by means of the generator F_3 = θ p_t p_x.

4.1.4. Noncanonically related actions.

Up to this point we have considered path integral quantizations based on actions which are compatible with the extended Heisenberg algebra (4.2), derived by means of the Dirac quantization procedure. There are, however, other solutions to the equations (4.8) which, although indistinguishable at the classical level from the ones considered so far, are not canonically related to them, in the sense that there is no generating function for mapping canonically the actions resulting from these solutions to the ones previously considered. We shall see that in these cases the transformations needed for fixing the end-points required for a path integral quantization are actually transformations which map the original phase-space variables with symplectic structure (4.2) to another set of variables related to the canonical symplectic structure (3.10). Classically, as is well known from the Darboux theorem [12], this map is always possible (at least locally). To each of these Darboux maps corresponds, however, a different quantum mechanics, generated by what in some works in the literature has been called the equivalent of the Seiberg-Witten map for "noncommutative quantum mechanics". To exhibit the above considerations in more detail, let us begin with the solutions (4.34). With the first set of equations in (4.34), the canonical action takes the form (4.35). We therefore see from (4.35) that from the original phase-space variables of the theory we do not have a set of fixed end-points for the action from which a quantization can be developed. Nonetheless, a natural pair (t̃, x) can be constructed by making the change of variables (4.36), where t̃ is a new canonical variable associated to the time. In terms of this new pair of variables, the symplectic structure is reduced to (3.10), and introducing this new time in the action (4.35) results in (4.37). Note that if the original Hamiltonian was time-dependent, the modified one introduces a new kind of interaction that is proportional to the parameter θ of noncommutativity and to the momentum in the spatial direction. Also note that in terms of the modified symplectic structure (3.10) the Dirac brackets (3.8) lead, upon quantization, to the commutators (4.38). From these commutators we clearly see that a new complete set of commuting observables is (t̃, x), which labels the admissible associated basis of coordinate states {|t̃, x⟩}.
The Dirac supplementary condition in this basis is now (4.39), and we note that in the case that the Hamiltonian does not depend explicitly on the time, the Schrödinger equation is not modified by the noncommutativity. Now, if we consider the second set of solutions to (4.8) in (4.34), the resulting action is given by (4.40). Following the same logic as in the previous case, it is natural to introduce in this equation the new set (ť = t + (θ/2) p_x, x̌ = x − (θ/2) p_t) of time and spatial coordinates. Here the action (4.40) is reduced to (4.41) and, upon Dirac quantization, the corresponding new set of dynamical observables satisfies the commutation relations (4.42). Using the variables (ť, x̌) as a complete set of commuting observables, the new supplementary Dirac condition is (4.43). For this Schrödinger equation we see that, even when the Hamiltonian does not depend explicitly on time, we do have modifications originating from the noncommutativity. Furthermore, we see that the new theory could be non-unitary, since partial derivatives with respect to ť appear to an order that depends on the kind of interaction. This type of quantization can be formulated directly by using the Moyal product. So for this selection of symplectic potentials the theory is not unitary, and this result is equivalent to the one obtained in Ref. [15] in the context of noncommutative field theory. To quantize these two cases by means of the path integral method, we make use of the basis {|t̃, x⟩} and the respective actions (4.37) and (4.41) to compute the propagator ⟨t̃_2, x_2|t̃_1, x_1⟩ (4.46). Following the normal procedure to quantize a theory with first class constraints [8], we have only two extra points to consider. First, we have to impose a gauge condition, which in this case can be the normal canonical gauge t̃ = f(τ), since, in contrast with the approach used in [5] and [7], we are imposing the noncommutativity at the level of the action, using the symplectic structure, and not at the level of the gauge condition. The second point that we need to take into account is the extra appearance in the Hamiltonian of the θ p_x-shifted term when we have a t-dependent theory; this can imply that it may not be possible to compute the path integral over the momenta. These are, however, the usual problems that one finds when computing path integrals with actions in terms of variables with powers larger than two. One additional point to notice is that, for both types of solutions of the equations (4.8) considered in this section, the Dirac constraint is not modified, since in both cases the new time is canonically conjugate to the original p_t and the constraint therefore still generates the parametrization invariance. It is not difficult to see that this is not the case when the above analysis is extended to the more general case of symplectic structures that upon quantization result in an extended Heisenberg algebra that includes noncommutativity of the momenta. For such a generalization one would have to consider a symplectic structure that also deforms the momentum sector. Here the quantization of the Dirac brackets would then result in an extended Heisenberg algebra in which, in contradistinction to what occurred for the previously considered symplectic structure, we would only have two complete sets of commuting fundamental observables: (x, p_t) and (t, p_x), with their respective admissible bases {|x, p_t⟩} and {|t, p_x⟩}.
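As a quick consistency check on the shift (ť, x̌) quoted above, the short symbolic sketch below verifies that the new pair commutes under the deformed bracket. The bracket table it starts from, {t, x}* = θ, {t, p_t}* = 1, {x, p_x}* = 1 with all other fundamental brackets vanishing, is an assumption consistent with the 1+1-dimensional algebra of Section 4.1; the sign conventions are likewise assumed.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

# Assumed fundamental Dirac brackets (antisymmetric), consistent with
# {t, x}* = theta, {t, p_t}* = 1, {x, p_x}* = 1, all others zero.
table = {('t', 'x'): theta, ('t', 'p_t'): sp.Integer(1), ('x', 'p_x'): sp.Integer(1)}

def fundamental(a, b):
    if (a, b) in table:
        return table[(a, b)]
    if (b, a) in table:
        return -table[(b, a)]
    return sp.Integer(0)

def bracket(u, v):
    """Bracket of two linear combinations u, v given as {variable: coefficient}."""
    return sp.simplify(sum(cu * cv * fundamental(a, b)
                           for a, cu in u.items() for b, cv in v.items()))

# Darboux-shifted variables from the second set of solutions in Sec. 4.1.4
t_check = {'t': sp.Integer(1), 'p_x': theta / 2}    # t + (theta/2) p_x
x_check = {'x': sp.Integer(1), 'p_t': -theta / 2}   # x - (theta/2) p_t

print(bracket(t_check, x_check))                           # -> 0 (canonical pair restored)
print(bracket({'t': sp.Integer(1)}, {'x': sp.Integer(1)})) # -> theta (original deformation)
```

The same helper applied to the original pair (t, x) returns θ, so the deformation has indeed been traded for a modified Hamiltonian rather than a modified algebra, which is the point made above.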
Except for some differences such as the ones mentioned above, the analysis of the Dirac and path integral quantizations relative to these bases, as well as others resulting from considering canonical transformations of their respective associated actions followed by Darboux maps, is qualitatively similar (see [14]) to what we have already done, so for the sake of brevity we shall omit the details here. Rather, and in preparation for a future investigation of how our analysis of space-time noncommutativity in the discrete realm of quantum mechanics can be extended to the continuum of relativistic field theory, we turn next to the case of a relativistic particle.

4.2. Space-time Noncommutativity for a Relativistic Particle.

Our starting point is the action (4.51) for the free relativistic particle, where now the first class primary constraint ϕ is given by

ϕ = p² + m² ≈ 0.   (4.52)

As discussed above in Sec. 3, for an arbitrary symplectic structure the action (4.51) takes the form (4.53). Again, arising from the definition of the momenta, we have the primary constraints

χ_a = p_{z^a} − A_a(z).   (4.54)

These constraints are second class, and the corresponding Dirac brackets are identical in form to those in the non-relativistic case, given by Eq. (3.8). Let us consider now a symplectic structure which is determined by the Dirac brackets (4.56) involving the space-time and momentum variables. Other admissible bases compatible with the Heisenberg algebra (4.60) are obtained from (4.59) by a canonical transformation generated by F = p_α x^α, for α fixed. These sets of admissible bases are {|x^α, p_β, p_γ, p_λ⟩; α ≠ β ≠ γ ≠ λ}. Referred to them, the Dirac subsidiary condition results in (4.66), where the indices are not summed over. So, even though the deformation parameter θ does not appear in these constraint equations, the space-time noncommutativity is reflected in their violation of Lorentz invariance. On the other hand, canonically transforming (4.59) with F = p_α x^α, where now we sum over α, we get, after regrouping terms, an action for which it is natural to define as fixed end-point variables the new set of coordinates given by (4.68). The Dirac bracket between these new coordinates vanishes and, in consequence, so does their commutator. Note, however, that (4.68) is a Darboux map and not a canonical transformation of the action (4.59). Consequently this is a different Dirac quantization, related to the canonical symplectic form and not to the original one given by (4.56). The Dirac supplementary condition in this case is the Klein-Gordon equation. So, quantizing the theory in this way, we obtain that a relativistic particle satisfies the Klein-Gordon equation, and we thus arrive at the well known result that for a free particle we do not obtain any deformation of the theory. However, if we consider that the particle lives in a given background, we will get the deformation produced by the new choice of coordinates. To further illustrate this point, consider the interaction of the relativistic particle with a constant external field. Here the constraint will be of the form ϕ = Π² + m² ≈ 0, with Π_β the kinetic momenta in the external field. Using the shifted x^α coordinates, which will have the same form as in (4.68) except for the substitution p_β → Π_β, the Dirac supplementary condition in the corresponding basis takes a form which indeed shows corrections containing the deformation parameter θ.
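For orientation, a covariant analogue of the nonrelativistic shift used in Section 4.1.4 can be written down; the explicit form below is an assumption offered for illustration (it need not coincide with Eq. (4.68)), and it presumes brackets {x^α, x^β}* = θ^{αβ} and {x^α, p_β}* = δ^α_β with constant antisymmetric θ^{αβ}.

```latex
% Assumed covariant Darboux-type shift (illustrative only):
\check{x}^{\alpha} = x^{\alpha} + \tfrac{1}{2}\,\theta^{\alpha\beta} p_{\beta},
\qquad
\{\check{x}^{\alpha},\check{x}^{\beta}\}^{*}
  = \theta^{\alpha\beta} - \tfrac{1}{2}\,\theta^{\alpha\beta} - \tfrac{1}{2}\,\theta^{\alpha\beta}
  = 0 .
```

Replacing p_β by the kinetic momenta Π_β, as done above for the constant external field, then feeds the deformation into the interaction terms of the supplementary condition.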
Concluding remarks

We have seen that, according to the Dirac quantization scheme for constrained systems, it is the first class constraints and the symplectic structure resulting from the Dirac brackets that uniquely define a particular quantum theory, irrespective of the fact that there are many possible solutions for the potentials A_a corresponding to the same symplectic structure ω. On the other hand, if we use these solutions as the starting point for evaluating the action in the path integral formulation, then depending on the type of solutions that we propose for the equations (3.6), we could get different quantizations. We have seen moreover that, if there is a linear canonical transformation relating these actions, as is the case for the actions S_1, S_2 and S_3 considered in subsections 4.1.1-4.1.3, then the corresponding quantizations are actually equivalent to each other and differ only by the fact that they are referred to the three admissible bases compatible with the extended Heisenberg algebra (4.2). Indeed, the phases of the quantum mechanical transition functions corresponding to changes between these bases (cf. e.g. Eq. (4.20)) are nothing other than the classical generating functions of the linear canonical transformations among the three actions, and the associated symplectic transformation leaving invariant their common symplectic structure ω is, for each of these three cases, the identity element of the group. Alternatively, for the type of solutions to (3.6) leading to the actions considered in subsection 4.1.4, the situation is actually quite different, because there is no generating function that permits such actions to be canonically transformed to the ones previously considered, and because at the classical level fixing the end-points of these actions involves a change of variables in extended phase-space which results in a Darboux map from the original symplectic structure to the canonical one given by (3.10). Quantizing in these cases via either the Dirac or path integral formalisms is then tantamount to applying standard quantum mechanics with a Hamiltonian modified with the new variables, which are formally promoted to the rank of operators satisfying the commutation relations (4.42). But in axiomatic quantum mechanics the operators acting on vectors in Hilbert space are observables, i.e. operator functions of the basic dynamical variables of the theory, with eigenvalues given by quantities measurable by experiment. For the systems we have been considering and the construction followed in subsection 4.1.4, this would imply that the new time and coordinate variables are the observables of the theory and, since they obey the commutation relations (4.42), the new time and coordinate operators commute. Physically this would then mean that experiments could be designed to measure simultaneously the eigenvalues of these space-time operators. This, however, begs the question of what, then, is the true physical interpretation of the θ parameter that appears in the modified quantum expressions of the theory, such as the Hamiltonian. We could try to further argue that both the old and new space-time operators are observables and that θ reflects the noncommutativity of the old observables. This, however, brings in a somewhat Bohmian flavor of hidden variables to the new quantization which is, to say the least, subject to questioning (for additional arguments regarding this issue see [16]).
Thus, from our point of view, it would seem preferable to conclude that in the case of the quantizations discussed in subsection 4.1.4, the term "space-time noncommutativity" is a misnomer. Nonetheless, since the different quantizations discussed here lead to different (at least conceptually) experimental predictions, it is experiment that will then determine which, if any, of these theories is more closely related to reality. The same can be said regarding the different cases discussed in Section 4.2 for the relativistic particle. Of course, it could also be contended that the use of the Dirac and path integral quantizations, which have been so successful in extending classical mechanics and field theory to a certain range of the quantum realm, is not justified a priori when dealing with distances of the order of the Planck length, where quantum gravity becomes relevant. This could very well be so, and it may involve having to drop the very concept of manifold, which underlies the mathematics of all of our present day physical constructions, in favor of new geometrical paradigms in which quantization is built in ab initio, such as the noncommutative geometry proposed by Connes [17] a few years ago. Be that as it may, we believe that the analysis presented here, the more axiomatic one presented in [13] and references therein, as well as many other related works that have appeared in the literature, could provide some guidance for further work in that ultimate direction.
The willows (Salix – Salicaceae) in Tasmania

Introduction

The genus Salix (willows) has a widespread native distribution concentrated mainly in temperate to sub-arctic regions of the Northern Hemisphere, but scattered in Argentina, Chile, South Africa and Madagascar. Some species are native in tropical areas of central America, Africa and south-east Asia. The genus is not native in Australasia. Recent estimates suggest that Salix contains 300–520 species (Cronquist 1988; Newsholme 1992; Argus 1997; Mabberley 1997; Fang et al. 1999). Together with the genus Populus L., it is in the family Salicaceae Mirb. Sometimes a third genus, Chosenia Nakai, is also recognised, although this is often subsumed within Salix. The taxonomy of the Salicaceae, discussing generic limitations, is given by Skvortsov (1999) and Ohashi (2001). The Tasmanian taxa are in two subgenera: subgenus Salix (tree willows) and subgenus Vetrix Dumort. (shrub willows). Many species of Salix are used for timber production, basket making, soil stabilisation, windbreaks, fodder, medicine and as cultivated ornamentals. These practical values have significantly extended the distribution of the genus. In Tasmania, several taxa of Salix have been widely planted and can be found as ornamentals in parks, gardens and open spaces such as roadside corridors. In rural areas, they are grown commonly for windbreaks and hedgerows, and on the banks of watercourses for consolidation. Several taxa are naturalised throughout the state whereas others have the potential to become naturalised. It is the principal aim of this paper to discuss the names that have been used in the past and to offer correct, or at least consistent, names for the plants that occur in Tasmania.
Since then, many willows have been introduced into Tasmania, sometimes as many as 100 taxa at a time. Two such introductions occurred in 1878 and 1880 (Anon. 1879; 1881), when large consignments were sent from Kew Gardens in Britain to the Royal Society Garden in Tasmania (now the Royal Tasmanian Botanical Gardens): '...Plants and seeds received at the Botanic Gardens:- From the Royal Gardens, Kew, 100 varieties willow, most of which are alive...' (Anon. 1879). '...From the Royal Gardens, Kew, was received a box containing upwards of 100 varieties of willow cuttings; but, unfortunately, arriving in the heat of summer all efforts to retain viability in them proved futile, and, with the exception of two varieties, all perished...' (Anon. 1881). In more recent times, introductions of Salix to Tasmania include the importation of hybrid willows (S. matsudana × S. alba), developed in New Zealand for soil stabilisation and as windbreak species, and released to Australia in the 1970s (Cremer et al. 1995). Although records, like the ones mentioned above, suggest that a large number of taxa have been introduced, only a relatively small number are commonly encountered in cultivation within the state. Several Salix taxa are serious weeds across southeastern Australia, including Tasmania. These problem species infest stream banks and wetlands. Their presence impacts negatively upon stream hydrology, displaces and modifies native biodiversity, and causes chemical imbalances in the waterways (ARMCANZ 2001). There are two groups of naturalised willow in Tasmania. The first contains taxa that spread and become naturalised via vegetative reproduction. Of this group, S. fragilis var. fragilis is an extreme example. This species infests vast lengths of waterways in Tasmania and on mainland Australia. Its stems are very brittle, especially at the junctions, and break away from the parent plant when disturbed, for example in windy weather. These liberated stem fragments readily root and, if they come to rest in a suitable environment such as a stream bank, they will establish into mature plants. Detached stems of willows, in general, readily produce roots in moist conditions. Hence, other taxa with less brittle stems can also spread in this way. However, their spread is usually facilitated by cultivation rather than natural means. The second group contains taxa that spread and become naturalised via sexual reproduction. Generally only a single sex of each taxon is present in Tasmania (exceptions include S. cinerea, S. matsudana × S. alba hybrids and S. ×sepulcralis nothovar. chrysocoma). For example, all S. fragilis var. fragilis plants in Tasmania are male. This restricts its reproduction to vegetative methods. The taxa that are present as both male and female plants can produce viable seed when growing together. In addition to this, willows can hybridise, allowing compatible taxa of opposite sex to breed. The seed that results from sexual reproduction has a tuft of fine, silky hairs enabling it to be transported by wind as well as on water currents. If transported to suitable habitats, such as wetlands and stream banks, the seeds can germinate and grow into mature plants. The ability of these plants to produce wind-borne propagules and be highly invasive in wet habitats makes them a significant weed threat in Tasmania. There is potential for some willows to reproduce and spread via both sexual and vegetative means. For example, S.
fragilis var.fragilis, as mentioned above is highly adapted to spreading by fallen stems, it is also thought to have hybridised with S. matsudana 'Tortuosa' . Prior to 1999, willows could be freely traded, propagated and planted throughout the state.The weedy nature of certain willow species, coupled with the large number of species that could possibly be introduced to Tasmania with unknown potential weed impacts, has seen the preventative regulation of the genus Salix, via the Tasmanian Weed Management Act 1999.Under this Act, all willows except for three taxa (S. babylonica, S. ×calodendron and S. ×reichardtii) are declared species in Tasmania.Management and control of the declared taxa depends heavily on an understanding of their taxonomy, biology, and ecology.In Tasmania, this information has been largely anecdotal and inconsistent, and it is the aim of this paper to clarify these details. Published accounts of Salix relevant to Tasmania are Curtis (1967), Rodd (1982), Carr (1996) and Jacobs and Murray (2000), as well as the current Census of Vascular Plants of Tasmania (Buchanan 2007) -see Table 1.Curtis (1967) is strictly about Salix in Tasmania, but is based on very few Herbarium records.Rodd (1982), the Flora of Australia account, is also based on very few Tasmanian records.Jacobs and Murray (2000) and Carr (1996) are accounts for New South Wales and Victoria respectively and do not discuss any Tasmanian distributions in detail. Materials and Methods This paper includes treatments of Tasmanian taxa that commonly grow in habitats such as the banks of watercourses, lake/dam shores and other permanently or seasonally wet areas.At many locations, it is not clear whether the willows have been intentionally planted, and therefore are not naturalised, or whether they have arisen at the location and naturalised without human aid.In other instances, the plants may have been originally planted and subsequently spread so that a mix of cultivated and naturalised plants is present. Descriptions of some taxa that are present in Tasmania as cultivated plants are also included on the basis that they are naturalised in other parts of the world (in particular south-eastern Australia and New Zealand) and have the potential to become naturalised in Tasmania.The naturalised status of treated species is based upon field observations made by the author and, when available, from notes accompanying herbarium specimens. The definition of 'naturalised' follows Pysek et al. (2004).It refers to 'alien plants that sustain selfreplacing populations for at least 10 years without direct intervention by people (or in spite of human intervention) by recruitment from seed or ramets (tillers, tubers, bulbs, fragments, etc.) capable of independent growth' . The study is based mainly on recently collected specimens held in the Tasmanian Herbarium (HO), The willows (Salix -Salicaceae) in Tasmania material from other Australian Herbaria (CANB, MEL, NSW), exchange material from overseas herbaria, and comments on selected Tasmanian collections by British Salicologist Desmond Meikle.In the cases where Tasmanian material lacked morphological features the descriptions were supplemented using information from Meikle (1984). 
Names that have been previously misapplied in the Tasmanian literature are listed where relevant.Whereas some of these are the result of misidentification of specimens that have been redetermined, others are literature references that appear to have not been based on critical examination of Tasmanian specimens. Tasmanian distributions follow the floristic regions proposed by Orchard (1988).Geographical origins of the plants treated have been determined from various published sources. The 'first record' indicates the earliest herbarium voucher of a particular taxon.In the case of naturalised taxa it is not always apparent, from the herbarium vouchers, that the specimens were taken from a cultivated or naturalised plant.It may also be speculated that the plants were naturalised well before the date of first collection.First records for taxa known only from cultivation are included for completeness and are not necessarily a good indication of the time the plant was introduced to Tasmania.To accurately determine the date of introduction of each of the taxa is beyond the scope of this paper. The genus Salix spans a wide range of forms from low-growing, mat-forming shrubs through to large, wide-spreading trees.Leaves are simple, stipulate and petiolate, usually deciduous and alternate, although opposite to sub-opposite leaves occur in S. purpurea.Flowers occur in the axils of bracts and are gathered together in dense spikes or racemes commonly referred to as catkins.Willows are usually dioecious but, in some taxa, including S. ×sepulcralis nothovar.chrysocoma and one of the S. matsudana × S. alba clones, the catkins often include both male and female flowers.The flowers have a greatly reduced perianth consisting of 1-2 nectariferous glands.Staminate flowers consist of one to many stamens (usually two in Tasmanian taxa), with filaments generally free.Pistillate flowers consist of a unilocular superior ovary with the ovaries either sessile or stipitate, each with 2-4, usually bilobed stigmas.The fruit is a 2-4-valved capsule that contains numerous seeds.Each seed has a tuft of fine silky hairs attached at its base. Characters that distinguish Salix from other Tasmanian plants include the combination of the following characters: deciduous habit (Salix humboltiana commonly retains its leaves throughout winter), sympodial growth (plants lack a terminal bud), buds with a single outer scale, flowers borne in catkins and a reduced perianth which consists of 1-2 nectaries. Leaf characteristics given in this paper are based on mature leaves (material collected in summer and autumn before leaf fall) taken from exposed branches.Foliage from shaded areas of a plant, or that of strong regrowth, is generally of larger dimensions than material collected from unshaded areas. Both mature leaf material and flowering material may be required to correctly identify some specimens.Mature leaf material (present in summer) and flowering material (present in early spring) occur at different times of the year, thus requiring two collections from the same plant. Plant habit can be very diagnostic in the identification of willows.The following characters should be noted in the field when sampling: number of stems at or near ground level, size and shape of crown, and branch orientation. Salix fragilis L. var. fragilis, Sp. pl. edn 2. 1: 562 (1762) Previously misapplied names: S. alba × S. fragilis (Curtis 1967), S. alba, non-pure S. fragilis (Rodd 1982), S. 
×rubens (Carr 1996; Jacobs & Murray 2000; Buchanan 2007).

Female catkins not seen in Tasmania, but with similar general proportions to male catkins; ovary sub-sessile, pale green, glabrous. Seed not seen.

Discussion: For a comprehensive treatment of S. fragilis var. fragilis, see Meikle (1984). This species is not treated in Curtis (1967). However, I believe that the hybrid cross, S. alba × S. fragilis (S. ×rubens), that is described by Curtis (1967) as '…frequent along river banks and in wet places…' was misapplied to S. fragilis var. fragilis. Salix fragilis was described in Rodd (1982) but it is stated that no Australian specimens appear to be 'pure' S. fragilis. In the Tasmanian context, I believe that pure S. fragilis is very common. Furthermore, Rodd (1982) cites in his description of S. alba a Tasmanian specimen (Hayes, W.M. Curtis s.n., HO) that I believe is S. fragilis var. fragilis. Subsequent Australian authors (Carr 1996; Jacobs & Murray 2000; Buchanan 2007), in my opinion, at least in part, attribute S. ×rubens to Tasmania on the basis of the above references and this has perpetuated the misapplication of that name.

Some nomenclatural controversy surrounds S. fragilis. Continental European Salicologists regard genuine S. fragilis as the wholly glabrous plant with pale clay-coloured twigs and red shoots, which British authors name S. decipiens Hoffm. or S. fragilis var. decipiens (Hoffm.) Koch; they regard what British Salicologists call S. fragilis as the hybrid S. alba × S. fragilis, otherwise known as S. ×rubens (Ib Christensen & Jonsell 2005; Meikle pers. comm. 2005). The concept of S. fragilis accepted in the present account is that of Meikle (1984).

Female catkin and flower characters have been taken from Meikle (1984). In Tasmania, only male S. fragilis var. fragilis plants are recorded. Carr (1996) and Jacobs and Murray (2000) list female characteristics but state that males are most common in both New South Wales and Victoria. Female plants are considered rare even in areas where S. fragilis var. fragilis is native (Meikle 1984).

Several authors (including Jacobs & Murray 2000) separate S. fragilis var. fragilis from other taxa by the fragile nature of the plant, commenting that the stems break with an 'audible crack'. Whilst the stems of S. fragilis var. fragilis are very brittle, this characteristic is variable and can be present in other tree willows. This species can be distinguished from other large upright tree species such as S. alba var. vitellina and S. ×rubens by having brown to olive-brown stems as opposed to orange-yellow to reddish stems in S. alba var. vitellina and S. ×rubens. In both S. ×rubens and S. fragilis var. fragilis, the rootlets immersed in watercourses are red. This species has stems that readily break from the parent plant. The leaves of S. fragilis var. fragilis become completely glabrous whereas the other two taxa usually retain at least some fine appressed hairs. Red-brown leaf galls caused by Pontania proxima (Lepeletier), an exotic willow sawfly, are common on Tasmanian plants. This sawfly is not recorded from mainland Australia (Naumann et al. 2002; Finlay & Adair 2006). The occurrence of tortuose and semi-tortuose saplings, that appear to have arisen from seed, at a site in Huonville, Tasmania (Baker 1711, HO), suggests that this species can hybridise with S. matsudana 'Tortuosa'; this has also been observed in specimens from Kyneton, Victoria (Baker 1637, HO). In Australia and New Zealand, this species has been recorded as hybridising with S. alba var. vitellina and S. babylonica (Sykes 1988; Jacobs & Murray 2000). Hybrids between S. fragilis and S. alba are discussed in detail under S. ×rubens. Some S. fragilis plants at Launceston (Riverside) consist of male plants with flowers regularly bearing up to four stamens; these plants may be of a different origin to other material in Tasmania. In Victoria, S. fragilis var. furcata Gaudin, a distinctive variety with bifurcate catkins, is abundantly naturalised at one location in South Gippsland (Carr 1996).

Distribution and habitat: Salix fragilis is thought to be native in southern Europe, and has been recorded throughout western Europe from Norway to Spain and Portugal (Meikle 1984). It is naturalised in Australia, New Zealand (Sykes 1988), China (Fang et al. 1999), North America (Argus 2007) and South America (Correa 1984). In New Zealand, it is widespread and abundant, and said to be the greatest nuisance of all willows (van Kraayenoord et al.
1995).In Australia, it is naturalised in South Australia, New South Wales, Victoria and Tasmania.In New South Wales, it is widespread and abundant (Jacobs & Murray 2000).In Victoria, it has a scattered but locally common distribution mainly through the eastern part of the state and near Geelong and Werribee (Carr 1996).Most Victorian collections are male and almost all populations have arisen vegetatively (Carr 1996).In Tasmania, Salix fragilis var.fragilis is the most widespread and abundant naturalised willow.It was originally introduced for ornament and as a stream bank stabiliser for de-vegetated watercourses (Cremer et al. 1995).It has thrived and spread by human and natural means and today dominates the banks of many watercourses, where it causes many negative impacts (Fig. 3).Vegetative spread is by easily detached stems dispersed on water currents.It is particularly common in the Midlands, North-West and East Coast regions and is also known from the Furneaux, North-East and West Coast regions.Discussion: For a comprehensive treatment of S. alba var.vitellina, see Meikle (1984).This taxon is often treated at different ranks, for example as the cultivar S. alba 'Vitellina' , as the subsp.S. alba subsp.vitellina (L.) Arcang.or as a distinct species S. vitellina L. In Tasmania, only female S. alba var.vitellina plants are recorded.This taxon is distinguished from other Tasmanian willows by its large upright habit.It is oftensemi pendulous towards the ends of the branches, and more so in the lower crown.Its stems are conspicuously orange-yellow, a character that is especially obvious in the autumn and winter months when the tree is without leaves. Salix alba var.vitellina is thought to have hybridised with S. fragilis var.fragilis at various sites throughout Tasmania, producing plants of intermediate characters that are referable to S. ×rubens.Hybrids have also been observed in New South Wales and Victoria, most likely with S. ×rubens and S. fragilis var.fragilis (Cremer 1995;Carr 1996).A closely related taxon, S. alba var.caerulea (Sm.)Sm. has been cultivated in Tasmania and is said to have produced viable seed (N. Parker pers. comm. 2005).Timber from S. alba var.caerulea is used to make cricket bats.A male clone with a distinctive large pyramidal crown is known to be cultivated in Campbell Town and Launceston and possibly at other locations in the state.The plant's identity is unknown, but its bright, orange-yellow stems suggest that it is a cultivar of S. alba var.vitellina or a hybrid with that taxon as one parent.Other varieties and cultivars are grown as ornamentals throughout the world and some may be cultivated in Tasmania. 
Distribution and habitat: Salix alba var.vitellina is thought to be an ancient selection, originating from central and southern Europe (Skvortsov 1999).It has been cultivated in Europe, possibly since Roman times (Bean 1980) and is now widespread in cultivation or as a relict of cultivation (Skvortsov 1999;Jonsell 2000).It is widely cultivated in the temperate regions of Australia, often on the banks of watercourses, lakes and ponds, and as a specimen tree in large parks and gardens.Carr (1996) and Jacobs and Murray (2000) regard it as naturalised in South Australia, New South Wales, Victoria and Tasmania.On the mainland of Australia, naturalised populations have resulted mainly from vegetative reproduction, producing frequent small populations (Carr 1996).The relatively non-brittle stems have restricted its spread in this manner, as compared to the very brittle stems of S. fragilis var.fragilis.In Tasmania, S. alba var.vitellina is one of the most commonly cultivated willows.It is frequently grown as a specimen tree in parks and gardens and on the banks of watercourses and water bodies in rural and urban areas throughout the state.Although Australian authors (Carr 1996;Jacobs & Murray 2000) consider this taxon naturalised in Tasmania, my opinion is that it is not seen in naturalised populations, and that the vast majority of trees were deliberately planted.At one site several small plants were observed growing near a larger, seemingly planted individual (Geilston Bay, Baker 188, HO, MEL).The smaller plants may have arisen from vegetative spread.The greatest threat that this taxon poses is through the production of seed and the subsequent invasion of suitable habitats by seedling willows.This may happen in areas where it is sympatric with male willow trees such as S. fragilis var.fragilis.Discussion: For a comprehensive treatment of S. ×rubens see Meikle (1984).S. ×rubens is a hybrid, the parents being S. fragilis and S. alba.In Tasmania, S. ×rubens occurs as at least two different cultivated forms that are not known to be fully naturalised.These are S. ×rubens nothovar.basfordiana (Scaling ex Salter) Meikle f. basfordiana Meikle and S. ×rubens nothovar.basfordiana f. sanguinea Meikle.In addition to these cultivated plants, there is at least one occurrence of S. ×rubens that is the result of in situ hybridisation.Australian workers, as outlined in Table 1, regard this taxon to be naturalised in Tasmania.However, I believe that the name S. ×rubens has, in part, been incorrectly applied to the widespread weedy taxon that is treated here as S. fragilis var.fragilis (see discussion under S. fragilis var.fragilis). Salix ×rubens may be confused with S. alba var.vitellina and S. fragilis var.fragilis.Both S. ×rubens and S. fragilis var.fragilis have red rootlets in water.The leaves of S. ×rubens usually retain some hairs whereas S. fragilis var.fragilis quickly becomes completely glabrous.S. ×rubens is present as both sexes and one would expect to find mixed sex populations if they were formed from in situ hybridisation. Distribution and habitat: Salix ×rubens is common in central Europe as well as in western and temperate Russia and is often associated with non-native habitats (Skvortsov 1999).In New Zealand, S. ×rubens is common where S. alba var.vitellina and S. fragilis grow together.The plants may be of either sex and backcrossing occurs (Sykes 1988). 
In New South Wales, it is considered a very serious environmental weed, and is regenerating by seed (Jacobs & Murray 2000).In Victoria, it is a very widespread and common weed, along stream banks, particularly in the south of the state.In Tasmania, the cultivated forms of S. ×rubens are uncommon.At Barretta, in the East Coast region a small population (Baker 159, HO) of S. ×rubens nothovar.basfordiana f. basfordiana was recorded growing in a low lying wet area adjacent to a dam.All plants were female, suggesting the population had resulted from vegetative spread.Deliberate planting in this case could not be ruled out.Salix ×rubens hybrids that have formed in situ are also not common in Tasmania.Young plants thought to be of this hybrid cross have been observed at Westerway, in the East Coast region of the state (Baker 1535, HO).Salix ×rubens may be more widespread in Tasmania due to difficulties in distinguishing it from S. alba var.vitellina and S. fragilis var.fragilis.Previously misapplied names: Salix babylonica (Curtis 1967;Rodd 1982;Carr 1996;Jacobs & Murray 2000;Buchanan 2007).Common name: Weeping Willow Illustrations: Fig. 1C, 2C Trees to 20 m tall with a wide crown and pendulous branches.Bark deeply and coarsely fissured, grey.Stems glabrous, olive-brown to grey-brown.Bud scales sparsely hairy at first, becoming glabrous, brown.Stipules mostly narrow-ovate, glandular-serrate with glands also present on the adaxial surface, caducous.Leaves lanceolate, up to 135 mm long, 15-22 mm wide, very sparsely hairy when young, soon becoming glabrous; adaxial surface glossy green; abaxial surface glaucous; margin coarsely glandular-serrate; apex acuminate; petiole up to 12 mm long, with glands near the lamina junction.Catkins appearing with the leaves on leafy side-shoots.Male catkins not seen in Tasmania.Female catkins 15-32 mm long, 5-8 mm wide, spreading to slightly descending or ascending; bracts oblong to narrowly ovate, 1-2 mm long, pale yellow to yellow-green, sparsely hairy on margin; ovary 2-3.5 mm long, shortly pedicellate, pale green, glabrous or very sparsely hairy at the base.Seed < 1 mm long. Discussion: For a comprehensive treatment of S. ×pendulina var.pendulina, see Meikle (1984).Only female plants have been recorded in Tasmania.Both male and female plants have been recorded in New South Wales and Victoria (Carr 1996;Jacobs & Murray 2000).Salix ×pendulina is a hybrid, the parents being S. fragilis var.fragilis and S. babylonica.It has previously been referred to in Tasmania as S. babylonica (Curtis 1967;Rodd 1982;Carr 1996;Jacobs & Murray 2000;Buchanan 2007).However, the name Salix babylonica has been misapplied in Tasmania, and it would appear that all non-golden-stemmed weeping tree willows are S. ×pendulina in this state.Carr (1996) claims that many of the Australian specimens of S. babylonica are referable to S. ×pendulina and S. ×sepulcralis.The concept of S. ×pendulina var.pendulina accepted in this account is that of Meikle (1984).It is distinguished from the other, strongly-weeping tree willow, S. ×sepulcralis nothovar.chrysocoma, by its stems being olivebrown as opposed to golden-yellow, and by having relatively short catkins composed of only female flowers.Salix ×pendulina differs from S. babylonica by having distinctly pedunculate catkins that are usually greater than 20 mm long.One cultivated specimen with ovaries sparsely hairy at the base has been observed in Tasmania (Warrane, Baker 211, HO).These can be referred to as S. 
×pendulina var.eleganitissima C.Koch (Meikle 1984).The type variety has completely glabrous ovaries. Distribution and habitat: Although its origin is unclear, it is thought that S. ×pendulina is a garden hybrid that originated in Germany early in the 1800s (Meikle 1984).In New Zealand, it is naturalised throughout the country, especially in moist places near still or flowing water (Sykes 1988 as S. babylonica).In New Zealand literature, it is treated within S. babylonica, as plants are described with catkins up to 30 mm long (New Zealand material not seen).In Victoria, it is widely cultivated for ornament, and naturalised populations grow along streams; it is unknown whether naturalised populations have arisen by vegetative means or by in situ hybridisation (Carr 1996).It is also naturalised in New South Wales.Curtis (1967) remarks that this species is commonly planted and is more or less naturalised in Tasmania.Whilst it is commonly cultivated as roadside plantings, in parks and large gardens, and on the banks of watercourses and other water bodies, it has not been recorded as a naturalised species.All individuals appear to have been planted deliberately and are not actively spreading.It has the potential to breed with plants of other male willow taxa.Common name: Golden Weeping Willow Illustrations: Fig. 1E; 2D Trees up to 15(-18) m tall with a wide-spreading crown and pendulous branches.Bark deeply fissured, grey to brown.Stems golden-yellow, sparsely pubescent, soon becoming glabrous.Buds golden to greenish-yellow, pubescent, soon becoming glabrous.Stipules small and caducous or absent.Leaves lanceolate, 45-110 mm long, up to 20 mm wide, thinly pilose, becoming glabrous, adaxial surface glossy, abaxial surface glaucous; margin irregularly glandular-serrate; apex acuminate; petiole up to 8 mm long, yellow, often with 1 or more glands near the lamina junction.Catkins appearing with leaves on short, leafy side shoots, comprised of either solely male flowers, solely female flowers or bisexual, up to 50 mm long, 5-10 mm wide, spreading to erect, commonly curved; bracts lanceolate, c. 2 mm long, yellow, sparsely pubescent.Stamens 2, exceeding bracts in length; filaments hairy towards base.Ovary 2-3 mm long, green, glabrous, sessile to sub-sessile.Seed seen occasionally. Discussion: For a comprehensive treatment of S. ×sepulcralis nothovar.chrysocoma, see Meikle (1984).Plants are frequently bisexual, with male, female and bisexual catkins occurring on the same tree.The golden weeping willow is one of only two Tasmanian willows that have bisexual catkins (the other being one of the New Zealand-produced S. matsudana × S. alba clones).Salix ×sepulcralis nothovar.chrysocoma is a hybrid that combines the golden-yellow stems of S. alba var.vitellina with the strong weeping habit of S. babylonica, two characters that distinguish it from all other willows in Tasmania.Salix alba var.vitellina is somewhat weeping, but it is never as pendulous as S. ×sepulcralis nothovar.chrysocoma.It is said to be far more commonly cultivated in Europe than the true S. babylonica (Akeroyd 1993), a taxon thought to be either extremely rare or no longer grown in the British Isles (Meikle 1984) and not recorded correctly in Tasmania. 
Distribution and habitat: Salix ×sepulcralis nothovar.chrysocoma is a very commonly cultivated willow.Its origins are largely unknown, but it was first described from cultivated material grown in Europe (Meikle 1984).In New Zealand, it is widely planted in streambank habitats, grows wild with S. fragilis and S. alba, and is thought to hybridise with them (Sykes 1988, as S. ×chrysocoma).In New South Wales, it is widely planted and commonly naturalised (Jacobs & Murray 2000).In Victoria, it is widely planted for ornament and stream stabilisation and is commonly naturalised.Seed is set but the tree typically reproduces by vegetative means (Carr 1996).In Tasmania, it is grown as roadside plantings, in parks and large gardens, and is often planted on the banks of watercourses and water bodies in rural areas.It is rarely recorded as naturalised, most plants appear to have been planted deliberately.At one location (Emu River, Burnie Baker 248, HO), a small degree of vegetative spread, limited to a few plants, is evident but deliberate planting still cannot be ruled out.This taxon is capable of sexual reproduction, a population of small saplings were growing in a roadside drainage line in southern Tasmania (Lucaston, Baker 1745, HO).The plants had characteristic yellow stems of S. ×sepulcralis nothovar.chrysocoma.Both S. fragilis var.fragilis and S. ×sepulcralis nothovar.chrysocoma were in cultivation nearby.Salix ×sepulcralis nothovar.chrysocoma may have also been a parent of an infestation of plants which grew in sediment settling ponds at Launceston.Carr (1996) and Jacobs and Murray (2000).In Australia, it is known only as a female plant and bears very short and narrow catkins.Salix matsudana 'Tortuosa' can be immediately distinguished from other willows that grow in Tasmania by its strongly contorted branches, stems and leaves.Some authors believe that the species is synonymous with S. babylonica (Skvortsov 1999;Jonsell 2000).For consistency with other Australian workers (Carr 1996;Jacobs & Murray 2000), the name Salix matsudana 'Tortuosa' has been adopted.This plant is capable of producing viable seed and seedlings through hybridisation with plants of other willow taxa.For example, a small infestation of plants was recorded at Huonville (Baker 1711, HO), ranging in size from 3-50 cm tall, growing in a poorly drained, disturbed site of approximately 100 square metres in extent.The plants appeared to have arisen from seed because hand-pulled plants had well-developed tap roots.The plants displayed varying degrees of contortedness, indicating that S. matsudana 'Tortuosa' is one of the parents (Fig. 4).The second parent (pollen donor) was most likely S. fragilis var.fragilis.Both S. matsudana 'Tortuosa' and S. fragilis var.fragilis were growing together in a nearby garden.Such hybrid plants may be more common in Tasmania.Hybrids of this nature have been recorded at Kyneton, Victoria (Baker 1637, HO) and New Zealand (Sykes 1988). 
Distribution and habitat: The tortured willow was introduced as an ornamental plant from China to Europe in the early 1920s (Bean 1980).It is naturalised in New South Wales (Jacobs & Murray 2000) and in Victoria, where it is widely cultivated and has become sparingly naturalised by vegetative means (Carr 1996).Field observations indicate that whilst this taxon is commonly cultivated throughout Tasmania, it is hardly naturalised.Occasionally individuals or small groups of trees grow around municipal rubbish tips from dumped garden refuse (for example at Rosebery and Queenstown).However, it is never seen in large populations, nor is it seen infesting banks of waterways. Salix matsudana Koidz. × Salix alba L. Common names: New Zealand Hybrid Willow, Matsudana Hybrid Willow Illustrations: Fig. 1G; 2F Trees to 25 m tall.Branches upright, forming a narrowconical crown, sometimes semi-pendulous, especially in lower crown.Bark fissured, grey.Stems pubescent, soon becoming glabrous, grey-green to reddish brown, moderately brittle.Bud scales sericeous, soon becoming glabrous, brown.Stipules lanceolate, glandular serrate, caducous.Leaves lanceolate, 90-140 mm long, 10-17 mm wide, sericeous, soon becoming glabrous; adaxial surface glossy green, abaxial surface glaucous; margin glandular-serrulate, occasionally entire; apex acute to acuminate; petiole up to 10 mm long, without glands at the lamina junction.Catkins appearing with the leaves on leafy side-shoots, composed of either solely male flowers, solely female flowers or mixed male and female flowers.Male catkins up to 35 mm long, 7-9 mm wide, spreading to erect; bracts, lanceolate, up to 2.5 mm long, pale yellow, sparsely hairy at the base; stamens 2, exceeding the bracts, pilose near the base.Female catkins 15-30 mm long, 5-9 mm wide, spreading to erect; bracts ovate or lanceolate, 1.5-2 mm long, pale yellow, glabrous or sparsely hairy at the base; ovary shortly pedicellate, pale green, glabrous.Bisexual catkins ranging from either predominantly male through to predominantly female, 30-50 mm long; floral characters similar to those of male and female flowers.Seed c. 1 mm long. Discussion: This description includes characters from several different clones that were developed in New Zealand by breeding non-tortuose S. matsudana with S. alba.are at least three different clones in Tasmania, with at least one being female, one male and a bisexual clone named S. matsudana × S. alba 'Cannock' .The S. matsudana × S. alba clones were developed to produce willows suitable for soil conservation and river bank protection.Plants were selected for traits such as rapid establishment, resistance to diseases, extensive root development, narrow crown form and adaptability to extreme site conditions.Three clones were released in 1975 with a further six in 1980 (van Kraayenoord et al. 1995).The clones are: 'Cannock' (bisexual), 'Makara' (female), 'Te Awa' (female), 'Tangoio' (female), 'Hiwinui' (male), 'Adair' (male), 'Wairakei' (male), 'Moutere' (male) and 'Aokautere' (male) (Wilkinson et al. undated). This group of willows can be distinguished from other Tasmanian willows by their narrow upright crowns, usually consisting of a single main stem, a character especially obvious in younger specimens.The twigs are olive-green in colour.The branches are often semi-pendant, especially in the lower crown. 
Distribution and habitat: These willows are widely planted throughout south-eastern Australia and have started to spread by seed in some rivers in New South Wales and the Australian Capital Territory (Cremer 1995;Carr 1996;Jacobs & Murray 2000).One of the clones has become naturalised by vegetative means in Victoria (Carr 1996).In Tasmania, they are commonly grown in rural areas as windbreak/fence-line plantings and less often as specimen trees for ornament but no plants are known to be naturalised in this state.However, apparently viable seed has been observed in the fruit of the bisexual clone (N.Parker pers. comm. 2005).Planting female specimens is discouraged (Anon. 2007) and the male clones are said to readily hybridise with S. babylonica, S. matsudana and S. alba (Cremer 1995;Jacobs & Murray 2000;Cremer 2003).Previously misapplied names: S. atrocinerea Brot.(Curtis 1967), S. cinerea (Rodd 1982;Carr 1996;Jacobs & Murray 2000;Buchanan 2007).Common name: Pussy Willow Illustrations: Fig. 1H, 2G Multi-stemmed shrubs or small trees up to 12(-16) m tall.Branches erect, forming a crown that is usually taller than wide.Bark usually smooth, developing longitudinal fissures with age.Stems densely pubescent when young but becoming glabrous with age, grey when hairy but becoming reddish brown, olive or grey with age.Bud scales densely pubescent at first, soon glabrous, concolorous with stems, often reddish.Stipules auriculate, up to 5 mm long, irregularly serrate, caducous; serrations gland-tipped.Leaves mostly elliptic, sometimes obovate or oblong, 50-120 mm long, 25-55 mm wide; adaxial surface densely pubescent when young, becoming glabrous to sparsely hairy; abaxial surface dull green to glaucous, with hairs persistent; margin revolute, strongly and irregularly undulate-serrate, with teeth usually glandtipped; apex acute, often twisted obliquely; petiole 6-10(-14) mm long.Catkins appearing before the leaves.Male catkins 20-30(-50) mm long, 10-20 mm wide, erect; bracts broadly lanceolate to rhomboid, up to 3 mm long, black in upper 1/2-2/3, sericeous; stamens Discussion: For a discussion of S. ×reichardtii see Meikle (1984).Salix ×reichardtii is a hybrid, the parents being S. cinerea and S. caprea.It is unlikely that S. ×reichardtii appeared in Tasmania as the result of in situ hybridisation, and instead, a selection (or selections) would have been introduced into the state.This hybrid is not treated in Curtis (1967).However, I believe that the name S. atrocinerea, which is treated by Curtis (1967), was misapplied to S. ×reichardtii.Salix atrocinerea is a synonym of S. cinerea subsp.oleifolia, and whilst it may have been present in Tasmania at the time of Curtis's publication, it certainly was not represented by any specimens in the Tasmanian Herbarium.Unidentified specimens of S. ×reichardtii were, however, in the collection at that time.Rodd (1982) cites in his treatment of S. cinerea a Tasmanian specimen (Huon Road, near Longley, Curtis s.n., HO) that is here referred to S. ×reichardtii.Subsequent Australian authors (Carr 1996;Jacobs & Murray 2000;Buchanan 2007), in my opinion, have misapplied the name S. cinerea to Tasmania on the basis of the above references. Material upon which the above description is based was sent to British Salix specialist Desmond Meikle for his comment, and although he could find affinities with S. cinerea, S. caprea L. and S. ×reichardtii, he was 'regrettably puzzled' by the specimens (D.Meikle in litt.).In this treatment, the plant is referred to as S. 
×reichardtii, and conforms to the descriptions and naming of similar material that occurs in New South Wales, Victoria and New Zealand (New Zealand material not seen). Salix ×reichardtii can be difficult to distinguish from the other naturalised shrub willows in Tasmania.Its narrow and upright habit, in contrast to the rounded habit of S. cinerea, is usually a reliable field character.The leaves of S. ×reichardtii are commonly elliptic as opposed to obovate in S. cinerea.The margins of S. ×reichardtii leaves are markedly more irregular and undulate than those of S. ×calodendron.The rustcoloured indumentum on the leaves of S. cinerea subsp.oleifolia is not present on S. ×reichardtii leaves.Also, S. ×reichardtii is only known from male plants; female populations would indicate S. ×calodendron and mixed-sex populations would be the sexuallyreproducing S. cinerea or perhaps a backcross between S. cinerea and S. ×reichardtii.The wood under peeled bark of S. ×reichardtii, S. cinerea and S. ×calodendron is variously ridged, and is a good spotting character for all of these taxa. A complex series of hybrids between S. ×reichardtii and S. cinerea subsp.cinerea has been noted in one location in Victoria (Carr 1996).It is possible that this has also occurred in Tasmania, but this is supported only by anecdotal evidence.An ornamental weeping willow, marketed as S. caprea 'Weeping Sally' , is occasionally cultivated in Tasmania.The pendulous scion, although reported to be S. caprea, has-rust coloured indumentum on the leaves, indicating it has S. cinerea subsp.oleifolia parentage.It may be a hybrid between S. caprea and S. cinerea subsp.oleifolia, or a pendulous/prostrate form of the latter.It is female.The rootstock is S. ×reichardtii, in the sense described above.It is strongly upright and serves as robust stem to which the weeping scion is grafted.The rootstock is prone to shooting from the base, resulting in the production of flowering stems that are male.The flowering time of the scion and rootstock overlap, are in very close proximity and sexual reproduction results in the production of fertile seed. Distribution and habitat: Salix ×reichardtii is native throughout Europe where both of its parents co-occur.In Australia, it is commonly planted as an ornamental and for shelter belts in New South Wales, Victoria and Tasmania, and is naturalised throughout this area on the banks of watercourses, lake shores and in drainage lines (Carr 1996;Cremer 1995;Jacobs & Murray 2000).In Tasmania, S. ×reichardtii is widely cultivated as an ornamental species in parks and large gardens, and is commonly planted in rural areas as a windbreak species.It is occasionally naturalised in drainage lines in paddocks and roadsides and along watercourses.Dispersal in these situations is by re-sprouting fallen trees and branches, although it is sometimes difficult to determine whether spread has been by natural means or if it has been facilitated by planting.It may serve as a pollen donor if in close proximity to female plants of S. cinerea.Discussion: For a comprehensive treatment of S. cinerea subsp.cinerea, see Meikle (1984).The above description conforms with the application of this name in Victoria (Carr 1996).The misapplication of the name S. cinerea to S. ×reichardtii in Tasmania is discussed under the treatment of S. ×reichardtii.Salix cinerea subsp.cinerea can be difficult to distinguish from the other naturalised shrub willows in Tasmania.Its rounded habit, in contrast to the more narrow and upright habit of S. 
×reichardtii, is usually a reliable field character.However, the crown shape of S. cinerea subsp.cinerea varies from rounded when in open situations to more upright when growing in denser infestations or amongst closed vegetation.The leaves of S. cinerea are mostly obovate as opposed to the mainly elliptic leaves of S. ×reichardtii and S. ×calodendron.Salix cinerea subsp.cinerea differs from S. cinerea subsp.oleifolia by lacking rust-coloured indumentum on its leaves.Salix cinerea grows in mixed sex populations; populations of only male plants may indicate vegetative reproducing S. ×reichardtii.As with S. ×reichardtii and S. ×calodendron, the wood under peeled bark of, S. cinerea is variously ridged, and is a common spotting character for all of these taxa.The female catkins, ovaries and peduncle of S. cinerea subsp.cinerea lengthen significantly as they mature (catkins lengthen up to 85 mm long).The lengthening is not as prominent in S. cinerea subsp.oleifolia. Distribution and habitat: Salix cinerea subsp.cinerea is native throughout Europe and temperate Asia (Meikle 1984(Meikle , 1990) ) where it is one of a few willows that readily invades disturbed habitats such as ditches, forest edges and openings (Skvortsov 1999).It is naturalised in New Zealand (Sykes 1988) and Australia.In New Zealand, it is a widespread weed of wet habitats and is often the dominant species in swamps (Sykes 1988).In Australia, it is naturalised in South Australia, New South Wales, Victoria and Tasmania (Carr 1996;Jacobs & Murray 2000), along the banks of waterways and in seasonally or permanently wet areas such as drainage lines, lake and dam shores, and swamps and bogs. Naturalised populations are known to be in Tasmania and on the mainland as mixed-sex populations.The main method of regeneration and spread of S. cinerea subsp.cinerea is via the production of wind and water borne seed.In Tasmania, the species is only known from two locations: in and around Queenstown in the West Coast region and from the Longley-Kingston area, south of Hobart, in the East Coast region.Two plants have also been recorded growing in Hobart.In these areas, it is abundantly naturalised and widespread, and is the target of an eradication program.Naturalised populations require sufficiently moist habitats at the time the seed is released to successfully establish, and are consequently restricted to moist roadside cuttings, drainage lines, banks of watercourses and dams, and other permanently or seasonally wet areas.In some areas it has become the dominant species infesting large stretches of stream banks.Characters same as S. cinerea subsp.cinerea but leaves with an indumentum of translucent, uncoloured hairs and rust-coloured hairs, the latter sometimes only few and scattered.Catkins generally smaller in overall dimensions; male catkins 17-20 mm long; female catkins up to 35 mm long.Seed up to 2 mm long. Discussion: For a comprehensive treatment of S. cinerea subsp.oleifolia, see Meikle (1984).The misapplication of the name S. atrocinerea to S. ×reichardtii in Tasmania is discussed under the treatment of S. ×reichardtii.Both male and female plants are recorded in Tasmania.The presence of rust-coloured hairs on the leaf lamina allows this taxon to be differentiated from other willows that grow in Tasmania.The female catkins do not lengthen as significantly as they mature as they do in S. cinerea subsp.cinerea. 
Distribution and habitat: Salix cinerea subsp.oleifolia is native to Europe where it occurs in Britain, Ireland, western France, Spain and Portugal (Meikle 1984).This subspecies is recognised as naturalised in Victoria (Carr 1996;Walsh & Stajsic 2007) where it grows in similar habitats to the type subspecies (Carr 1996).In Tasmania, it is confined to the North-West region.There is a large naturalised population, the target of an eradication program, between Penguin and Ulverstone, occupying similar habitats to those of S. cinerea subsp.cinerea.It has also been recorded on the banks of the River Leven near Gunns Plains, and near the township of Edith Creek.It is likely to be more widespread than records suggest.Multi-stemmed shrubs or trees, 12-15 m tall.Branches erect to spreading, forming a narrow or rounded crown.Bark rough, with scattered longitudinal fissures, grey.Stems densely pilose, silky to touch, ash-coloured; wood under peeled bark prominently ridged.Bud scales densely pubescent, gray-brown.Stipules auriculate, large and prominent, especially on new growth, 8-15 mm long, 5-7 mm wide.Leaves elliptic to obovate, up to 140 mm long and up to 50 mm wide, pubescent; adaxial surface glossy at first, becoming dull, with a thin covering of persistent hairs; abaxial surface grey, with a dense covering of persistent hairs; margin irregular, indistinctly glandular-serrate; apex acute; petiole stout, up to 10-15 mm long, densely pubescent.Catkins appearing before the leaves.Male catkins not known.Female catkins 35-75 mm long, 8-10 mm wide, erect to sub-erect, sub-sessile to shortly stalked; bracts ovaterhomboid, up to 2.5 mm long, black in distal 2/3, pilose; ovary up to 3 mm long, shortly pedicellate, densely white-pubescent.Seed not produced. Discussion: For a comprehensive treatment of S. ×calodendron, see Meikle (1984).Salix ×calodendron is thought to be a tri-hybrid willow, the parents being S. caprea, S. cinerea and S. viminalis L. It is most likely that it was introduced into Tasmania as a horticultural selection.Salix ×calodendron can be difficult to distinguish from the other naturalised shrub willows that occur in Tasmania.The leaves of S. ×calodendron and S. ×reichardtii are commonly elliptic as opposed to obovate in S. cinerea.The margins of S. ×calodendron leaves are not as markedly irregular and undulate as are those of S. × reichardtii.Salix ×calodendron only occurs as female plants; wholly male populations would indicate S. ×reichardtii and mixed-sex populations would be a sign of the sexually-reproducing S. cinerea.The wood under peeled bark of S. ×calodendron, S. cinerea and S. ×reichardtii is variously ridged, and is a good spotting character for all of these taxa. 
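The distinguishing characters given above for the naturalised shrub willows (sex of the plants in a population, leaf shape, rust-coloured indumentum and habit) can be read as a simple decision procedure. The sketch below is purely illustrative and is not part of the original treatment: the character states are taken from the preceding descriptions, the function name and taxon labels are the author's of this sketch, and reliable determination still requires examination of specimens, ideally with both mature leaf and catkin material.

# Illustrative sketch only: a simplified key to the naturalised shrub willows of
# Tasmania, based on the field characters discussed above.

def key_shrub_willow(population_sex, leaf_shape, rusty_hairs_below, habit):
    """Return a tentative name for a Tasmanian shrub willow.

    population_sex    -- 'male only', 'female only' or 'mixed'
    leaf_shape        -- 'elliptic' or 'obovate' (predominant shape of mature leaves)
    rusty_hairs_below -- True if rust-coloured hairs occur on the lower leaf surface
    habit             -- 'narrow upright' or 'rounded'
    """
    # Mixed-sex populations indicate the sexually reproducing S. cinerea; the two
    # subspecies are separated on the rust-coloured indumentum.
    if population_sex == 'mixed':
        return ('S. cinerea subsp. oleifolia' if rusty_hairs_below
                else 'S. cinerea subsp. cinerea')

    # Only female plants of S. ×calodendron are known in Tasmania.
    if population_sex == 'female only':
        return 'S. ×calodendron (confirm with catkin and leaf characters)'

    # Only male plants of S. ×reichardtii are known; its narrow upright habit and
    # elliptic leaves with strongly irregular, undulate margins support the call.
    if population_sex == 'male only' and habit == 'narrow upright' and leaf_shape == 'elliptic':
        return 'S. ×reichardtii'

    return 'indeterminate - collect mature leaf and catkin material'


if __name__ == '__main__':
    print(key_shrub_willow('mixed', 'obovate', False, 'rounded'))             # S. cinerea subsp. cinerea
    print(key_shrub_willow('male only', 'elliptic', False, 'narrow upright')) # S. ×reichardtii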
Distribution and habitat: Salix ×calodendron is presumed to be native to the British Isles where it has a scattered distribution.It is cultivated elsewhere in Europe and occasionally escapes from plantings (Akeroyd 1993).In New Zealand, it is widely cultivated and naturalised along the banks of streams and in swamps near original plantings (Sykes 1988).In Australia, it is occasionally grown for ornament and for stabilisation of stream banks in the south-eastern states.It is considered naturalised but uncommon in New South Wales (Jacobs & Murray 2000) and is possibly naturalised in Victoria (Carr 1996).In Tasmania, this willow is known from only two isolated locations.It was first recorded at Longley (Baker 1771, HO), in the East Coast region, where several large shrubs/ small trees are in cultivation in a permanently wet area, presumably to 'soak' up excess moisture.No obvious active regeneration by vegetative means was observed at this location.The second population grows on derelict land adjacent to the Queen River at Queenstown (Baker 1728, HO), in the West Coast region.The plants are confined to a small area and appear to have been planted, perhaps with some minor spread via vegetative means.It is considered to be sterile (Meikle 1984) and should not pose a weed threat through the production of seed.12. Salix purpurea L., Sp.pl.2: 1017 (1753) Common name: Purple Osier Illustration: Fig. 1L Multi-stemmed shrubs to 6(-8) m tall.Branches spreading to erect, forming a rounded bush.Bark smooth, pale grey.Stems glabrous, whitish-grey.Bud scales glabrous, dark purple-brown.Stipules small, caducous or absent (not seen in Tasmanian material).Leaves linear, linear-lanceolate or oblanceolate, 25-85(-100) mm long, 5-30 mm wide, pubescent when young, soon becoming glabrous; adaxial surface green, sublustrous; abaxial surface somewhat paler than adaxial surface; mid-rib prominent, yellow; margin entire to sparsely minutely toothed; apex acute; petiole up to 7 mm long but usually quite short, without glands at the lamina junction.Catkins appearing before the leaves.Male catkins not seen in Tasmania but with similar general proportions to female catkins (see Meikle 1984); stamens 2, filaments and anthers joined, exceeding bracts in length.Female catkins 10-20(-30) mm long, 3-4(-7) mm wide, erect to sub-erect, curved; peduncle 2-3 mm long; bracts ovate, up to 1 mm long, dark brown to black, pilose; ovary 1-1.5(-3) mm long, sessile with a dense covering of short, whitish hairs.Seed not seen. Discussion: For a comprehensive treatment of S. purpurea, see Meikle (1984).In Tasmania, only female S. purpurea plants are recorded.This species is distinguished from other Tasmanian willows by the combination of its shrubby habit and long, narrow leaves.Other distinctive characteristics include the leaves, stems and catkins often being borne in opposite to sub-opposite pairs.The bark of S. purpurea, when peeled away from the wood, is yellow underneath and is very bitter to taste.The stems of this species are tough and flexible and are used for basket making.Distribution and habitat: Salix purpurea has a widespread native distribution throughout Europe, western Asia and northern Africa, growing in wet habitats such as river margins.The species is naturalised in Australia, New Zealand, Canada and the United States of America.In Australia, it is naturalised in New South Wales and Victoria and is represented by several cultivars of both sexes (Cremer 1995;Cremer et al. 
1995). There it was planted for erosion control on the banks of rivers and roadside batters (Cremer et al. 1995; Carr 1996). In New Zealand, it was introduced for soil stabilisation, as well as for basket making (Sykes 1988). In Tasmania, it has been planted occasionally for stream bank stabilisation and for ornament. It is not known whether this species is naturalised in Tasmania or if all plants have been planted. For example, at the Oldina Forest Reserve in the North-West region of the state, approximately 400 m of creek line is dominated by S. purpurea. It was originally planted at this site but it is not known how much of the current population was planted. Monitoring would be required to determine if the species is spreading at this and other sites.

Discussion: For a comprehensive description, see Rodriguez et al. (1983); flower and catkin characters given above have been taken from this source. This taxon is immediately distinguished from other willows by its very tall and narrow crown. In Tasmania, catkins have never been observed, and plants commonly retain their leaves throughout winter when most other willows are leafless. According to Dorn (1976) and Rodriguez et al. (1983), the name S. chilensis Molina, used by various authors (including Meikle 1990; Carr 1996; Spencer 1997) and encountered in the nursery industry, has been wrongly applied to this taxon.

Distribution and habitat: Salix humboldtiana is native to Central and South America where it grows from Mexico through to Chile. The fastigiate form originates from the Copiapó province, northern Chile (Rodriguez et al. 1983). In Australia, S. humboldtiana 'Pyramidalis' is a common garden plant, especially in coastal areas of Queensland, New South Wales and Victoria (Carr 1996; Spencer 1997). It is naturalised to a very limited extent in Queensland and New South Wales (Jacobs & Murray 2000; Bostock & Holland 2007). In Tasmania, this taxon is commonly cultivated and has never been recorded outside of cultivation.

Figure 3. Salix fragilis var. fragilis growing on the floodplains and banks of the South Esk River at Longford.

Figure 4. Saplings of S. matsudana 'Tortuosa' × S. fragilis var. fragilis at a disturbed site in Huonville. Inset: Close-up of a single sapling showing contorted stem and leaves.

Table 1. Summary of Tasmanian Salix taxa treated in previous works.
Catkins appearing with the leaves on leafy side-shoots. Male catkins not seen in Tasmania. Female catkins 60-75 mm long, up to 8 mm wide, spreading; peduncle up to 8 mm long; bracts lanceolate, up to 4 mm long, pale yellow, hairy, especially on proximal half, caducous; ovary up to 4 mm long, shortly pedicellate, pale green, glabrous. Seed not seen.
Comparison of the Causes and Clinical Features of Drug Rash With Eosinophilia and Systemic Symptoms and Stevens-Johnson Syndrome

Purpose: Drug rash with eosinophilia and systemic symptoms (DRESS) and the Stevens-Johnson syndrome (SJS) are both severe drug reactions. Their pathogenesis and clinical features differ. This study compared the causes and clinical features of SJS and DRESS.

Methods: We enrolled 31 patients who were diagnosed with DRESS (number=11) and SJS (number=20). We retrospectively compared the clinical and laboratory data of patients with the two disorders.

Results: In both syndromes, the most common prodromal symptoms were itching, fever, and malaise. The liver was commonly involved in DRESS. The mucosal membranes of the oral cavity and eyes were often affected in SJS. The most common causative agents in both diseases were antibiotics (DRESS 4/11 (37%), SJS 8/20 (40%)), followed by anticonvulsants (DRESS 3/11 (27%), SJS 7/20 (35%)). In addition, dapsone, allopurinol, clopidogrel, sulfasalazine and non-steroidal anti-inflammatory drugs (NSAIDs) were sporadic causes.

Conclusions: The most common causes of DRESS and SJS were antibiotics, followed by anticonvulsants, NSAIDs and sulfonamides. The increase in the use of antibiotics in Korea might explain this finding.

INTRODUCTION

The drug rash with eosinophilia and systemic symptoms (DRESS) syndrome, previously referred to as the 'drug hypersensitivity syndrome', is an adverse drug reaction characterized by skin rash, fever, lymph-node enlargement and internal organ involvement.1 The definition of DRESS is flawed in that it does not characterize the nature of the cutaneous rash. Aromatic anticonvulsants (phenytoin, phenobarbital, carbamazepine) and sulfonamides are the most common causes of DRESS.2 The differential diagnosis includes Stevens-Johnson syndrome (SJS), which is a rare, life-threatening, cutaneous adverse reaction. SJS is characterized by targetoid cutaneous lesions affecting less than 10% of the body surface area. There is mucous membrane involvement in approximately 90% of the affected patients.3 The risk factors for SJS include infection, vaccination, drugs, systemic diseases, physical agents and food. SJS has been associated with more than 100 drugs based on case reports and studies.4 DRESS and SJS are similar in their clinical manifestations.

METHODS

The diagnosis of DRESS was based on the existence of associated systemic involvement and the presence of eosinophilia (>500/mm3 or >10%).5,6 The diagnosis of SJS was based on severe skin lesions (mucous membrane erosions, target lesions and epidermal necrosis with skin detachment) affecting less than 10% of the total body surface, regardless of the laboratory findings. The diagnosis was confirmed by a histopathological analysis of focal tissue (vacuolization of basal layer keratinocytes associated with lymphocytes).5,7,8 DRESS and SJS are differentiated by the nature of the skin lesions and extent of body surface area involvement over other criteria. Considering an appropriate differential diagnosis in patients with mucocutaneous erosions can help prevent misdiagnoses associated with cutaneous adverse drug reactions.

Statistics

Statistical analysis was performed using SPSS version 17.0 for Windows (SPSS, Chicago, IL, USA). Data are expressed as the mean and standard error of the mean. Fisher's exact test and the Mann-Whitney U-test were used for categorical and continuous variables, respectively. Personal characteristics and disease-related factors were compared between the disorders. The association of laboratory tests with the two disorders was adjusted for the 95% confidence interval as the effect measure. A P value of less than 0.05 was considered statistically significant.
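The analyses described above were run in SPSS; the following sketch shows how the same two tests could be reproduced in Python with scipy. It is illustrative only: the 2x2 contingency table uses the sex distribution reported under Patient characteristics below, while the age values are invented placeholders (only medians are reported in the study), so the printed results are not the study's results.

# Illustrative sketch only: Fisher's exact test for a categorical variable and the
# Mann-Whitney U-test for a continuous variable, as named in the Statistics section.
from scipy.stats import fisher_exact, mannwhitneyu

# 2x2 contingency table: rows = diagnosis (DRESS, SJS), columns = sex (men, women),
# taken from the patient characteristics reported below.
table = [[4, 7],
         [12, 8]]
odds_ratio, p_categorical = fisher_exact(table)

# Hypothetical ages (years) for the two groups; the Mann-Whitney U-test compares the
# distributions without assuming normality.
ages_dress = [34, 45, 51, 52, 55, 60, 63]
ages_sjs = [41, 48, 56, 58, 59, 61, 66, 70]
u_stat, p_continuous = mannwhitneyu(ages_dress, ages_sjs, alternative='two-sided')

# As in the text, a P value of less than 0.05 would be taken as statistically significant.
print(f"Fisher's exact test: OR={odds_ratio:.2f}, P={p_categorical:.3f}")
print(f"Mann-Whitney U-test: U={u_stat:.1f}, P={p_continuous:.3f}")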
RESULTS

Patient characteristics

The study included 31 adults hospitalized with a diagnosis of DRESS or SJS. Eleven patients (4 men, 7 women; median age 51.5 years) had DRESS and 20 patients (12 men, 8 women; median age 58.5 years) had SJS. There were no significant differences in the baseline values between the two disorders (Table 1).

Clinical findings

For DRESS syndrome, patients had prodromal symptoms of itching, fever and facial edema. Patients with SJS commonly had prodromal symptoms of fever and malaise. The first skin lesions appeared on the extremities and face in DRESS and on the trunk in SJS (Table 2). The trunk lesions in SJS were tender. The mucosal membranes of the oral cavity and eyes were commonly affected in SJS. The genitalia (2/19, 10.5%) and anus (2/19, 10.5%) were also affected (Table 3A).

The results of laboratory tests

The liver was involved in DRESS, and all patients with DRESS had elevated liver enzymes. The kidneys (2/11, 18.2%) were also involved (Table 3B). In differentiating between DRESS and SJS, the skin lesions are considered first. Two patients with SJS had eosinophilia (>500/mm3 or >10%) and 17 had elevated liver enzymes (ALT or AST >40 U/L).

Causative drugs

The most common causative drugs were antibiotics, which were identified as the cause in 4 of 11 patients (36%) with DRESS and 8 of 20 patients (40%) with SJS. From most to least common, the antibiotics involved most frequently were vancomycin, ceftriaxone, rifampin, ciprofloxacin and Bactrim. Anticonvulsants were the second most common drugs involved; 3 (27%) cases of DRESS and 7 (35%) of SJS were associated with anticonvulsants. From most to least common, the anticonvulsants involved most frequently were lithium, carbamazepine, lamotrigine and oxcarbazepine. The remaining drugs involved in both disorders were herbal medicines and sporadic causes such as dapsone, allopurinol, clopidogrel, sulfasalazine and NSAIDs.

Treatment and outcome

Patients with DRESS with internal organ involvement were treated with corticosteroids (7 patients, 64%) and dramatic improvement was observed. Most patients recovered in the absence of specific treatment after eliminating the causative drug. Most patients with SJS were successfully treated conservatively or with corticosteroids. Six patients (30%) were treated with antibiotics and one patient relapsed and died.

DISCUSSION

This study compared the causes and clinical features of DRESS and SJS. DRESS is a major cause of hospitalization for dermatological complications in patients treated with anticonvulsants.10 The clinical manifestations typically occur within 2 to 6 weeks after initiating drug therapy and most cases resolve without sequelae when the drug is discontinued. The outcome is fatal in 5-10% of cases. The most common clinical presentation of DRESS includes fever, eruption and lymphadenopathy. The most common hematological abnormalities are eosinophilia, leukocytosis and lymphocytosis. Liver involvement in patients with DRESS may range from a transitory increase in liver enzymes to liver necrosis with fulminant hepatic failure. Only transitory hepatitis was observed in our study, and the patients recovered without sequelae. Other potentially fatal complications are hypersensitivity myocarditis, pericarditis, pneumonitis and nephritis.6,11,12
Although there are a variety of etiologies, such as infections and underlying malignancies, drugs remain the predominant cause of SJS. The most commonly implicated drugs are anticonvulsants, sulfa derivatives, NSAIDs, penicillins, cephalosporins and allopurinol. 1,9 The characteristic skin lesions seen in SJS are diffuse erythematous macules with purpuric, necrotic centers and overlying blistering. These cutaneous lesions often demonstrate a positive Nikolsky sign, which is further detachment of the epidermis with slight lateral pressure. Painful erosions of the mucous membranes are common and may affect the lips, oral cavity, conjunctiva, nasal cavity, urethra, vagina, gastrointestinal tract and respiratory tract during the course of the illness. The mucosal membranes most often affected in our study were the oral cavity and eyes. SJS is fatal in 5-15% of cases. 3,8 Both the incidence of the condition and the associated mortality appear to be increased in immunocompromised patients. DRESS and SJS are part of a spectrum of adverse cutaneous drug reactions. However, the pathophysiology of DRESS and SJS has not been elucidated fully. Various theories have been proposed, including both immunological and non-immunological mechanisms. 3,13 The current pathophysiological explanation for DRESS is immunological: drugs with reactive metabolites can modify cellular proteins and target an autoimmune response against the skin or liver cells. 11 It now appears that the immunological mechanisms associated with SJS are initiated by the Fas antigen, a cell surface molecule that can mediate apoptosis. 3 Laboratory testing can help to identify internal organ involvement, which may not be evident clinically. A skin biopsy may help to confirm the diagnosis, but is usually not specific. The suspected drug should be discontinued immediately when these two syndromes are being considered. 1,8,12 Delaying this measure may be associated with a poor outcome. For patients with extensive mucocutaneous involvement, prompt referral to a burn unit is recommended. Corticosteroids remain the agents most widely used for treating DRESS, although the doses used vary widely across case reports. 8 Favorable results have been reported with their use. In the absence of a well-established therapy, primary and secondary prevention have key roles in the management of these two syndromes. 1,8 Milder cases of SJS can be managed in an inpatient setting using the same fundamental therapeutic protocol used for the treatment of burns. The use of medications to treat SJS remains controversial; in particular, treatment with corticosteroids, while effective in most other acute inflammatory disorders, is of debated benefit here. The most important diagnostic clues are the mucosal lesions and eosinophilia, characteristic of both syndromes. Age and gender were not helpful in differentiating SJS from DRESS. Elevated liver enzymes suggest the DRESS syndrome, which is well known to affect internal organs. As in a previous study, the liver was commonly involved in our DRESS patients. However, some earlier studies reported that up to 75% of the patients with SJS had elevated liver enzymes, similar to patients with DRESS. 14 Hepatitis was observed in 85% of the patients with SJS in our study. Anticonvulsants were once the most common cause although, more recently, antibiotics have been reported to be the most common cause of both disorders. 6,8,9 The increase in the use of antibiotics in Korea might explain this finding.
After prompt withdrawal of the offending drug, conservative treatment or combination corticosteroid treatment has been used to treat patients with both syndromes, and most patients have improved clinical symptoms and laboratory findings with this approach. In the absence of specific treatment, the elimination of the causative drug and proper symptomatic treatment are the best management approaches for both SJS and DRESS.
An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data Topology optimization is widely used by engineers during the initial product development process to get a first possible geometry design. The state-of-the-art is the iterative calculation, which requires both time and computational power. Some newly developed methods use artificial intelligence to accelerate the topology optimization. These require conventionally pre-optimized data and therefore are dependent on the quality and number of available data. This paper proposes an AI-assisted design method for topology optimization, which does not require pre-optimized data. The designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling (the volume percentage filled by material) as input data. In the training phase, geometries generated on the basis of random input data are evaluated with respect to given criteria. The results of those evaluations flow into an objective function which is minimized by adapting the predictor's parameters. After the training is completed, the presented AI-assisted design procedure supplies geometries which are similar to the ones generated by conventional topology optimizers, but requires a small fraction of the computational effort required by those algorithms. We anticipate our paper to be a starting point for AI-based methods that requires data, that is hard to compute or not available. Introduction In Topology Optimization (TO), the material distribution over a given design domain is optimized by minimizing a certain objective function while fulfilling specified restrictions [1].In most cases, the optimization problem is solved in a mathematical way by means of a suitable search algorithm. The present contribution deals with the solution of TO problems by means of Artificial Intelligence (AI) techniques.State-of-the-art research in this area require optimal structures on a data basis obtained by conventional TO.For this reason, they are subject to several limitations which affect those techniques, such as large computational effort and problematic handling of multi-modal formulations.The approach proposed here aims at removing those drawbacks by generating all the artificial knowledge required for the optimization during the learning phase, with no need of relying on pre-optimized results. Topology Optimization In this work, only the case of mono-material topology optimization will be considered.The material of which the structure is to be build is a constant of the problem and geometry remains as unknown. In the case of stiffness optimization, a scalar measure of structural compliance is typically chosen as the function to be minimized.In addition, the condition that a given quantity of material is used over the design domain must be fulfilled.This material quantity is expressed as fraction of the maximum possible amount of material (degree of filling).Minimization of compliance results in maximizing the stiffness.The available design domain, the static and kinematic boundary conditions for the regarded load cases as well as strength thresholds are typically considered as restrictions. This paper will also focus on stiffness optimization, although the presented method is of general validity and could be applied to optimization with different objective functions or restrictions. 
There are numerous possible approaches to TO [1].In the "Solid Isotropic Material with Penalization" (SIMP) approach according to Bendsøe [2] the design domain is divided into elements.For each of those elements the contribution to the overall stiffness of the structure is scaled with a factor to be determined. The SIMP approach is able to provide optimized geometries for many practical cases by means of an iterative process.Each iteration involves computationally intensive operations: the most critical ones are assembling the stiffness matrix and solving the system's equation.When restrictions are involved, such as stress restrictions, the complexity of the optimization problem increases [3,4]. Artificial Neural Networks Artificial Neural Networks (ANNs) belong to the area of Machine Learning (ML), which, in turn, are assigned to AI. ANNs are able to learn and execute complex procedures, which has led to remarkable results in recent years.For example, ANNs are able to recognize the objects shown on pictures by their shape and color or beat world champions in the board game "GO" [5,6]. The development of ANNs is progressing steadily, on the one hand due to the continuously better available computing power and on the other hand due to the discovery of new possibilities to improve the learning process. ANNs or, more precisely, feedforward neural networks consist of layers connected in sequence.These layers contain so-called neurons [7].A neuron (see Figure 1) is the basic element of an ANN.The combination of all layers is also called a network. The neuron receives n inputs (here given as vector z), which are linearly combined and added to a bias value b and passed as argument to an activation function fa fn(z) = fa(w T z + b). (1) The coefficients of the linear combination, collected in the vector w, are called weights. It is usual that several neurons have the same input.All neurons with the same inputs are grouped together in one layer (also called fully connected layer ).Since one single output is supplied by each neuron, each layer with m neurons also produces m outputs and the weights become matrices W ∈ R n×m .The outputs of a layer (except the last layer) serve as inputs for the following layer.The first layer is called input layer f (1) and the last layer is called output layer f (n L ) .Any layer whose input and output values are not accessible to the user is called hidden layer f (2,...,n L −1) .Each layer, for example the layer f (2) , has its own weights W (2) and biases b (2) . The number of layers nL is also named depth of the network, which also originate the attribute "deep" in the term Deep Learning (DL).The term DL is generally used for ANNs with several hidden layers.The presence of several layers makes it possible to map a more complex transfer behavior between the input and the output layer. The functional relationship realized by the ANN depends on the weights W and on the biases b, which are adjusted in the context of socalled training or learning according to certain algorithms (learning algorithms).The learning algorithm used consists in the gradient-based minimization of a scalar value termed error or loss, which is obtained from the deviations of the actual outputs from given target outputs.The values which describe the network's architecture and do not undergo any change during training, like the number of neurons in a layer, are termed hyperparameters. 
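Because Eq. (1) and the layer structure underpin everything that follows, a minimal NumPy sketch of a single neuron and a fully connected layer may help. The PReLU activation anticipates the predictor described later; all shapes and values are illustrative.

```python
import numpy as np

def parametric_relu(z, alpha=0.25):
    # PReLU-style activation (alpha is a trainable slope in the paper's predictor)
    return np.where(z < 0, alpha * z, z)

def neuron(z, w, b, activation=parametric_relu):
    # Eq. (1): f_n(z) = f_a(w^T z + b)
    return activation(w @ z + b)

def dense_layer(z, W, b, activation=parametric_relu):
    # m neurons sharing the same n inputs: W has shape (n, m), b has shape (m,)
    return activation(W.T @ z + b)

rng = np.random.default_rng(0)
z = rng.normal(size=4)                              # n = 4 inputs
W, b = rng.normal(size=(4, 3)), rng.normal(size=3)  # m = 3 neurons in one layer
print(dense_layer(z, W, b))                         # 3 outputs, fed to the next layer
```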
In addition to the fully connected layers there are convolutional layers.These layers use convolution in place of the linear combination (1).Here the trainable weights, also called convolution kernel, are convoluted with the input of the layer and produce an output which is passed to the next layer.This process is efficient for grid-like data structures and is therefore used for many modern image applications [8]. The output of an ANN is henceforth referred to as prediction.Further details on the learning of an ANN can be found in the specific literature, for example [8,9]. ANN-based Topology Optimization DL-based TO, by predicting the geometry through ANNs, aims to deliver optimized results in only a fraction of the time required by conventional optimization, by moving the computationally intensive part to the training algorithm, which is executed only once.The results provided by the trained ANNs can then be used directly, refined with conventional methods or adapted to the desired structure size. There are already some attempts in this area.The majority used many thousands topology-optimized geometries as training datasets for the ANN [10,11,12,13,14]. In the case of [12] 80,000 optimized datasets based on the 88 lines of code (top88) [15] were used for the training of the neural network. Banga et al. used an approach in which intermediate results of conventional TO are the basis for the training datasets for the ANN [16]. Nie et al. used a Generative Adversarial Network (GAN) [17] and some physical fields over the initial domain, like strain energy density or von Mises Stress, to predict the geometries.The model was trained with 49.078 conventionally topology-optimized datasets [18]. Yamasaki et al. [19] and Cang et al. [20] achieved great results using data-driven approaches.In the case of Cang two cases were trained and tested.In the first case the direction of a single load could be is changed.The second case had variable directions as well as the positions of the load.Yamasaki's ANN needed to be trained for each new boundary condition.Only the volume fraction is variable. A different approach was presented by Chandrasekhar and Suresh, where the ANN does not generate the whole geometry but only a density value at given x and y coordinates [21]. Although such ANN topology optimization procedures are able to perform the above-mentioned task of a fast and direct generation of optimized geometries, the predictions undergo some restrictions. Since topology-optimized training data are used, and the generation of these data with conventional methods is very time-consuming, the number of training data sets which can be considered is limited.In the case of [10] 100,000 data sets were generated and this took about 200h.Another 8h were needed for training.This limitation affects the accuracy in the prediction of unknown geometries (i.e.geometries which were not used within the training) negatively.For example in the paper [10] about 3.4 % of the generated geometries are not connected (with theoretically infinite compliance) and are therefore not usable. This paper investigates the possibility, which differs from the state of the art, to train an ANN without the use of topology-optimized data sets.The generation of training data sets and the training itself are merged in one single procedural step. 
This makes it possible to process a much larger amount of data sets for the training in a much shorter time.And since the compliance is calculated during the training the ANN learns to avoid undesirable results. The state-of-the-art procedures require the use of large number of optimized data sets.These data sets must be optimal to be suitable as training data.Depending on the optimization formulation, this may be not the case, as local minima and convergence problems may occur.A method which doesn't use optimized data sets is not subject to these restrictions. Method The presented method is based on an ANN architecture called Predictor-Evaluator-Network (PEN), which was developed by the authors for this purpose.The predictor is the trainable part of the PEN and its task is to generate-based on input data sets-optimized geometries. As mentioned, unlike the state-of-the-art methods mentioned above, no pre-optimized topology-optimized data sets are used in the training.The geometries used for the training are created by the predictor itself on the basis of randomly generated input data sets and evaluated by the remaining components of the PEN, called evaluators. The evaluators perform mathematical operations.Other than the predictor, the operations performed by the evaluators are pre-defined and do not change during the training. Each evaluator assesses the outputs of the predictor with respect to a certain criterion and returns a corresponding scalar value as measure of the criterion's fulfillment.This fulfillment is the loss or the error of this evaluator.A scalar function of the evaluator outputs (objective function J, see section 2.7) combines the individual losses. During the training the objective function computed for a set of geometries (batch) is minimized by changing the predictor's trainable parameters, see section 2.2.In this way, the predictor learns how to produce optimized geometries. The predictor, the individual evaluators, their tasks and their way of operation are explained in detail in the following sections. Basic Definitions In topology optimization, the design domain is typically subdivided in elements by appropriate meshing.In Figure 2, elements (with one element hatched) and nodes are visualized.In this work we examined only square meshes with equal number of rows and columns.Although this method can be used for non-square and three-dimensional geometries. Element The total number of elements in the 2d-case is where dy is the number of rows and dx the number of columns (see Figure 2).In the square case the number of rows and columns are equal dx = dy = d.The d 2 design variables xi {i = 1, . . ., d 2 }, termed density values, scale the contributions of the single elements to the stiffness matrix.The density has the value one when the stiffness contribution of the element is fully preserved and zero when it disappears. 
The density values are collected in a vector x.In general the density values xi are defined in the interval [0, 1].In order to prevent possible singularities of the stiffness matrix, a lower limit value xmin for the entries of x is set [2]: The vector of design variables x can be transformed to a square matrix XM of order d by using the R 2d operator: Although a binary selection of the density is desired (discrete TO, material present/not present), values between zero and one are permitted for algorithmic reasons (continuous TO).To get closer to the desired binary selection of densities the so-called penalization can be used in the calculation of the compliance.The penalization is realized by an element-wise exponentiation of the densities by the penalization exponent p > 1 [22]. The arithmetic mean of all xi defines the degree of filling of the geometry The target value Mtar is the degree of filling that is to be achieved by the predictor. The kinematic boundary conditions are stored in two (dinp + 1) × (dinp + 1) boolean matrices R k,x and R k,y .In Figure 3, which shows an overview of how the boundary condition are handled, as well as in following figures, the green arrows represent the kinematic boundary conditions and the red ones the static boundary conditions.The entries of R k,x are set to one if the x-component of the displacement in the corresponding node is fixed, and zero otherwise.Analogously, the entries of R k,y are set according to the fixed y-components of the displacements.Both matrices can be transformed into vectors with the R 1d operator, which is the inverse of operator R 2d , and then arranged in sequence so that the vector r k ∈ R (2dinp+1) 2 is created.Analogously to the kinematic boundary conditions, two (dinp + 1) × (dinp + 1) matrices Rs,x and Rs,y are firstly built on the basis of the static boundary conditions (visualized by red arrows).The x-and the y-components of the applied forces are placed, respectively, into the matrices Investigations showed that the training speed could be increased, for high-resolution geometries, by dividing the training into levels with increasing resolution.Since smaller geometries are trained several orders of magnitude faster and the knowledge gained is also used for higher resolution geometries, the overall training time is reduces compared to the training that uses only high-resolution geometries.The levels are labeled with the integer number Λ. Increasing Λ by 1 results in doubling the number d of row or columns of the design domain's mesh.This is done by quartering the elements of the previous level.In this way, the nodes of the previous level are kept in the new level.The number of row or columns at the first level is denoted as dinp. The input data of the predictor includes the kinematic r k and static rs boundary conditions as well as the target degree of filling Mtar.The output of the predictor is a geometry x.Input data can be only defined at the initial level and do not change while the level is changed.Hence, new nodes cannot be subject to static or kinematic boundary conditions (see Figure 4).The change of level occurs after a certain condition, which will be described later, is fulfilled. 
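As a concrete illustration of the bookkeeping defined above (density vector, the R2d/R1d reshaping operators, degree of filling and the Boolean boundary-condition matrices), here is a small NumPy sketch. The values of d, x_min and M_tar, and the choice of fixing the left edge, are illustrative assumptions consistent with the training setup described later.

```python
import numpy as np

d = 8                       # rows = columns at the first level (d_inp = 8 in the paper)
x_min = 1e-3                # assumed lower bound preventing a singular stiffness matrix
rng = np.random.default_rng(1)

# Density vector x with one entry per element, limited to [x_min, 1]
x = np.clip(rng.random(d * d), x_min, 1.0)

# R2d operator: vector -> square matrix of order d; R1d is the inverse reshape
X_M = x.reshape(d, d)
x_back = X_M.reshape(-1)

# Degree of filling (Eq. (6)) and its deviation from the target used by the evaluator
M = x.mean()
M_tar = 0.4                 # illustrative target degree of filling
delta_M = abs(M - M_tar)

# Boolean kinematic boundary conditions on the (d+1) x (d+1) node grid; here the
# whole left edge is fixed in x and y, as in the paper's training data generation
R_kx = np.zeros((d + 1, d + 1), dtype=bool)
R_ky = np.zeros((d + 1, d + 1), dtype=bool)
R_kx[:, 0] = True
R_ky[:, 0] = True
r_k = np.concatenate([R_kx.reshape(-1), R_ky.reshape(-1)])   # length 2*(d+1)**2

print(M, delta_M, r_k.shape)
```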
Predictor The predictor is in charge of creating an optimized geometry for a given input data set.Its ANN-architecture consist of multiple hidden layers, convolutional layers and an output layer with d 2 neurons (see Figure 6).As activation function fa(z) in the hidden and convolutional layers the Parametric Rectified Linear Unit (PReLU) function is being used [23].The PReLU function is the equivalent of the Rectified Linear Unit (ReLU) function [23] fReLU(z) = max(0, z) = 0 if z < 0, z otherwise, (7) with the difference of a variable negative slope α, which can be adapted during training: The sigmoid function is well suited as activation function for the output layer because it provides results in the interval (0, 1), see Figure 5.This makes the predictor's output directly suitable to describe the density values of the geometry.All parameters that can be changed during training, like the bias, the slope of the PReLU as well as the weights of the hidden layers will be generally referred to as trainable parameters in the following.They are collected in the matrix Wp.The operations performed by the predictor can be represented by a function fp: Here the data flow through the predictor as well as output layers for different level Λ can be seen.An input data set (top left) is processed by several successive hidden layers and then passed on to some Residual Network (ResNet)-blocks.In order to reduce the resolution to a lower Λ, average pooling is used. In Figure 6 the hidden block is the combination of a hidden or fully connected layer and an activation function call.The ResNet-block is the combination of two (convolutional) layers and a shortcut that is added as a bypass to the output of the layers.The ResNet-block allows for faster learning but also reduces the error [24]. For subsequent levels, the outputs of the last convolutional block of the previous layer and the outputs of the last hidden block of the first level are added together and then, after an additional convolutional block, converted to the desired output dimension. Evaluator: Compliance The task of the compliance evaluator is the computation of the global mean compliance.For this purpose, an al-gorithm based on Finite Element Method (FEM) [22] is used.The global mean compliance is defined according to [22] as with K as the stiffness matrix, f as the force vector and u as the displacement vector.The compliance has the dimension of energy.As usual in literature [22,15] in the following the units will be omitted for the sake of simplicity. As already explained, the static boundary conditions vector rs consists first of x-entries and then y-entries.Since the degrees of freedom of the stiffness matrix are arranged in alternate way (one x-entry and one y-entry), the force vector is to be built accordingly.In order to transform the static boundary condition vector rs into the force vector f , the number of nodes and a collocation matrix where i = {0, . . ., l − 1}, iR 2(j−l)+1,j = 1 where j = {l, . . ., 2l − 1}, iR = 0 otherwise (14) are required.The force vector is then obtained as follows: The system's equations write The stiffness matrix K depends linearly on the geometry x and is expressed by where the matrices Ki are the unscaled contributions of the single elements to the stiffness matrix.The penalization exponent p achieves the desired focusing of the geometry towards the limits values xmin and 1 as described in section 2.1. 
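The compliance evaluator amounts to a penalized assembly of K followed by the reduced solve detailed in the next paragraph. The sketch below shows that chain for a generic system; the element matrices K_i, the force vector, the fixed degrees of freedom and the penalization exponent p = 3 are assumptions standing in for the FEM discretization, which is not reproduced here, and a dense solver is used only for clarity.

```python
import numpy as np

def compliance(x, K_elems, f, fixed_dofs, p=3.0):
    """Global mean compliance c = f^T u for a penalized (SIMP-style) stiffness matrix.

    x          : element densities in [x_min, 1]
    K_elems    : unscaled element contributions K_i, here assumed to be already
                 expanded to global (n_dof x n_dof) size for brevity
    f          : global force vector built from the static boundary conditions
    fixed_dofs : DOF indices removed according to the kinematic boundary conditions
    """
    n_dof = f.size
    K = np.zeros((n_dof, n_dof))
    for x_i, K_i in zip(x, K_elems):
        K += (x_i ** p) * K_i                      # penalized scaling of each element
    free = np.setdiff1d(np.arange(n_dof), fixed_dofs)
    K_red = K[np.ix_(free, free)]                  # reduced stiffness matrix
    f_red = f[free]                                # reduced force vector
    u_red = np.linalg.solve(K_red, f_red)          # reduced displacements (dense solve for clarity)
    return f_red @ u_red                           # c_red = f_red^T u_red = u_red^T K_red u_red
```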
The stiffness matrix K is then reduced by removing the columns and rows corresponding to the fixed degrees of freedom according to the kinematic boundary conditions. The result is the reduced stiffness matrix K_red, which can then be inverted. The reduced force vector f_red is determined according to the same principle. From the reduced equation the reduced displacement vector is obtained as

u_red = K_red^(-1) f_red.

The reduced global mean compliance c_red is finally computed as follows:

c_red = u_red^T K_red u_red = f_red^T u_red. (20)

The calculation of the mean global compliance c according to (12) or c_red according to (20) leads to the same result, since u at the fixed degrees of freedom vanishes and therefore has no effect on c.

Evaluator: Degree of filling
The task of this evaluator is to determine the deviation of the degree of filling M_is, see (6), from the target value: ΔM = |M_is − M_tar|. By considering the filling degree's deviation ΔM in the objective function, the predictor is penalized proportionally to the extent of the deviation from the target degree of filling M_tar.

Evaluator: Filter
The filter evaluator searches for checkerboard patterns in the geometry and outputs a scalar value F ∈ [0, 1] that points to the amount and extent of checkerboard patterns detected. These checkerboard patterns consist of alternating high and low density values of the geometry. They are undesirable because they do not reflect the optimal material distribution and are difficult to transfer to real parts. These checkerboard patterns exist due to bad numerical modelling [25]. Several solutions for the checkerboard problem were developed in the framework of conventional topology optimization [26]. In this work, a new strategy was chosen, which allows for inclusion of the checkerboard filter into the quality function. In the present approach, checkerboard patterns are admitted, but detected and penalized accordingly. Since the type of implementation is fundamentally different, it is not possible to compare the conventional filter method with the filter evaluator. A two-dimensional (discrete) convolution of the geometry matrix X_M with the convolution matrix V, visualized in Figure 7, is performed. A first indicator v̄ can then be computed as the mean value of the convolution result over the (d − 2)² valid positions. This indicator would already be sufficient to exclude geometries with checkerboard patterns, but it also penalizes good geometries without recognizable checkerboard patterns. Therefore, an improved indicator is formed on the basis of the mean value and with the help of the e-function, which is less sensitive to small mean values but nevertheless results in a corresponding penalization for large checkerboard patterns. The parameter F_k controls the shape of the F-function (see Figure 8).
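The kernel V and the exact exponential mapping are only shown in Figures 7 and 8, so the sketch below illustrates the idea rather than the paper's exact filter: an assumed zero-sum 3 × 3 kernel that responds to alternating densities, the mean-value indicator over the (d − 2)² valid positions, and an exp-shaped penalty whose steepness plays the role of F_k.

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed checkerboard-sensitive kernel (the paper's V is only shown in Fig. 7).
# It is zero-sum, so constant regions give no response while alternating
# high/low densities give a large one.
V = 0.25 * np.array([[ 1, -2,  1],
                     [-2,  4, -2],
                     [ 1, -2,  1]], dtype=float)

def filter_loss(X_M, F_k=5.0):
    conv = convolve2d(X_M, V, mode="valid")    # (d-2) x (d-2) response
    v_mean = np.abs(conv).mean()               # first indicator: mean response
    # Assumed exp-shaped mapping: insensitive for small v_mean, approaches 1 otherwise
    return 1.0 - np.exp(-F_k * v_mean ** 2)

d = 8
checker = (np.indices((d, d)).sum(axis=0) % 2).astype(float)  # worst case: pure checkerboard
smooth = np.full((d, d), 0.4)                                  # no pattern
print(filter_loss(checker), filter_loss(smooth))               # ~1.0 and 0.0
```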
Evaluator: Uncertainty When calculating the density values of the geometry x, the predictor should, as far as possible, focus on the limit values xmin and 1 and penalize intermediate values.The deviation from this goal is expressed by the uncertainty evaluator with the scalar variable P (uncertainty).This value increases if the predicted geometry deviates significantly from the limit values and thus penalizes the predictor.The uncertainty evaluator uses the normal distribution function with σ 2 as the variance and µ as the expected value.The expected value is set to 1 2 , at which P should have the maximum.In order for P to be normalized (with x = 1 2 the function should have the value 1), the normal distribution function fg(x) is multiplied by the term The resulting function fg,n is evaluated for all elements of the geometry.The mean value of the results provides the uncertainty: The variance σ 2 determines the width of the distribution function. Quality function and objective function The task of the quality function is to combine all evaluator losses into one scalar.The following additional requirements must be considered: • The function should have a simple mathematical form, in order not to complicate the minimum search. • The function must be monotonically increasing with respect to the evaluators' losses • The function contains coefficients to control the relative influence of the evaluators losses The most obvious variant fulfilling these criteria a linear combination of the losses.The problem with this choice consists in the different and variable order of magnitude of the compliance loss with respect to the other losses.For a given choice of the coefficients the relative influence of the losses changes for different parametrization and input data sets.To avoid this drawback, a quality function in the following form was chosen The addition of the constant value prevent the quality function from being dominated by one loss when its value is close to zero. For every single dataset one value of fQ exists.Optimization on the basis of single datasets would require large computational effort and lead to instabilities of the training process (large jumps of the objective function output).Therefore, a given number bn of datasets (batch) is used and the corresponding quality function values are combined in one scalar value, which works as objective function for the optimization that rules the training.The value of the objective function ) is calculated as the arithmetic mean of the quality function values obtained for the single datasets of the batch.Investigations showed that averaging the quality function outputs over numerous data sets stabilizes the training procedure.The disadvantage of this averaging is the possibility of forming prejudices.E.g. if one element is frequently present, then its frequency is also learned, even if the element's contribution to stiffness is in some cases small or non-existent. Training The overview in Figure 9 describes the training process for a single level.Here, it is visible that during a batch iteration the input data sets are calculated randomly and then passed to the predictor as well as to the evaluators.Within one batch the input datasets are randomly gener- ated and the predictor creates the corresponding geometries xi.Afterwards, the quality function is computed from the evaluators losses according to (30).The objective function J is then calculated for the whole batch. 
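Before moving on to the gradient step, a small sketch of the uncertainty evaluator and of a stand-in for the quality function may be useful. The rescaled Gaussian follows directly from the normalization described above; the product form of the quality function is only an assumption, since Eq. (30) is not reproduced in the text, chosen to satisfy the stated requirements (monotone in each loss, small additive constants so that a near-zero loss cannot dominate).

```python
import numpy as np

def uncertainty(x, sigma=0.15):
    # Gaussian centred at mu = 0.5, rescaled so that its peak value is 1
    # (the normal density multiplied by sqrt(2*pi*sigma^2)); sigma is assumed here.
    mu = 0.5
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)).mean()

def quality(c, dM, F, P, eps=0.1):
    # Assumed stand-in for Eq. (30): monotonically increasing in every evaluator loss,
    # with a small constant added so no single near-zero loss dominates the product.
    return c * (dM + eps) * (F + eps) * (P + eps)

x_binary = np.array([0.001, 0.98, 1.0, 0.02])
x_vague = np.full(4, 0.5)
print(uncertainty(x_binary), uncertainty(x_vague))   # near-binary: small P; undecided: P = 1
print(quality(c=10.0, dM=0.05, F=0.0, P=0.2))        # the objective J averages this over a batch
```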
Then the gradient Gp b of the objective function with respect to the trainable parameters is calculated.The trainable parameters of the predictor for the next batch are then adjusted according to the steepest-descent criterion to decrease the value of the objective function. When the level increases, the predictor outputs a geometry with higher resolution and the process starts again at batch b = 1. It is important to stress that, unlike conventional topology optimization, the PEN method does not optimize the density values of the geometry, but only the weights of the predictor. Implementation The implementation of the presented method takes place in the programming language Python.The framework Tensorflow with the Keras programming interface (API) is used, which is well suited for programming ML algorithms in Python.Tensorflow is developed by Google and is an open source platform for the development of machine-learning applications [23].In Tensorflow, the gradients necessary for the predictor learning are calculated using Automatic Differentiation (AD), which requires the use of functions available in Tensorflow [27].The configuration of the software and hardware used for the training is shown in Table 1.The predictor's topology, with all layers and all hyperparameters, is shown in Figure 10.The chosen hyperparameters were found to be the best after numerous tests in which the deviations of the predictions from the ones obtained by conventional TO were evaluated.The hyperparameters are displayed by the shape (numerical expression over the arrow pointing outside the block) of the output matrix of a block or by the comment near the convolutional block.The label of the output arrow describes the dimensions of the output vector or matrix.The names of the elements in Figure 10, e.g."Conv2D", correspond to the Keras layer names. The input data sets (top left) are processed by four fully connected (here termed "dense") layers, then reshaped into a three-dimensional matrix with the shape 8 × 8 × 64 and passed on to two sequential convolutional layers.Subsequently, the data gets reshaped into a vector and passed through a sigmoid activation layer.As a result, the geometry at the first level is available.The following levels build on the previous levels.So the data from the last hidden block and the data prior to the sigmoid activation of the previous level are used by for the next layer, by transforming the outputs to the same shape and adding them together.Afterwards, the data gets reshaped into a vector and, again, passed through a sigmoid activation layer.As a result, the geometry at the next level is available.As already mentioned, the training of the predictor is based on randomly generated input data sets.All randomly chosen input data are uniformly distributed in the corresponding interval.They are generated according to the following features: • Kinematic boundary conditions r k : -Fixed degrees of freedom along the left side in x and y direction • Static boundary condition rs: -Position randomly chosen from all (dinp +1) 2 = 81 nodes (except the nodes, which have a fixed degree of freedom) of level one -Fixed magnitude rs,F • Target degree of filling Mtar: -Uniform random Mtar = {0.2,0.21, . . ., 0.8}. The algorithms 1, 2 and 3 show the training, the trainable parameter's update and the convergence criterion code respectively. The flow of data from the input (r k , rs, Mtar) of the ANN to the output x and the objective function J(x) is called forward propagation. 
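The gradient computation and parameter update map directly onto a TensorFlow/Keras training step, the stack named in the Implementation section. The sketch below is schematic: it assumes the predictor is a tf.keras.Model, that the evaluators are implemented as differentiable TensorFlow operations so automatic differentiation can reach the trainable parameters, and it uses the Adam optimizer mentioned for the update; it also shows how the forward and backward passes described next fit together.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_batch(predictor, batch_inputs, objective_fn):
    # One PEN batch update: predict geometries, evaluate the objective J as the
    # mean quality over the batch, then back-propagate into the predictor's
    # trainable parameters only (the densities themselves are never optimized).
    with tf.GradientTape() as tape:
        x = predictor(batch_inputs, training=True)
        J = objective_fn(x, batch_inputs)
    grads = tape.gradient(J, predictor.trainable_variables)
    optimizer.apply_gradients(zip(grads, predictor.trainable_variables))
    return J
```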
With computation and updating of the objective function's gradient with respect to the trainable parameters the information flows backward through the ANN.This backward flow of information is called back-propagation [8].Once the gradient is calculated, a trainable parameter's update is done using the learning rate η, which defines the length of the gradient step, and the adam optimizer (see algorithm 2) according to [28]. After the trainable parameters of the predictor have been updated, a new batch can be elaborated.This process will continue until a convergence criterion is fulfilled.In order to define a proper convergence criterion, the lowest objective function value J best in the current level is tracked and compared to the current objective function value J b .If the objective function value J b of one batch is not lower than J best then the integer variable ζ b (pa-Algorithm 1 Learning process, Part 1 (training) x i ← f p (r ki , r si , M tari , W p (b−1) ) 16: 18: end for 20: rough estimation, for more details see [28] tience) is increases by one, else it resets to zero: Once the patience exceeds a predefined value ζmax, termed maximal patience (see table 2), the level Λ increases or, if the maximum level was reached, the training stops (see algorithm 1 line 7).The parameters in table 2 were used for the training. Results The training of the predictor lasted TP = 3 h (2:56:32), which can be subdivided according to the individual levels end while 32: end for as follows: 21 %, 7 %, 45 %, 26 %.The ANN based TO geometries are similar to the results obtained by top88 according to [15] for the same input datasets.For the conventional SIMP-TO the density filter method and the parameters mentioned in table 2 were used.An example prediction is shown in Figure 11.The training history shows the progression of the objective function (see Figure 12) and of the individual evaluator losses over the number of batches (see Figure 13).The smaller batch size at higher levels produces more oscillation of the curve and therefore makes it difficult to identify a trend.For this reason, the curves shown in the figures are filtered using the exponential moving average and a smoothing factor of 0.862 [29].This filtering does not affect the original objective function and serves only for visual purposes. The dashed vertical lines (labeled with the value of Λ) 12 and 13 show the change of level.It can be seen that after each increase of Λ, the value of the objective function increases.This can be explained by adding more weights that are randomly distributed and still untrained. The results were validated using n = 100 randomly generated input data sets, called validation data, that were not part of the training data sets, and the corresponding optimized geometries which were conventionally calculated by the top88 available in [30]. The results of the comparison (PEN and top88) of the 100 validation data sets are summarized in the plots in Figure 14. 
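Stepping back to the convergence criterion used during training, the patience bookkeeping can be written in a few lines; the sketch below only mirrors the description above (reset on improvement, count otherwise, advance the level Λ once ζ exceeds ζ_max), with ζ_max an assumed value since Table 2 is not reproduced here.

```python
def update_patience(J_b, J_best, zeta, zeta_max=150):
    """Returns (new best J, new patience, level_up flag)."""
    if J_b < J_best:
        return J_b, 0, False               # improvement: remember it, reset patience
    zeta += 1                              # no improvement in this batch
    return J_best, zeta, zeta > zeta_max   # level_up: raise Lambda, or stop at the top level

# Usage inside the batch loop:
# J_best, zeta, level_up = update_patience(J_b, J_best, zeta)
# if level_up: Lambda += 1; zeta = 0; J_best = float("inf")
```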
On average, the ANN-based TO can deliver almost the same result as the conventional method in about 8.4 ms, while the conventional topology optimizer according to Andreassen [15] requires on average 1.9 s (and is hence roughly 225 times slower), see Figure 14 a).It can also be seen that the majority of geometries generated by PEN have a compliance that is close to the geometries gener- With the help of the function the accuracy and thus the validity of the predictor can be determined and predictors with different hyperparameters can be compared.The function κ is needed because a single indicator is not sufficient to determine the accuracy of the predictor.So it is possible that the accuracy κ0.01 is low and the errors mae and mse are also small at the same time.Since these error indicators concentrate on different kinds of differences, the average κ of those is a more meaningful indicator.The examples in Figure 15 show that the predictor can deliver geometries that are similar to conventional method as well as some weaknesses.For example, in some cases the geometries are noisy and contain undesirable elements, which do not contribute to the stiffness (see Figure 15, column two or four).This may be improved by an appropriate choice of hyperparameters of the predictor and by adapting the quality function.Also included in this figure is a row of conventionally topology optimized geometries using different parameters.For all sample geometries in Figure 15 the compliance is reported under the geometry diagram.For the ANN TO generated geometries the evaluator losses are summarized in table 3. From the data in the table 4 it can be seen that in the examined cases 79.7 % of the elements of the geometries obtained with the PEN method have density differences of less than 1% as compared to the conventionally optimized geometries. Computing time comparison As mentioned in section 3.2 the PEN method is by orders of magnitude faster than top88.However, the predictor profits from a computationally intensive training.So it is interesting to attempt a comparison which takes into account the training time. The PEN computing time for a single geometry tPEN, including its share of training time, obviously depends from the number of geometries predicted ep on the basis of one single training process: where tP is the computing time per single geometry and Tp is the training time.The Break Even Point (BEP) is given by the number of predictions eBEP for which both methods require the same time (including training time contribution).To calculate the BEP tPEN is set equal to tTO, which is the computing time to optimize a single geometry using the conventional method.It results: The table 5 shows the computing times of the different methods as well as the BEP.tp tTO eBEP 8.4 ms 1.9 s 5612 Table 5: Time comparison of conventional and PEN based methods When evaluating the results of this comparison, the following points should be considered: • Due to the fact that tp tTO the BEP, for a given reference method, essentially depends on the training time. Online Due to the ability to quickly get the optimized geometry by the predictor, the ANN-based TO can be executed online in the browser.Under the address: https://www.tu-chemnitz.de/mb/mp/forschung/ai-design/TODL/it is possible to perform investigations with different degrees of filling as well as static boundary conditions. Conclusion In this paper, a method was presented that makes it possible to realize a topology optimizer using deep learning. 
The ANN in charge of generating topology-optimized geometries does not need any pre-optimized data sets for the training.The generated geometries are in most cases very similar to the results of conventional topology optimization according to Sigmund or Andreassen. This topology optimizer is much faster, due to the fact that the computing-intensive part is shifted into the training.After the training, the Artificial Neural Network based topology optimizer is able to deliver geometries which are nearly identical to the ones generated by conventional topology optimizers.This is achieved by using a new approach, the Predictor-Evaluator-Network (PEN) approach.PEN consists of a trainable predictor, which is in charge of generating geometries, and evaluators, which have the purpose of evaluating the output of the predictor during the training. The method was tested up to an output resolution of 64 × 64.The optimization of the computational efficiency of the training phase was not the first priority of this project since the training is performed just once and therefore affects the performance of the method only in a limited fashion.A critical step is the calculation of the displacements in the compliance evaluator.The use of faster algorithms (e.g.sparse solvers) could remove the mentioned limitations.One improving option could consist in implementing the compliance evaluator as an ANN itself and thus making it faster and more memoryefficient.This would make possible to cope with finer resolutions or to learn with much larger batch sizes and thus with more training data in the same time. The results of the PEN method are comparable to the ones of the conventional method.However, the PEN method could prove superior in handling applications and optimization problems of higher complexity, such as stress limitations, compliant mechanisms and many more.This expectation is related to the fact that no optimized data are needed.All methods which process pre-optimized data suffer from the difficulties encountered by conventional optimization while managing the above-mentioned problems.Because the PEN method works without optimized data, it can also be applied to problems that have no optimal solutions or solutions that are hard to calculate, like the fully stressed truss optimization. Up to now, variable kinematic boundary conditions were not tested.This will be done in future research, together with resolution improvement, application to threedimensional design domains and consideration of nonlinearities and restrictions. Figure 1 : Figure 1: Graphical representation of a single neuron Figure 3 : Figure 3: Matrix representation of a) kinematic boundary conditions and b) static boundary conditions Figure 4 : Figure 4: Nodes and elements for different levels Λ.The green arrows represent the kinematic boundary conditions.The red arrows represent the static boundary condition. 
Figure 7: Left: sample geometry; right: convolution matrix V where the geometry has checkerboard patterns.
Figure 8: Influence of the factor F_k on the filter calculation.
Figure 11: Sample geometry (left: ANN-based TO using the PEN method; right: 88 lines of code [15]).
Figure 14: Computing time and compliance comparison.
Figure 15: Additional sample geometries: a) deep learning topology optimization, b) validation data.
• The training time in turn depends on the convergence condition. Within the framework of this project an extensive study about the proper choice of convergence criterion could not be made. The present choice allowed for good results. It can be expected that the training time could be reduced after a targeted study in this sense.
• Of course the training time also depends on the hardware used for training. By using high-performance hardware the training time, and so the BEP, can be strongly reduced without affecting the versatility of the method in everyday use.
• This comparison does not include a study of the effect of the problem size (number of design variables).
Table 3: Summary of evaluator losses (same examples as in Figure 15).
Table 4: Summary of indicators.
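Returning to the computing-time comparison (Table 5 and the bullet points above), the break-even count can be reconstructed from the description: the training time T_P is amortised over e_P predictions, so t_PEN = t_P + T_P / e_P, and setting t_PEN = t_TO gives e_BEP = T_P / (t_TO − t_P). The check below is a hedged sketch of that arithmetic, not the paper's exact calculation.

```python
T_P = 2 * 3600 + 56 * 60 + 32     # training time 2:56:32 -> 10,592 s
t_P = 8.4e-3                       # s per prediction with the trained predictor
t_TO = 1.9                         # s per conventional top88 optimisation

e_BEP = T_P / (t_TO - t_P)         # assumes t_PEN(e_P) = t_P + T_P / e_P
print(round(e_BEP))                # roughly 5,600 predictions, in line with the reported 5612
```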
Fully automated radiolabeling of [68Ga]Ga-EMP100 targeting c-MET for PET-CT clinical imaging Background c-MET is a transmembrane receptor involved in many biological processes and contributes to cell proliferation and migration during cancer invasion process. Its expression is measured by immunehistochemistry on tissue biopsy in clinic, although this technique has its limitations. PET-CT could allow in vivo mapping of lesions expressing c-MET, providing whole-body detection. A number of radiopharmaceuticals are under development for this purpose but are not yet in routine clinical use. EMP100 is a cyclic oligopeptide bound to a DOTA chelator, with nanomolar affinity for c-MET. The aim of this project was to develop an automated method for radiolabelling the radiopharmaceutical [68Ga]Ga-EMP100. Results The main results showed an optimal pH range between 3.25 and 3.75 for the complexation reaction and a stabilisation of the temperature at 90 °C, resulting in an almost complete incorporation of gallium-68 after 10 min of heating. In these experiments, 90 µg of EMP-100 peptide were initially used and then lower amounts (30, 50, 75 µg) were explored to determine the minimum required for sufficient synthesis yield. Radiolysis impurities were identified by radio-HPLC and ascorbic acid and ethanol were used to improve the purity of the compound. Three batches of [68Ga]Ga-EMP100 were then prepared according to the optimised parameters and all met the established specifications. Finally, the stability of [68Ga]Ga-EMP100 was assessed at room temperature over 3 h with satisfactory results in terms of appearance, pH, radiochemical purity and sterility. Conclusions For the automated synthesis of [68Ga]Ga-EMP100, the parameters of pH, temperature, precursor peptide content and the use of adjuvants for impurity management were efficiently optimised, resulting in the production of three compliant and stable batches according to the principles of good manufacturing practice. [68Ga]Ga-EMP100 was successfully synthesised and is now available for clinical development in PET-CT imaging. Background c-MET is a transmembrane receptor with tyrosine kinase activity that is activated by its physiological ligand, hepatocyte growth factor.This receptor plays a key role in many physiological processes (embryogenesis, wound healing, etc.).When involved in cancer biology, it activates several intracellular signaling pathways leading to increased proliferation, migration and metastasis of cancer cells through the epithelialmesenchymal transition process (Sung et al. 2016).This aberrant signalling is found in many primary cancers, including kidney, colorectal and non-small cell lung cancer (NSCLC) (Baldacci et al. 2018;Duplaquet et al. 2018;Gherardi et al. 2012;Ma et al. 2008;Salgia 2017;Zhang et al. 2018). Currently, in routine clinical practice, patient eligibility for targeted MET therapy is determined on tissue biopsy by immunohistochemistry (IHC) using the specific antibody SP44 (Spigel et al. 2014), by fluorescent in situ hybridisation (FISH) or by nextgeneration sequencing (NGS) of the MET gene.However, these techniques, especially IHC, have limitations as they do not reflect the variability of c-MET expression over time, nor the heterogeneity within a tumour lesion or between different tumour sites.In addition, they rely on the availability of tissue samples, which is not always the case, especially for locations that are inaccessible for biopsy. 
Macroscopic PET-CT nuclear imaging has the potential to map c-MET-expressing lesions throughout the body overcoming most of the limitations of the conventional patho-molecular techniques used.Moreover, PET-CT molecular imaging provides noninvasive, real-time detection with high sensitivity and specificity, and allows quantitative analysis of binding intensity using standardized uptake value (SUV) measurement or derived quantification methods.To date, a number of radiopharmaceuticals (RPs) targeting the MET pathway, such as antibodies, peptides or small molecules, have been radiolabelled to detect the sites of various cancers ([ 64 Cu]Cu-NOTA-rh-HGF, [ 89 Zr]Zronartuzumab, [ 18 F]F-AH113804, [ 11 C]C-SU11274) (Luo et al. 2015;Jagoda et al. 2012;Arulappu et al. 2016;Wu et al. 2010), but there is currently no routine clinical use for them.Targeted therapies for the MET pathway are based mainly on tyrosine kinase inhibitors such as (crizotinib, capmatinib and tepotinib) (Remon et al. 2023), the METspecific monoclonal antibody onartuzumab (Spigel et al. 2017) or the drug conjugated antibody ABBV-399 (Wang et al. 2017).However, a specific c-MET radioligand in PET could open the way to radioligand therapy, such as in prostate cancer with prostate specific membrane antigen (PSMA) ligand (Sartor et al. 2021) or in neuroendocrine tumours with somatostatin ligand (Strosberg et al. 2017).Among these RPs can be found EMP100, which is a cyclic oligopeptide (Fig. 1) with nanomolar affinity for the human c-MET receptor linked to a DOTA chelator, as measured by fluorescence polarization (3.0 ± 0.5 nM-unpublished results).The oligopeptide was shown to have no pharmacological effect on the HGF/c-Met pathway and was shown not to compete with the native ligand.Gallium-68 radiolabelled EMP100 was investigated in a cohort of 12 metastatic renal cell carcinoma (mRCC) patients with very encouraging results by Mittlmeier et al. (2021).However, in this study the radiolabelling of [ 68 Ga]Ga-EMP100 was performed manually.The aim of the present work is to develop an automated method for radiolabelling the EMP-100 peptide with gallium-68 to obtain the RP [ 68 Ga]Ga-EMP100. The gallium-68 eluate was obtained by elution of one or two 68 Ge/ 68 Ga generators Galliapharm ® (Eckert and Ziegler GmbH, Germany).The automated synthesis of [ 68 Ga] Ga-EMP100 was performed on the GAIA synthesis module (Elysia Raytest, Belgium), placed in a high energy class A laminar air flow hot cell MEDI 5000 ® (Medisystem, France).This module allows the entire process to be edited and controlled by a computer program. 
Description of the radiolabelling process EMP100 is dissolved with acetate ammonium buffer and then transferred to the reaction vial.The process began with the collection of gallium-68 eluate from the generators on a SCX column, which was then washed from impurities by water for injection (WFI).Gallium-68 was eluted from the SCX column using a 5 mol/L NaCl; 0.1 mol/L HCl solution.The reaction vial containing the EMP100 precursor, gallium-68 ions and adjuvants, was buffered to acidic pH and heated.This was then transferred to a C18 column to trap the [ 68 Ga]Ga-EMP100 compound by lipophilic affinity, with the remaining impurities discharged into the waste.The C18 column was then washed with WFI and finally the [ 68 Ga]Ga-EMP100 compound was eluted from the column with a mixture of saline and ethanol and then transferred to the final product vial through a 0.22 µm filter.The total radiolabelling time for this fully automated process was approximately 42 min (Fig. 2). Optimisation of the radiolabelling process The three main objectives of radiolabelling optimisation are to improve product quality through a number of specific parameters.Firstly, radiochemical purity (RCP) is controlled and must exceed 95% (EMA.European Medicines Agency 2022, 2018a; Revised guidance for elaborating monographs on radiopharmaceutical preparations: new section on validation of methods 2019).Second, the molar activity (MA) must be greater than 10 GBq/µmol: at this stage of development, this value was chosen based on the current knowledge of needed ratio between 'hot' and 'cold' compound in the final product as well as commonly achieved values for agents at this stage of development (Bailly et al. 2017;Spreckelmeyer et al. 2020a, b;Jussing et al. 2021).Finally, the target activity at the time of calibration should be greater than 500 MBq, which, combined with a radiochemical yield (RCY) greater than 50%, allows imaging of at least one patient regardless of generator elution yield. In order for the complexation reaction of Ga 3+ ions with the oxygen and nitrogen atoms of the DOTA chelate to proceed without problems, the pH must be acidic, preferably buffered between 3.0 and 4.0, with a heat supply (Green and Welch 1989;Kubíček et al. 2010). To identify the optimal parameters for the complexation of EMP-100 with gallium-68, we used a systematic optimisation approach based on a design of experiments.The optimal parameters for the complexation of EMP-100 with gallium-68 were determined by adjusting the pH (2.75-4.00),temperature (80, 85, 90, 95 °C), heating time (0, 5, 8, 10 and 15 min) with the amount of peptide fixed at 90 µg and without adjuvants.For these first 3 parameters, the RCP > 95% measured by thin layer chromatography (TLC) was the limiting factor for defining the optimised parameter.We performed these experiments in manual mode using the SCX eluate from the module without passing through the C18 column.We then heated the reaction vial using an Elite ® heating block (Major Science, Taiwan).In a subsequent step, using the GAIA ® module (Elysia Raytest, Belgium) and C18 purification, we optimised the amount of EMP-100 peptide (30, 50, 75, 90 µg) and the effect of adjuvants (ethanol, ascorbic acid and gentisic acid) on the quality of [ 68 Ga] Ga-EMP100 in the final product.Optimisation criteria included RCP measured by TLC and HPLC (> 95%), RCY (> 50%) and molar activity (MA) (> 10 GBq/µmol). 
The RCP was determined by TLC using a mixture of 1 mol/L ammonium acetate and methanol (1:1), while the RCY was calculated according to the following formula, with the C18 and SCX activities recorded automatically during radiolabelling and available in the final report:

RCY (%) = (C18 activity before elution − C18 activity after elution) / (initial activity measured on SCX) × 100

Validation of quality controls and 3 batches
As the compound [68Ga]Ga-EMP100 does not have a monograph in the European Pharmacopoeia (Ph. Eur.), we followed the general text Ph. Eur. 0125 (Radiopharmaceuticals) and Ph. Eur. 51900 (Extemporaneous preparation of radiopharmaceuticals) (Ph. Eur. 11th Edition 2022). After each radiolabelling, an aliquot of the final [68Ga]Ga-EMP100 product was taken for quality control, which included the following parameters.

Appearance
Visual inspection of the solution behind a lead glass screen was used to verify that the solution was clear and colorless.

Measurement of activity and calculation of molar activity
A MEDI-405® activimeter (Medisystem, France) was used to measure the final radioactivity of the [68Ga]Ga-EMP100 product, expressed in MBq, with a lower limit of 500 MBq per 10 mL. The molar activity (MA) was calculated by dividing the radioactivity (GBq) by the amount of precursor peptide (µmol). The lower limit of determination was 10 GBq/µmol.

Radionuclide identity
The gallium-68 radionuclide decays by the emission of positrons, whose dematerialisation results in 511 keV gamma photons, which were identified using a Mucha® gamma counter (Elysia Raytest, Belgium). The half-life of the radionuclides in the final product, assumed to be gallium-68 only, was determined by a five-point decay test using a MEDI 405® activimeter. The half-life at each point was measured and calculated according to the monographs Ph. Eur. 0125 and Ph. Eur. 2482, and should be between 61 and 75 min (Ph. Eur. 11th Edition 2022).

Radiochemical identity and purity
Thin layer chromatography
A 5 µL sample of the final product was applied to a solid iTLC-SG paper stationary phase (Agilent, US) for migration into a mobile phase of 1 M ammonium acetate and methanol (1:1). The compounds were characterised by a retardation factor (Rf), which reflects the migration distance of the compound relative to the spotting line. The [68Ga]Ga-EMP100 has an Rf > 0.8, while free gallium-68 impurities and colloidal forms have an Rf < 0.1. The RCP by TLC must be greater than 95%. The TLC papers were analysed using a miniGITA® scanning radiochromatograph (Elysia Raytest, Belgium) and peak integration was performed using GINA® software (Elysia Raytest, Belgium).

High-performance liquid chromatography
The retention time (Rt) of the compound of interest [68Ga]Ga-EMP100 was expected to match the retention time of the "cold" standard [natGa]Ga-EMP100 ± 5%. Radiochemical purity limits were set for colloidal gallium-68 and free gallium-68 detectable by radio-HPLC and/or TLC. HPLC was used for chemical and radiochemical identification of the various species likely to be present in the final solution: [68Ga]Ga-EMP100, EMP100, degradation or radiolysis products and the free gallium-68 species. The overall RCP of the compound of interest [68Ga]Ga-EMP100 was calculated as

RCP (overall, %) = (100 − A) × B / 100,

where A is the percentage of gallium-68 impurities in free and colloidal form calculated by TLC and B is the percentage of [68Ga]Ga-EMP100 determined by HPLC. We set a target of 95% for the overall RCP of [68Ga]Ga-EMP100.
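As a worked illustration of the release calculations defined above (RCY from the module-recorded C18/SCX activities, overall RCP from TLC and HPLC, molar activity, and the half-life check), the sketch below uses purely illustrative numbers; in particular, the molecular weight of EMP100 is not given here and the ~1.5 kDa value is an assumption.

```python
import math

# Radiochemical yield from module-recorded activities (illustrative MBq values)
scx_initial = 1200.0
c18_before, c18_after = 900.0, 40.0
rcy = (c18_before - c18_after) / scx_initial * 100            # %

# Overall RCP: A = % free + colloidal 68Ga by TLC, B = % [68Ga]Ga-EMP100 by HPLC
A, B = 1.5, 98.0
rcp_overall = (100 - A) * B / 100                              # %

# Molar activity: activity (GBq) over precursor amount (µmol);
# 75 µg of EMP100 with an assumed molecular weight of ~1500 g/mol
activity_gbq = 0.85
n_umol = 75e-6 / 1500 * 1e6                                    # mol -> µmol
molar_activity = activity_gbq / n_umol                         # GBq/µmol, should exceed 10

# Half-life from two points of the decay test, expected between 61 and 75 min
a0, a1, dt_min = 800.0, 400.0, 68.0
t_half = dt_min * math.log(2) / math.log(a0 / a1)

print(f"RCY {rcy:.1f}%  RCP {rcp_overall:.1f}%  MA {molar_activity:.1f} GBq/µmol  T1/2 {t_half:.1f} min")
```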
RCP (%) = (100 − A) × B / 100
We set a target of 95% for the overall RCP of [ 68 Ga]Ga-EMP100.
pH evaluation
For the experiments performed in manual mode, we measured the pH using a Sevencompact duo S213 ® pH meter (Mettler Toledo) after decay, and for the experiments performed in automated mode, we measured the pH using test strips (VWR and Sigma).
Endotoxin, sterility testing and residual solvents
Endotoxin testing (pyrogen evaluation) was performed using a chromogenic method on the Endosafe Nexgen ® instrument (Charles River, Ireland) with a specification of less than 17.5 IU/mL (Ph.Eur. 11th Edition 2022). Sterility of the finished product was assessed by inoculation into a culture medium after decay (> 48 h) and absence of growth for 14 days, as described (Ph.Eur. 11th Edition 2022).
Ethanol, used to stabilise the complex during the heating step and to elute the product in the purification step, must be less than 10% in the final product (Ph.Eur. 11th Edition 2022). Ethanol was quantified by gas chromatography (GC) (Ph.Eur. 50400).
Assessment of reproducibility and stability
Three batches of [ 68 Ga]Ga-EMP100 were produced to validate the radiopharmaceutical production and quality control process, and each was thoroughly analysed to ensure that all quality parameters met the acceptance criteria. The stability of [ 68 Ga]Ga-EMP100 was evaluated over a period of 3 h at room temperature, with RCP measured by TLC and HPLC.
Optimisation and validation of [ 68 Ga]Ga-EMP100 radiolabelling
The first result of the radiolabelling optimisation was the identification of the optimal pH within the screened range of 2.75 to 4.00. This was achieved by varying the volume of 0.08 mol/L ammonium acetate between 1700 and 4400 µL and measuring the effect on the RCP determined by TLC (Table 1), keeping the temperature and heating time constant at 90 °C and 10 min. The highest RCP was obtained with a buffer volume of 2500 µL, corresponding to a pH of 3.75.
Subsequent investigations addressed the influence of heating time at a stable temperature (90 °C), measuring the incorporation of gallium-68 by assessing the RCP at different time points (0, 5, 8, 10 and 15 min). A duration of 10 min was found to ensure almost complete incorporation (Table 2). The effect of complexation temperature was then investigated at 80, 85, 90 and 95 °C, with pH and duration maintained at 3.75 and 10 min, respectively. Complexation was found to be almost complete at temperatures of 90 °C and above (Table 3).
Throughout the above optimisation steps, 90 µg of EMP100 peptide precursor was consistently used and RCP by TLC was determined prior to C18 purification according to the manual mode described above. Smaller amounts of peptide precursor (30, 50, 75 µg) were tested to determine the minimum amount required for a satisfactory synthesis yield. Satisfactory RCP (> 95%) and RCY (> 50%) were obtained with 75 µg of precursor peptide (Table 4).
When measuring the chemical identity by radio-HPLC, we observed radioactivity peaks that most likely correspond to radiolysis impurities or incomplete radiolabelling (Fig. 3b, Table 5 (test identification number 1)). To reduce radiolytic oxidation and stabilise the reaction mixture, we introduced adjuvants such as ascorbic acid, gentisic acid and ethanol prior to the heating step. Initial tests with all three adjuvants gave consistent results, as reflected in the RCP and RCY data (Table 5 (column 2), Fig. 3c). We then tested the combination of ethanol with ascorbic acid in a series of radiolabellings, and all batches were within specifications (Table 5 (column 3), Fig. 3c).
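As a compact illustration of the two acceptance formulas defined in the quality-control section above, the following sketch computes the RCY from the recorded SCX and C18 activities and the overall RCP from the TLC impurity fraction A and the HPLC purity B; the numerical values are invented for illustration and are not measured results from the study.

def radiochemical_yield(c18_before_mbq, c18_after_mbq, scx_initial_mbq):
    # RCY (%) = (C18 activity before elution - C18 activity after elution) / initial SCX activity x 100
    return (c18_before_mbq - c18_after_mbq) / scx_initial_mbq * 100.0

def overall_rcp(a_tlc_impurities_percent, b_hplc_percent):
    # Overall RCP (%) = (100 - A) x B / 100
    return (100.0 - a_tlc_impurities_percent) * b_hplc_percent / 100.0

print(radiochemical_yield(c18_before_mbq=1100.0, c18_after_mbq=80.0, scx_initial_mbq=1500.0))  # 68.0 %
print(overall_rcp(a_tlc_impurities_percent=2.0, b_hplc_percent=98.0))                          # 96.04 %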
Three batches of [ 68 Ga]Ga-EMP100 were prepared under the optimised synthesis parameters (pH 3.75, heating at 90 °C for 10 min, 75 µg of EMP100 precursor, with ascorbic acid and ethanol added as adjuvants). All three batches were found to be within the defined specifications (Table 6).
The stability of [ 68 Ga]Ga-EMP100 was assessed over 3 h in the finished product vial at room temperature, measuring appearance, pH, radiochemical purity and sterility. Results remained within the established specifications (Table 7).
Discussion
Here we describe the pharmaceutical development and validation of an automated method and quality control system for gallium-68 labelling of a c-MET ligand (EMP100) using the Gaia Luna ® module.
Gallium-68 is a positron emitter that is readily detectable by PET-CT imaging and has the advantage of being readily available in hospitals thanks to 68 Ge/ 68 Ga generators. As a result, gallium-68 radiopharmaceuticals can be prepared on site without the need for a medical cyclotron. Automated systems are a good solution for gallium-68 radiolabelling because they are more reliable, more reproducible and guarantee consistent yields. As clinical demand increases, process automation also improves operator radiation protection compared to manual methods and meets regulatory requirements.
We have developed a method for radiolabelling the EMP100 peptide precursor with gallium-68 using a Gaia Luna ® module (Elysia Raytest) to obtain [ 68 Ga]Ga-EMP100. This method has several advantages. Firstly, by trapping the gallium-68 eluate on a cationic column prior to the radiolabelling process, the radioactivity can be concentrated, allowing pooling of generator elutions to achieve higher activities. Another advantage of cationic eluate purification is the ability to remove zinc ions (from gallium-68 decay) and other metallic impurities that could compete in the gallium-68 labelling reaction. The cationic SCX method also allows control of the reaction volume (Velikyan 2015; Mueller et al. 2012; Zhernosekov et al. 2007; Meisenheimer et al. 2020; Nelson et al. 2022).
To achieve successful radiolabelling of EMP100, we optimised the critical production parameters of pH, heating time, complexation temperature and amount of peptide. The optimum reaction pH was found to be 3.75 using ammonium acetate buffer. The pH of the buffer plays an important role in radiolabelling procedures, particularly with gallium-68, and the reaction kinetics of 68 Ga 3+ incorporation are inversely related to pH (Bartholomä et al. 2010; Bénézeth et al. 1994). We observed the best radiolabelling efficiency at pH 3.75 and note that at pH 4, hydrolysis to insoluble 68 Ga(OH) 3 occurs in the preparation, making the radiolabelling process inconsistent, with low to moderate RCP. This pH range is in agreement with previously published results for the manual radiolabelling of [ 68 Ga]Ga-EMP100, carried out by fractionated elution of gallium-68, where the pH used was between 3.7 and 4.0, obtained using sodium acetate (Mittlmeier et al. 2021). The reaction temperature was then investigated: this is an important factor since, above a certain temperature, gallium-68 ions can form both gallium oxides and hydroxides as precipitates (Silva et al.
2009), and some biological compounds, such as peptides, can be thermolabile and undergo degradation or denaturation, thus affecting the quality of the final RP (Lepareur 2022). While published data describe radiolabelling at 95 °C for 15 min, our investigations show that incorporation is complete after 10 min and that a heating temperature of 90 °C is sufficient for complete complexation.
Molar activity (GBq/µmol) is an important parameter in PET imaging. When the biological target concentration is minimal, image quality and quantification can be improved by a high MA, as has been shown for GLP-1R imaging of insulinoma using ligands such as exendin-4 (Velikyan 2015; Velikyan et al. 2017, 2008; Migliari et al. 2022; Eriksson et al. 2014). In particular, the presence of unlabelled peptide can reduce the concentration of radioactivity in the target tissue due to competition with the labelled peptide for the same receptor. In a manual process, Mittlmeier et al. (2021) used 100 µg of EMP100 precursor, corresponding to 27 nmol. In this study, we investigated different amounts to find the minimum required and found that from 75 µg of peptide (equivalent to 20 nmol), a sufficient synthesis yield is achieved, i.e. above 50%, with an RCP in line with specifications.
During radiolabelling of [ 68 Ga]Ga-EMP100, gallium-68 atoms decay, emitting gamma and beta radiation. In the presence of water molecules in the solution, this radiation generates oxygenated free radicals. These radical species are capable of oxidising certain biological molecules, particularly thiol groups and certain amino acids (methionine, cysteine, isoleucine) (Velikyan 2015; Meisenheimer et al. 2020; Velikyan et al. 2017; Janota et al. 2016). During the initial measurement of chemical identity by radio-HPLC, we observed gallium-68 peaks, probably corresponding to radiolysis impurities, at earlier retention times compared to the main [ 68 Ga]Ga-EMP100 peak. To reduce the oxidative effect of radiolysis and to further stabilise the reaction medium, we introduced various adjuvants known to have antioxidant effects and found consistent results in terms of RCP, RCY and radiochemical identification when the combination of two excipients (ascorbic acid and ethanol) was added.
Finally, consecutive batches of [ 68 Ga]Ga-EMP100 were produced according to the parameters defined during optimisation and were found to meet the defined specifications. In addition, the product was found to be stable 3 h after radiolabelling. Based on a starting activity of 1500 MBq, automated radiolabelling yielded approximately 1000 MBq of final product with a final MA of 50 GBq/µmol. This would be sufficient to image 2 or 3 (70 kg) patients at 2.0 MBq/kg body weight with a single PET camera, as with the gallium-68-labelled radiopharmaceuticals [ 68 Ga]Ga-PSMA-11 (EMA. European Medicines Agency 2022; Fourquet et al. 2021) and [ 68 Ga]Ga-DOTATOC (EMA. European Medicines Agency 2018a; Delabie et al. 2022; Moreau et al. 2022) already used in clinical routine.
This robust automated radiolabelling process helps to achieve the highest possible MA, i.e. using the smallest amount of peptide that still allows a sufficient gallium-68 incorporation yield. In clinical practice, this starting MA allows injection of [ 68 Ga]Ga-EMP100 even after a decay time of up to 2 h, although the MA is then lower than the specification (10 GBq/µmol).
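The relationship between final activity, peptide amount and molar activity discussed above can be made explicit with a short calculation; the 1000 MBq and 20 nmol figures are taken from the text, the decay step is a generic correction, and whether a decayed batch still satisfies the 10 GBq/µmol limit depends on the activity actually obtained at the end of synthesis.

import math

GA68_HALF_LIFE_MIN = 68.0  # approximate physical half-life of gallium-68

def molar_activity_gbq_per_umol(activity_gbq, peptide_nmol):
    # MA = activity (GBq) / amount of precursor (µmol); the peptide amount does not decay.
    return activity_gbq / (peptide_nmol / 1000.0)

def decayed(value, minutes):
    # Radioactive decay of the activity (and hence of the molar activity) over time.
    return value * math.exp(-math.log(2.0) * minutes / GA68_HALF_LIFE_MIN)

ma_end_of_synthesis = molar_activity_gbq_per_umol(activity_gbq=1.0, peptide_nmol=20.0)  # 50 GBq/µmol
ma_after_2h = decayed(ma_end_of_synthesis, minutes=120.0)  # ~29% of the starting value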
Conclusion
For the automated radiolabelling of [ 68 Ga]Ga-EMP100, the parameters of pH, temperature, precursor peptide content, and the use of adjuvants for impurity management were efficiently optimised, resulting in the production of 3 compliant and stable batches according to the principles of good manufacturing practice. [ 68 Ga]Ga-EMP100 was successfully synthesised and is now available for clinical development in PET-CT imaging.
Fig. 3 HPLC chromatogram showing: a the peak of the cold standard [ nat Ga]Ga-EMP100 (UV 220 nm), b peaks of free [ 68 Ga]Ga 3+ and [ 68 Ga]Ga-impurities due to radiolysis before the [ 68 Ga]Ga-EMP100 peak after synthesis without adjuvants, c the peak of [ 68 Ga]Ga-EMP100 after synthesis with adjuvants
In accordance with ICH Q2 (R1) standards and RP recommendations (Revised guidance for elaborating monographs on radiopharmaceutical preparations: new section on validation of methods 2019; Gillings et al. 2020; Todde et al. 2014; Tietje et al. 2010; EMA. European Medicines Agency 2018b), an HPLC method was developed and validated on a Nexera-i LC 2040C 3D ® instrument (Shimadzu, Japan) coupled in series with a diode array detector for UV absorbance detection at 220 nm and 280 nm and a GABI Nova ® radioactivity detector (Elysia Raytest, Belgium) for 511 keV photon detection. The reversed phase column used for separation was a Luna Omega 3 µm PS C18 ® 100 Å, 100 × 4.6 mm (Phenomenex, US). Injections of 5 µL were made at a fixed flow rate of 1 mL/min using a gradient elution mode with solvents A (water/0.1% formic acid) and B (acetonitrile/0.1% formic acid) over a period of 12 min. The following gradient was applied: 0-1.7 min B 3%, 1.7-8 min B 70%, 8-9 min B 70%, 9-12 min B 3%. The GINA X ® software (Elysia Raytest, Belgium) was used to integrate the different peaks.
Table 2 Study of the complexation time of [ 68 Ga]Ga-EMP100
Table 3 Study of the complexation temperature of [ 68 Ga]Ga-EMP100
Table 4 Summary of the QC data for [ 68 Ga]Ga-EMP100 according to the amount of peptide (n = 3 or more for each point)
Table 5 Summary of [ 68 Ga]Ga-EMP100 QC data by adjuvant
Table 7 Results of [ 68 Ga]Ga-EMP100 stability study in final vial
2023-10-17T06:17:37.716Z
2023-10-16T00:00:00.000
{ "year": 2023, "sha1": "3f6f4500ce645fef988b9cf58cd56d7cb125dc75", "oa_license": "CCBY", "oa_url": "https://ejnmmipharmchem.springeropen.com/counter/pdf/10.1186/s41181-023-00213-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c9cef911fb848fb1ad5296aed1d932ee08c3cfde", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
8612720
pes2o/s2orc
v3-fos-license
Oncotripsy: Targeting cancer cells selectively via resonant harmonic excitation We investigate a method of selectively targeting cancer cells by means of ultrasound harmonic excitation at their resonance frequency, which we refer to as oncotripsy. The geometric model of the cells takes into account the cytoplasm, nucleus and nucleolus, as well as the plasma membrane and nuclear envelope. Material properties are varied within a pathophysiologically-relevant range. A first modal analysis reveals the existence of a spectral gap between the natural frequencies and, most importantly, resonant growth rates of healthy and cancerous cells. The results of the modal analysis are verified by simulating the fully-nonlinear transient response of healthy and cancerous cells at resonance. The fully nonlinear analysis confirms that cancerous cells can be selectively taken to lysis by the application of carefully tuned ultrasound harmonic excitation while simultaneously leaving healthy cells intact. Introduction In this study, we present numerical calculations that suggest that, by exploiting key differences in mechanical properties between cancerous and normal cells, oncolysis, or 'bursting' of cancerous cells, can be induced selectively by means of carefully tuned ultrasound harmonic excitation while simultaneously leaving normal cells intact. We refer to this procedure as oncotripsy. Specifically, by studying the vibrational response of cancerous and healthy cells, we find that, by carefully choosing the frequency of the harmonic excitation, lysis of the nucleolus membrane of cancerous cells can be induced selectively and at no risk to the healthy cells. Numerous studies suggest that aberrations in both cellular morphology and material properties of different cell constituents are indications of various forms of cancerous tissues. For instance, a criterion for malignancy is the size difference between normal nuclei, with an average diameter of 7 to 9 microns, and malignant nuclei, which can reach a diameter of over 50 microns (Berman, 2011). Early studies (Guttman and Halpern, 1935) have shown that the nuclear-nucleolar volume ratios in normal tissues and benign as well as malignant tumors do not differ quantitatively. Nucleoli volumes of normal tissues, however, are found to be significantly smaller than the volume of nucleoli in cancerous tissues (Guttman and Halpern, 1935). Similarly, the mechanical stiffness of various cell components has been found to vary significantly in healthy and diseased tissues. In Cross et al. (2007), the stiffness of live metastatic cancer cells was investigated using atomic force microscopy, showing that cancer cells are more than 80% softer than healthy cells. Other cancer types, including lung, breast and pancreas cancer, display similar stiffness characteristics. Furthermore, using a magnetic tweezer, Swaminathan et al. (2011) found that cancer cells with the lowest invasion and migratory potential are five times stiffer than cancer cells with the highest potential. Likewise, increasing stiffness of the extracellular matrix (ECM) was reported to promote hepatocellular carcinoma (HCC) cell proliferation, thus being a strong predictor for HCC development (Schrader et al., 2011). Moreover, enhanced cell contractility due to increased matrix stiffness results in an enhanced transformation of mammary epithelial cells as shown in Paszek et al. (2005). 
Conversely, a decrease in tissue stiffness has been found to impede malignant growth in a murine model of breast cancer (Levental et al., 2009). Various experimental techniques have been utilized in order to quantitatively assess the material properties of individual cell constituents in both healthy and diseased tissues. The inhomogeneity in stiffness of the living cell nucleus in normal human osteoblasts has been investigated by Konno et al. (2013) using a noninvasive sensing system. As shown in that study, the stiffness of the nucleolus is relatively higher compared to that of other nuclear domains (Konno et al., 2013). Similarly, a difference in mass density between nucleolus and nucleoplasm in the xenopus oocyte nucleus was determined by Handwerger et al. (2005) by recourse to refractive indices. The elastic modulus of both isolated chromosomes and entire nuclei in epithelial cells are given by Houchmandzadeh et al. (1997) and Caille et al. (2002), respectively. Specifically, Houchmandzadeh et al. (1997) showed that mitotic chromosomes behave linear elastically up to 200% extension. Experiments of Dahl et al. (2004) additionally measured the network elastic modulus of the nuclear envelope, independently of the nucleoplasm, by means of micropipette aspiration, suggesting that the nuclear envelope is much stiffer and stronger than the plasma membranes of cells. In addition, wrinkling phenomena near the entrance of the micropipette were indicative of the solid-like behavior of the envelope. Kim et al. (2011) estimated the elastic moduli of both cytoplasm and nucleus of hepatocellular carcinoma cells based on force-displacement curves obtained from atomic force microscopy. In addition, Zhang et al. (2002) used micropipette aspiration techniques in order to further elucidate the viscoelastic behavior of human hepatocytes and hepatocellular carcinoma cells. Based on their study, Zhang et al. (2002) concluded that a change in the viscoelastic properties of cancer cells could affect metastasis and tumor cell invasion. The increased compliance of cancerous and pre-cancerous cells was also investigated by Fuhrmann et al. (2011), who used atomic force microscopy to determine the mechanical stiffness of normal, metaplastic and dysplastic cells, showing a decrease in Young's modulus from normal to cancerous cells. The scope of the present work, and the structure of the present paper, are as follows. We begin by defining the geometric model and summarizing the material model and material parameters used in finite-element analyses. Subsequently, the accuracy of the finite-element model is assessed by means of a comparison between numerical and analytical solutions for the eigenmodes of a spherical free-standing cell. We then present eigenfrequencies and eigenmodes of a freestanding ellipsoidal cell, followed by a Bloch wave analysis to model tissue consisting of a periodic arrangement of cells embedded in an extracellular matrix. Finally, resonant growth rates are calculated that reveal that cancerous cells can selectively be targeted by ultrasound harmonic excitation. The transient response at resonance of healthy and cancerous cells is presented in the fully nonlinear range by way of verification and extension of the findings of the harmonic modal analysis. We close with a discussion of results. Finite element analysis In this section, we investigate the dynamical response of healthy and cancerous cells under harmonic excitation. 
We begin by briefly outlining the underlying geometric and material parameters used in our analysis, followed by a verification of the finite element model used for modal analysis. We then calculate the eigenfrequencies and eigenmodes of both free-standing and periodic distributions of cells. In this latter case, we determine the full dispersion relation by means of a standard Bloch wave analysis. Finally, we present resonant growth rates and simulate the transient response of both cancerous and healthy cells excited at resonance in a fully-nonlinear setting by means of implicit dynamics calculations.
Geometry and material parameters
The nucleus, the largest cellular organelle, occupies about 10% of the total cell volume in mammalian cells (Lodish et al., 2004; Alberts et al., 2002). It contains the nucleolus, which is embedded in the nucleoplasm, a viscous solid similar in composition to the cytosol surrounding the nucleus (Clegg, 1984). In this study, the cytosol is modeled in combination with other organelles contained within the plasma membrane, such as mitochondria and plastids, which together form the cytoplasm. For simplicity, we idealize the plasma membrane, nuclear envelope, cytoplasm, nucleoplasm, and nucleolus as being of spheroidal shape. We model the plasma membrane, a lipid bilayer composed of two regular layers of lipid molecules, in combination with the actin cytoskeleton providing mechanical strength as a membrane with a thickness of 10 nm (Hine, 2005). Similarly, we model the nuclear envelope, a double lipid bilayer membrane, in combination with the nuclear lamin meshwork lending it structural support as a 20 nm thick membrane. We define the cytoplasm, nucleoplasm, and nucleolus as spheres with radii of 5.8 µm, 2.7 µm, and 0.9 µm and subsequently scale them by a factor of 1.2 in two dimensions in order to obtain the desired spheroidal shape. We assume an average nuclear diameter of about 5 µm, as reported in Cooper (2000). Diameters for both cytoplasm and nucleolus follow from Lodish et al. (2004) and Guttman and Halpern (1935), who report nucleus-to-cell and nucleus-to-nucleolus volume ratios of 0.1 and 30.0, respectively. The geometry with all cell constituents as used in subsequent finite element analyses is illustrated in Figure 1. In order to further elucidate the effect of an increasing nucleus-to-cell volume ratio, as observed experimentally (Berman, 2011), we consider a range of geometries with increasing nuclear and nucleolar volumes. For all of these geometries, we hold fixed the volume of the cytoplasm. Furthermore, we assume a constant nuclear-to-nucleolar volume ratio for both healthy and cancerous cells, as observed by Guttman and Halpern (1935). Cell-to-cell differences and experimental uncertainties notwithstanding, the preponderance of the observational evidence suggests that the cytoplasm, nucleus and nucleolus are ordered in the sense of increasing stiffness. Neglecting viscous effects, we model the elasticity of the different cell constituents by means of a Mooney-Rivlin-type strain energy density expressed in terms of the deformation gradient F, the Jacobian of the deformation J = det(F), and material parameters µ_1, µ_2 and κ. For both cytoplasm and nucleus in cancerous cells, material parameters corresponding to the data reported by Kim et al. (2011) are chosen and summarized in Table 1. We additionally infer the elastic moduli of the nucleolus from Konno et al.
(2013) based on a comparison of the relative stiffnesses of the nucleoli and other nuclear domains. For membrane elements of the plasma membrane and nuclear envelope, we choose material parameters corresponding to the cytoplasm and nucleoplasm, respectively. Furthermore, we infer matrix parameters from the shear moduli reported by Schrader et al. (2011) for normal and fibrotic livers. For all parameters, we resort to small-strain elastic moduli conversions, with a Poisson's ratio of 0.49 to simulate a nearly incompressible material, in order to match experimental values with constitutive parameters. We vary the stiffness of both cellular components and extra-cellular matrix (ECM) within a pathophysiologically-relevant range in order to investigate the effect of cell softening and ECM stiffening on eigenfrequencies. Finally, we assume both cytoplasm and nucleoplasm to have a mass density of 1 g/cm³, a value reported by Moran et al. (2010) as an average cell density, and we set the density of the nucleolus to 2 g/cm³ (Birnie, 1976).
Verification against analytical solutions
Based on the elastic model described in the foregoing, the remainder of the paper is concerned with the computation of the normal modes of vibration of healthy and cancerous cells using finite elements. To this end, we begin by assessing the accuracy of the finite element model used in subsequent calculations by means of comparisons to exact solutions. We consider a single spherical free-standing cell and compare numerically computed eigenmodes with the analytical solution of Kochmann and Drugan (2012) for a free-standing elastic sphere with an elastic spherical inclusion. In the harmonic range, the finite-element discretization of the model leads to the standard symmetric linear eigenvalue problem KÛ = ω²MÛ, where K and M are the stiffness and mass matrices, respectively, ω is an eigenfrequency of the system and Û is the corresponding eigenvector, subject to the normalization condition Û^T M Û = 1. For the spherical geometry under consideration, the modal analysis has been carried out analytically in closed form by Kochmann and Drugan (2012). For the homogeneous sphere, they find that the natural frequencies ω_i follow as the roots of a characteristic function of the frequency, where b is the outer radius and λ and µ are the Lamé constants. Furthermore, Kochmann and Drugan (2012) report analytical solutions for an isotropic linear-elastic spherical inclusion of radius a, moduli λ_1 and µ_1, within a concentric isotropic linear-elastic coating of uniform thickness of outer radius b, moduli λ_2 and µ_2. In this case, the eigenfrequencies ω_i follow from the characteristic equation det A = 0, where A is expressed in terms of dimensionless material and geometric quantities, with x = a/b.
Table 2 Constitutive parameters: solid sphere 10⁻³, 10⁻³, 3, 30; spherical inclusion 10⁻³, 10⁻³, 3, 30
Table 3 shows a comparison between analytical and finite-element values of the fundamental frequency of a solid sphere and a sphere with a high-contrast spherical inclusion for the particular choice of parameters listed in Table 2. The finite-element values correspond to a mesh of ≈ 15,000 linear tetrahedral elements, representative of the meshes used in subsequent calculations. As may be seen from the table, the finite-element calculations may be expected to result in frequencies accurate to ∼ 10⁻³ relative error.
Eigenfrequencies and eigenmodes
In order to obtain a first estimate of the spectral gap between cancerous and healthy cells, we consider the eigenvalue problem of single free-standing cells.
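In practice, the modal analysis step defined by the eigenvalue problem above is a generalized symmetric eigenproblem of the form KÛ = ω²MÛ. The sketch below solves a deliberately tiny stand-in system with SciPy; the 2×2 matrices are placeholders and not the finite-element matrices of the cell model.

import numpy as np
from scipy.linalg import eigh

K = np.array([[4.0, -1.0],
              [-1.0, 2.0]])  # symmetric stiffness matrix (placeholder values)
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # symmetric positive-definite mass matrix (placeholder values)

eigvals, eigvecs = eigh(K, M)   # solves K v = w^2 M v
omegas = np.sqrt(eigvals)       # natural frequencies in rad/s

# eigh returns M-orthonormal eigenvectors (U^T M U = I), consistent with the
# mass-normalized modes used in the modal equations of motion later on.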
To this end, cytoplasm, nucleoplasm and nucleolus are discretized using linear tetrahedral elements, while linear triangular membrane elements are used for the plasma membrane and nuclear envelope. A typical finite element mesh for a cell geometry with a ratio of n/c = 1 and a total of 40,349 elements is shown in Figure 2. For the calculation of eigenfrequencies, meshes containing ∼ 16,000 elements are used for geometries ranging from a ratio of n/c = 1 to n/c = 2. In addition to the accuracy assessment of Section 2.2, we have assessed the convergence of the free-standing cell finite-element model by considering five different meshes of 2,171, 3,596, 8,608, 11,121, and 15,215 elements. From this analysis, we find that the accuracy in the lowest eigenfrequency for the finest mesh is of the order of 0.2%, which we deem sufficient for present purposes. All cell constituents are modeled by means of the hyperelastic Mooney-Rivlin model described in Section 2.1, with the material constants of Table 1. Figure 3 shows the calculated lowest eigenfrequency, rigid-body modes excluded, for different cell geometries and varying material properties. In the calculations, the nucleolus/nucleoplasm-to-cytoplasm volume ratio is increased incrementally in the range of n/c = 1.0 to n/c = 2.0, resulting in six different test geometries. Furthermore, material properties are varied within a pathophysiologically relevant range, whereby a value of 100% cancerous potential corresponds to the values presented in Table 1. Since cancerous cells are more than 80% softer than healthy cells (Cross et al., 2007), we vary the elastic moduli in Table 1, with full values representing cancerous cells and increased moduli representing healthy cells. In addition, decreased elastic moduli of the extracellular matrix (ECM) are expected in healthy tissues (Schrader et al., 2011). Figure 3 summarizes the calculated lowest eigenfrequency for different percentages of the parameter values presented in Table 1. Thus, a cancerous potential of 80% reflects an increase in elastic moduli of 20% for material parameters of the different cell constituents with a simultaneous decrease in elastic moduli of the ECM by 20%. The shaded area in Figure 3 illustrates a typical gap in the lowest natural frequency for a nucleolus/nucleoplasm-to-cytoplasm volume ratio of n/c = 1.0, with ω = 501,576 rad/s for cancerous cells and ω = 271,764 rad/s for a reduction in cancerous potential by 80%, the expected value for healthy cells (Cross et al., 2007). An even higher spectral gap is recorded by additionally taking the growth in nucleolus/nucleoplasm-to-cytoplasm volume ratio into account, as experimentally observed in cancerous cells (Berman, 2011). A more detailed comparison of the spectra of free-standing healthy and cancerous cells, corresponding to cancerous potentials of 20% and 100%, respectively, is presented in Table 4, which collects the computed lowest ten eigenfrequencies for a cell geometry with volume ratio n/c = 1.0. From this table we observe that free-standing cancerous cells have a ground eigenfrequency of the order of 500,000 rad/s, whereas free-standing healthy cells have a ground eigenfrequency of the order of 270,000 rad/s, or a healthy-to-cancerous spectral gap of the order of 230,000 rad/s. In addition, their higher eigenfrequencies overlap with the ground eigenfrequency of cancerous cells. Therefore, special attention is required to examine whether or not excitation of cancerous cells might trigger healthy cells to resonate.
Indeed, figures of merit other than natural frequency, including growth rates of resonant modes and energy absorption, may also be expected to play an important role in differentiating the response of cancerous and healthy cells. These additional figures of merit are investigated in Section 2.6.
The eigenmodes corresponding to different resonance frequencies for a ratio of n/c = 1.0 and a cancerous potential of 100% are shown in Figure 4. It may be noted from the figure how each mode represents different characteristic deformation mechanisms of the various cell constituents. Knowledge of the precise modal shape may therefore help to target lysis of specific cell components. Thus, shear deformation may be expected to dominate at a frequency of 558,031 rad/s, whereas volumetric deformations may be expected to be dominant at 576,073 rad/s. These differences in deformation mode open the way for targeting specific cell constituents for lysis, such as the plasma membrane at a frequency of 816,846 rad/s.
Bloch wave analysis
The preceding spectral analysis for a free-standing cell can be extended to a tissue consisting of a periodic arrangement of cells embedded into an extracellular matrix. In this case, the analysis can be carried out by recourse to standard Bloch wave theory. Within this framework, the displacement field is assumed to be of the form u(x) = û(x) exp(ik·x), where k is the wave vector of the applied harmonic excitation and the new unknown displacement field û(x) is defined within the periodic cell (Bloch, 1929).
Table 4: Comparison of the lowest ten eigenfrequencies for a cell geometry with a ratio of n/c = 1.0 and a cancerous potential of 100% (cancerous) versus a cancerous potential of 20% (healthy) obtained from a free vibration analysis.
By periodicity, the values of wave vector k can be restricted to the Brillouin zone of the periodic lattice. Substitution of representation (8) into the equations of motion results in a k-dependent eigenvalue problem. The corresponding eigenfrequencies ω_i(k) define the dispersion relations of the tissue. Details of the implementation of Bloch-wave theory in elasticity and finite-element analysis may be found in Krödel et al. (2013). In the present analysis, we consider a cubic unit cell of size a = 15 µm and the finite element discretization shown in Figure 2. The extracellular matrix (ECM), not shown in the figure, is also discretized into finite elements. Figure 6 shows the first irreducible Brillouin zone, which is itself a cube of size 2π/a. In order to visualize the dispersion relations, we choose the k-path along the edges and specific symmetry lines of the Brillouin zone also shown in Figure 6. The path allows for the elliptical shape of the cells, with only one symmetry axis. The computed dispersion relations for both cancerous and healthy cells then follow as shown in Figures 7 and 8, respectively, for the lowest 50 eigenfrequencies. Similarly to the calculations presented in Section 2.3 for free-standing cells, the lowest eigenfrequencies of the healthy tissue are shifted uniformly towards lower values with respect to the eigenfrequencies of the cancerous tissue, with significant spectral gaps of the order of 200,000 rad/s between the two. We recall that the computed ground eigenfrequency of free-standing cancerous cells is of the order of ω ∼ 500,000 rad/s. In addition, from the properties of Table 1 we may expect a cancerous-tissue shear sound speed of the order of c = √(E/(2ρ(1+ν))), ranging from c ∼ 0.8 m/s (cytoplasm) to c ∼ 7.2 m/s (nucleolus).
Therefore, at resonance, the corresponding wave number of the applied harmonic excitation is of the order of k ∼ ω/c ∼ 725,000 rad/m (cytoplasm) and k ∼ 69,444 rad/m (nucleolus) or, correspondingly, a wavelength of the order of λ ∼ 2π/k ∼ 10⁻⁵ m (cytoplasm) and λ ∼ 9·10⁻⁵ m (nucleolus), which is larger than a typical cell size. It thus follows that the regime of interest here is the long-wavelength regime, corresponding to the limit of k → 0 in the preceding Bloch-wave analysis. Consequently, in the remainder of the paper we restrict attention to that limit. The corresponding boundary value problem takes the form sketched in Figure 5 and consists of the standard displacement elastodynamic boundary value problem with harmonic displacement boundary conditions applied directly to the boundary.
ω_n [rad/s] (cancerous): 501576, 502250, 532132, 537569
r_n,cancerous ||Û_n|| [µm/s]: 8.862·10⁶, 9.179·10⁶, −3.898·10⁸, −2.863·10⁷
ω_n [rad/s] (healthy): 496165, 496165, 519049, 545277
r_n,healthy ||Û_n|| [µm/s]: −3.882·10⁶, −3.882·10⁶, −2.032·10⁶, −0.335·10⁶
Figure 9: Comparison of u_n during transient response simulations in the linearized kinematics framework for a cancerous cell (with a cancerous potential of 100%) excited at one of its resonance frequencies ω_c and a healthy cell (with a cancerous potential of 20%) excited at an eigenfrequency ω_h which is closest to ω_c.
Resonant growth rates
The spectral gap, or gap in the lowest eigenfrequencies, between healthy and cancerous cells and tissues provides a first hint of sharp differences in the response of healthy and cancerous tissue to harmonic excitation. In particular, the preceding analysis shows that the fundamental frequencies of the cancerous tissue may be in close proximity to eigenfrequencies of the healthy tissue, which appears to undermine the objective of selective excitation of the cancerous tissue. However, a complete picture requires consideration of the relative energy absorption characteristics and growth rates of resonant modes. To this end, we consider the modal decomposition of the displacement field, u(x, t) = Σ_{n=1}^{N} u_n(t) Û_n(x), where (Û_n)_{n=1}^{N} are eigenvectors obeying the orthogonality and normalization condition (3) and (u_n(t))_{n=1}^{N} are time-dependent modal amplitudes obeying the modal equations of motion ü_n(t) + ω_n² u_n(t) = F_n(t). In this equation, ω_n is the corresponding eigenfrequency, F_ext(t) is the external force vector and F_n(t) = Û_n · F_ext(t) is the corresponding modal force. For a harmonic excitation of frequency ω_ext, eq. (10) further specializes to ü_n(t) + ω_n² u_n(t) = F_n cos(ω_ext t), where now F_n is a constant modal force amplitude. At resonance, ω_ext = ω_n, the amplitude of the transient solution starting from quiescent conditions grows linearly in time and the transient solution follows as u_n(t) = (F_n / (2ω_n)) t sin(ω_n t). We thus conclude that the growth rate of resonant modes is r_n = F_n / (2ω_n). Figure 9 shows the growth properties of r_n for two different cases. In the first case, a cancerous cell is excited at its resonant frequency of ω_c = 501,576 rad/s, whereas the healthy cell is excited at its closest resonance frequency of ω_h = 496,165 rad/s. In the second case, eigenfrequencies of 538,512 rad/s and 545,277 rad/s are investigated. The simulations reveal that the growth rate of the resonant response of the cancerous cells is much faster than that of the healthy cells, which opens a window for selectively targeting the former.
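The linear-in-time growth of the resonant amplitude derived above is easy to verify numerically. The following sketch integrates a single modal oscillator driven exactly at resonance and compares the response with the predicted envelope r_n t = F_n t/(2ω_n); the frequency and force amplitude are illustrative values only, not modal quantities from the cell model.

import numpy as np
from scipy.integrate import solve_ivp

w = 5.0e5   # modal eigenfrequency [rad/s], illustrative
F = 1.0     # modal force amplitude, illustrative

def rhs(t, y):
    u, v = y
    return [v, F * np.cos(w * t) - w**2 * u]   # u'' + w^2 u = F cos(w t)

t_end = 200.0 * 2.0 * np.pi / w   # two hundred excitation periods
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=2.0 * np.pi / (50.0 * w))

envelope = F / (2.0 * w) * sol.t   # predicted resonant envelope r_n * t
# sol.y[0] oscillates as (F / (2 w)) * t * sin(w t) and stays within this envelope.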
Transient response at resonance
The preceding analysis has been carried out with a view to understanding the resonant response of cells and tissues under harmonic excitation in the harmonic range. In this section, we seek to confirm and extend the conclusions of the harmonic analysis by carrying out fully nonlinear implicit dynamics simulations of the transient response of healthy and cancerous cells under resonant harmonic excitation. In this analysis, a geometry of ratio n/c = 1 is considered, Figure 2, together with the material parameters of Table 1. We restrict attention to the long wavelength limit, i.e. to ultrasound radiation of wavelengths larger than the cell size. In keeping with this limit, we enforce harmonic displacement boundary conditions directly as shown in Figure 1 in order to mechanically excite the cell. The strength of the harmonic excitation used in the calculations is û_0 = 0.04 µm. In the simulations, we track the transient amplification of the cell response up to failure. We assume that failure occurs when the stress in the cytoskeletal polymer network, which constitutes the structural support for cell membranes, reaches a threshold strength value. Lieleg et al. (2009) found that the macroscopic network strength can be traced to the microscopic interaction potential of cross-linking molecules and other cytoskeletal components such as actin filaments. Here, we assume a rupture strength of the order of 30 Pa based on strength values of a single actin/cross-linking protein bond reported in Lieleg et al. (2009).
Figure 10 shows the fully-nonlinear transient response of healthy and cancerous cells at the resonant frequency of the latter. It can be seen from the figure that stresses in both the plasma membrane and nuclear envelope of the cancerous cell grow at a much faster rate than in healthy cells. For the harmonic excitation under consideration, the stress in the nuclear envelope of the cancerous cell reaches the rupture strength at time t_lysis ≈ 71 µs, while, at the same time, the level of stress in the healthy cells is much lower. Figure 11 furthermore illustrates the kinetic and potential energy of the nuclear envelope during excitation at resonance of both healthy and cancerous cells. From the transient response simulations, the energy that needs to be supplied until the point of rupture is W = ∫₀^{t_lysis} ∫_{∂Ω} t · u̇ dS dt ≈ Σ_i Σ_j F_j(t_i) · [u_j(t_i) − u_j(t_{i−1})], where t is the applied traction on the boundary ∂Ω, u is the displacement vector, F_j(t_i) is the force acting on surface node j at time t_i, and u_j(t_i) is the corresponding displacement vector. For a cell geometry with a ratio of n/c = 1.0 and a cancerous potential of 100%, the calculations give a value of 228 pJ for the energy per cell required for lysis. Assuming an average cell size of 20 µm, a time to lysis of 70 µs and a tumor of 1 cm in size, this energy requirement translates into a power density requirement in the range of 0.8 W/cm².
Discussion and outlook
In this study, we have presented numerical calculations that suggest that spectral gaps between hepatocellular carcinoma and healthy cells can be exploited to selectively bring the cancerous cells to lysis through the application of carefully tuned ultrasound harmonic excitation, while keeping healthy cells intact. We refer to this procedure as oncotripsy. A normal mode analysis in the harmonic range reveals the existence of a healthy-to-cancerous spectral gap in ground frequency of the order of 230,000 rad/s, or 36.6 kHz.
Further analysis of the growth rates of the transient response of the cells to harmonic excitation reveals that lysis of cancerous cells can be achieved without damage to healthy cells. These findings point to oncotripsy as a novel opportunity for cancer treatment via the application of carefully tuned ultrasound pulses in the frequency range of 80 kHz, duration in the range of 70 µs and power density in the range of 0.8 W/cm². This type of ultrasound actuation can be readily delivered, e.g., by means of commercial low-frequency and low-intensity ultrasonic transducers. Evidently, the present numerical calculations serve only as preliminary evidence of the viability of oncotripsy, and further extensive laboratory studies would be required in order to confirm and refine the present findings and definitively establish the viability of the procedure.
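As a closing plausibility check of the dose figures quoted above, the arithmetic below reproduces a power density of roughly 0.8 W/cm² from the 228 pJ per-cell lysis energy, the 20 µm cell size and the 70 µs exposure, under our assumption that the quoted value refers to the cross-section of a single layer of cells; the paper does not spell out this convention.

energy_per_cell_j = 228e-12   # energy to lysis per cell [J]
cell_size_m = 20e-6           # average cell size [m]
time_to_lysis_s = 70e-6       # exposure time [s]

cells_per_cm2 = (1e-2 / cell_size_m) ** 2   # one cell layer: 2.5e5 cells per cm^2
power_density_w_per_cm2 = cells_per_cm2 * energy_per_cell_j / time_to_lysis_s
print(power_density_w_per_cm2)              # ~0.81 W/cm^2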
2015-12-07T20:32:16.000Z
2015-12-07T00:00:00.000
{ "year": 2016, "sha1": "3329e4082bec17709c8bf704f83554b8c622225d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1512.03320", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3329e4082bec17709c8bf704f83554b8c622225d", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Physics", "Biology" ] }
33897645
pes2o/s2orc
v3-fos-license
Pars plana vitrectomy for posterior surface calcification in a silicone intraocular lens in asteroid hyalosis – a report of mistaken identity? Dear editor, Mehta et al recently reported removal of dystrophic calcification on the posterior surface of a silicone intraocular lens (IOL) in a patient with asteroid hyalosis. 1 In this case the authors used pars plana vitrectomy (PPV) to successfully remove calcified deposits. We have recently tried unsuccessfully to use PPV to treat an 86-year-old patient with calcification of a silicone IOL in the presence of asteroid hyalosis. We chose to avoid IOL exchange due to a history of Fuchs endothelial dystrophy and glaucoma in the left eye, and a failed corneal graft in a rubeotic eye on the right. Our patient did not have an intact posterior capsule, having been treated with Nd:YAG capsulotomy 2 years previously, before the calcification occurred. A 25-gauge pars plana vitrectomy was performed after attempting, unsuccessfully, to remove the deposits from the posterior surface of the IOL with Nd:YAG laser. It was felt that PPV would be less likely to cause corneal decompensation than IOL exchange. In contrast to the experience of Mehta et al, 1 we found that the vitrector was unable to effectively remove many of the calcified deposits even with high vacuum. Instead, the use of microforceps and a silicone-tipped flute was required to 'polish' the IOL deposits (see Supplementary Video). Although we were able to remove a significant amount of the calcification in this way, the results were ultimately disappointing due to smearing of the deposits, which were found to be 'putty-like' in consistency. Although there was some improvement in the clarity of the IOL on retroillumination, the patient reported very limited improvement in vision (6/18 preoperatively to 6/12 immediately after surgery), and on transillumination the posterior surface was significantly smudged (Figure 1). Mehta et al reported that initially their patient underwent an unsuccessful Nd:YAG capsulotomy; however, it was not made clear whether the posterior capsule remained intact. 1 It has previously been suggested that direct contact of the IOL with the vitreous cavity caused by Nd:YAG capsulotomy accelerates the deposition of calcification on silicone IOLs, 2 and other authors of larger case series of this condition have reported it to be mostly found after initial Nd:YAG capsulotomy, 3,4 although reports of dystrophic calcification in the presence of an intact capsule do exist. 5 Given our experience, we are unclear whether the photo presented demonstrates typical fibrous capsular opacity, or calcification of the posterior capsule, rather than a true case of dystrophic calcification of the IOL optic. We believe this is an important distinction to make since we feel that vitrectomy is only likely to be successful if the posterior capsule is intact. A similar case of this condition has been published recently by Ullman and Gupta. 5 They also used vitrectomy to treat dystrophic calcification, but found it to be ineffective at removing established calcification on the posterior IOL surface on more than one occasion. They suggested using prophylactic PPV to treat dystrophic calcification at an early stage before symptoms become established.
We believe that vitrectomy and surgical capsulectomy should be considered as an alternative to Nd:YAG capsulotomy where IOL calcification is not yet established. It is likely that opening a significantly calcified posterior capsule by Nd:YAG capsulotomy in the presence of asteroid hyalosis will lead to significant future IOL dystrophic calcification. In established calcification of a silicone IOL, we feel that vitrectomy is of limited value and IOL exchange should remain the standard treatment.
We appreciate Drs Rainsbury and Lochhead sharing their experience and insight into the treatment of dystrophic calcification of silicone intraocular lenses (IOL). As they point out, it intuitively makes sense that the presence or absence of a posterior capsule changes the natural history and management options of this pathologic calcification. In our case, 1 because of the poor visual potential in the fellow eye from retinal scarring and atrophy secondary to age-related macular degeneration, we were reluctant to proceed with IOL exchange, though we would have done so if necessary. The patient had already received two prior unsuccessful attempts with the Nd:YAG laser to remove the dystrophic material. During the vitrectomy, no clean surgical plane developed, and a modest degree of mechanical rubbing of the vitrectomy port against the lens optic using a high vacuum with a low cut rate was required to clear the central axis. The material appeared to be adherent directly to the posterior surface of the lens optic, and not to any residual posterior capsule, suggesting the posterior capsule was absent centrally. Had the posterior capsule been present, we suspect it would have been easier to remove the dystrophic material. In the setting of monocular patients (as in our case) or those with compromised capsular support (as in the case described by Drs Rainsbury and Lochhead), we believe it is reasonable to attempt vitrectomy prior to IOL exchange in these cases of dystrophic calcification. Though vitrectomy might not be successful in every case, removing the vitreous alone may make subsequent IOL exchange safer, and prevent recurrence of dystrophic calcification.
2016-05-12T22:15:10.714Z
2014-11-13T00:00:00.000
{ "year": 2014, "sha1": "b8aeedbe795a5d34b620bf7cea5f369741491432", "oa_license": "CCBY", "oa_url": "https://www.dovepress.com/getfile.php?fileID=22414", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ebd60deff8ec7720a740ec2d52ddd67d71d0af52", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257695200
pes2o/s2orc
v3-fos-license
Effects of using WeChat/WhatsApp on physical and psychosocial health outcomes among oncology patients: A systematic review The purpose of this systematic review is to summarize the potential effects of the WeChat and WhatsApp mobile applications in cancer management. This systematic review was written in accordance with PRISMA guidelines. CINAHL, PubMed, ProQuest Nursing and Allied Health Database, PsycINFO, PsycARTICLES, and ERIC were utilized for the literature search. Articles were included if they evaluated the outcomes of using WeChat/WhatsApp for cancer management, and excluded if they were qualitative studies, not published in peer-reviewed journals, protocols for a future study, or conference abstracts. 20 studies were included in this systematic review, with a total sample of 3110 participants. Interventions were utilized to share educational information with participants, follow-up after surgical operations, and in clinical communication. Outcomes, including pain, medication adherence, self-efficacy, quality of life, and depression, were statistically significantly improved in the WeChat/WhatsApp intervention groups in comparison to the control groups or to baseline measurements of the study participants. Outcomes of sleep and rehospitalization rate were improved without reaching statistical significance. Outcomes of anxiety, fatigue, and adverse drug reactions were found to be conflictive among included studies. This systematic review suggested that use of WeChat/WhatsApp on cancer management might improve various physical and psychosocial health outcomes among oncological patients. Limitations of the study include solely reviewing English language articles published in academic journals and most of the studies being from one country. Future research should be conducted in various countries among diverse communities, including rural areas, to ascertain the effects of WeChat/WhatsApp in different populations. Introduction Cancer is a disease of high prevalence in the world. In 2018, 18.1 million new cancer cases and 9.5 million cancer-related deaths were reported worldwide. 1 Based on data from 2015 to 2017, an estimated 39.5% of men and women will be diagnosed with cancer at one point during their lifetimes. By 2040, the annual number of new cancer cases is estimated to be 29.5 million, with 16.4 million cancer-related deaths. 1 Cancer management poses a serious financial burden on health systems, with an estimated national expenditure of 150.8 billion dollars in the United States in 2018. 1 Cancer also poses significant physical, social, and financial burdens on patients, their caregivers, families, and governments. The introduction of technology into health care delivery has allowed for the management of noncritical care within the community, reduced hospitalizations and lowered costs to health systems. 2 The term of mHealth is defined as a health practice supported by mobile devices, including mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices. 3 The integration of mHealth into health care allows for a more personalized, participatory, preventative, accessible, and cost-effective approach to health care delivery. 2 The use of WeChat/ WhatsApp in cancer management is a form of mHealth. WhatsApp is the most popular messaging service application in more than 100 countries with over 2.5 billion active users. 
The greatest use of WhatsApp is in India (390 million users), followed by Brazil (108 million users) and the United States (75 million users). 4 WeChat is among the top applications in the world based on user count, currently with over one billion active users and an average of 19 million daily active users in the United States. 5,6 It is the most popular social network application in China and is used by 78% of the 16-64 age group in the country. 5 The use of WeChat/WhatsApp as a mode of mHealth effectively capitalizes on its familiarity, since patients can readily navigate the platform and utilize its features. Although some literature reviews have synthesized the results of using WeChat/WhatsApp for chronic disease management, there is a lack of specific knowledge synthesis on their use for cancer management. In a recent literature review, researchers investigated the value of WeChat in chronic disease management, including diabetes, hypertension, coronary heart disease, and cancer. 7 WeChat was found to facilitate quick communication between providers and patients, acting as an effective tool for health promotion and follow-up appointments. 7 The absence of an updated review regarding the use of WeChat/WhatsApp in cancer management creates a knowledge gap in this emerging field. In response, the research questions of this review were: (1) What are the effects of WeChat/WhatsApp-related cancer management interventions on patients' physical health outcomes? (2) What are the effects of WeChat/WhatsApp-related cancer management interventions on patients' psychosocial health outcomes? Methods The principles of the PRISMA statement 8 were followed in the process of conducting this review. Papers were included if they investigated the use of WeChat or WhatsApp platforms as interventions for patients 18 years or older with oncological diseases. We included quantitative studies published in English peer-reviewed journals. Papers were excluded if they: (a) had a study population that did not relate to oncological diseases, (b) investigated interventions that were not associated with WeChat or WhatsApp, (c) were qualitative studies, (d) were not published in peer-reviewed journals, (e) were protocols for a future study, (f) were conference abstracts, reviews, cross-sectional or case studies, editorials, news articles, patent documents or commentaries. This systematic review did not use animals or recruit human participants. Thus, an ethics approval was not needed. Databases, such as CINAHL, PubMed, ProQuest Nursing and Allied Health Database, PsycINFO, PsycARTICLES, and ERIC, were selected for the literature search (Figure 1). The databases were systematically searched in the field of Title and Abstract using a combination of the keywords, ((cancer) OR (oncolog*) OR (tumor)) AND ((WeChat) OR (WhatsApp)), in May 2021 by two reviewers (TH & PZ). Consistency in screening was ensured via independent evaluation by two researchers (TH & PZ), and discussions to reach consensus. The citations were exported into EndNote software to remove any duplicates. The rest of the citations were screened for relevance based on the established inclusion and exclusion criteria. Eligible articles were searched for full text documents. Then, the full text documents were carefully reviewed, with reasons for exclusion noted. Further, a manual search was conducted in the reference lists of eligible articles for additional papers not found in the electronic search.
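For illustration only, the Boolean search string above could be run programmatically against PubMed as sketched below using Biopython's Entrez interface; the authors queried the six databases through their own search interfaces, so this snippet, including the [tiab] field tags and the placeholder e-mail address, is an assumption rather than their actual procedure.

from Bio import Entrez

Entrez.email = "reviewer@example.org"   # NCBI requires a contact address (placeholder)

query = "(cancer[tiab] OR oncolog*[tiab] OR tumor[tiab]) AND (WeChat[tiab] OR WhatsApp[tiab])"
handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]   # candidate records for title/abstract screening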
In December 2021, another systematic search was conducted to update most recent publications. The Critical Appraisal Skills Program Checklists have been applied as quality assessment tools to assess the included papers (Appendix 1). 9 Using these checklists and consulting current literature in the field, we classified the quality of papers as low, moderate, or high according to the final score. [10][11][12][13] Two researchers (TH & PZ) independently evaluated each article; any discrepancies in ratings were discussed to reach a consensus. All papers included in this review were with moderate to high quality. Two reviewers (TH & PZ) independently extracted data according to pre-determined criteria. From each paper, various data including authors' information, publication year, sampling strategies, sample characteristics, study design, features of the intervention, outcomes, measurements, significant findings, and limitations were extracted. The extracted data were entered into an Excel spreadsheet. The reviewers discussed disagreements in data extraction in order to reach a consensus. Descriptive statistics (e.g. mean and percentage) were used to describe the characteristics of included studies. Then, thematic analysis was used to summarize the findings for each research question. Two reviewers (TH & PZ) independently conducted the thematic analysis. Categorization results were compared and any disagreements were resolved with a consensus decision. Due to the heterogeneity of the measurement tools used by the included studies, a meta-analysis was not performed since attempting to combine different measurements for the same variable would be inappropriate. Characteristics of included papers Twenty published papers were included in this review. In all, 16 (80%, 16/20) studies were conducted in China, two in Italy, one in Mexico and one in Iran. Eighteen (90%, 18/20) studies were conducted in a developing country, as per the United Nations standards. 14 Fifteen (75%, 15/20) studies were solely based in the community, 1 (5%, 1/20) was solely based in the hospital, and 4 (20%, 4/20) studies were set in the hospital and community. Three (15%, 3/20) studies were conducted with a rural population focus, and 3 (15%, 3/20) pilot and feasibility studies were conducted during the COVID-19 pandemic. The total sample size of the 20 included studies was 3110 ( Table 1). Characteristics of the interventions The included studies explored WhatsApp and WeChat interventions in a diverse oncological patient population, including those with breast (4/20, 20%), prostate (1/20, 5%), uterus (2/20, 10%), colon (1/20, 5%), thyroid (1/20, 5%), head/neck (1/20, 5%), lung (2/20, 10%), maxillofacial (1/20, 5%), laryngeal (1/20, 5%), or multiple (6/20, 30%) cancers ( Table 2). The majority (95%, 19/20) of the studies investigated the interventions in a postsurgical, post-chemotherapy, or post-radiation study population. Among the included studies, 16 (80%, 16/20) interventions were associated with WeChat, and four (20%, 4/20) were associated with WhatsApp. The studies displayed a range of features, with the most common being educational information regarding disease symptoms and treatment, evidenced in 14 (70%, 14/20) studies. [15][16][17][18][19][20][21][22][23][24][25][26][27][28] Two (10%, 2/20) studies incorporated push messages to remind patients to exercise or access the resources on the application. 
Text messages were the most frequently used mode of information delivery, as identified in 17 (17/20, 85%) studies. [15][16][17][18][19][20][21][22][23][24][25][27][28][29][30][31][32] To engage the participants, seven (35%, 7/20) studies utilized audio and/or video calls as part of the intervention. 17,23,24,26,31,33,34 The duration of the intervention ranged from one-time use 28 to two and a half years, 18 with the average duration being 4 months. The most frequent use of the intervention was daily, in eight (40%, 8/20) studies, 15,17,22,23,25,27,29,32 followed by non-consecutive use, in five (25%, 5/20) studies. 16,18,19,24,30 All of the interventions involved a health care provider, most frequently a physician, as indicated by 12 (60%, 12/20) studies. 15,[17][18][19]25,[27][28][29][30][31][32]34 Additional health care providers included nurses, [16][17][18][21][22][23][25][26][27][28][29] researchers, 15,27 pharmacists, 17,32 and others (psychologists, health education specialists, dietitians, sex counselors, liaison officers). 15,17,25

Physical health outcomes

Pain. Six (30%, 6/20) studies (four randomized controlled trials, one quasi-experimental non-randomized trial, and one pre-post study) reported improved pain control using instruments such as the Numeric Rating Scale, 16,19,20,27 Visual Analogue Scale, 28 and Brief Pain Inventory. 32 Four of the experimental trials found statistically significantly lower pain scores or a decrease in the number of patients with intense pain in the WeChat intervention group compared to routine care (p ≤ 0.05). 19,20,28,32 Complementing these findings, a pre-post study found a statistically significant decrease in mean pain score from baseline after 3 months (p ≤ 0.001). 16 In one randomized controlled trial, a non-statistically significant decrease in median pain score was reported for the WeChat-based multimodal nursing program after 6 months. 27 The intervention features varied across the six studies: one-time information communication, 28 3 days of communication and educational materials, 19 daily interactions for 1 month, 32 weekly sharing of educational materials and communication for 2 months, 20 non-consecutive communication and education for 3 months, 16 and daily delivery of rehabilitation information for 6 months (Table 3). 27

Fatigue. Three randomized controlled trials (15%, 3/20) reported conflicting findings related to the fatigue outcome using a Numerical Rating Scale, 27 the Cancer Fatigue Scale, 15 and the Piper Fatigue Scale. 17 In a randomized controlled trial among a breast cancer population, participants of the WeChat intervention group received daily rehabilitation information through text messages. 27 The intervention group had an increase in the fatigue median score after 6 months. 27 However, the other randomized controlled trials, on breast and cervical cancer patients, reported significantly lower fatigue mean scores after a 7-week WhatsApp intervention (p = 0.005) and statistically significantly lower fatigue scores after a 3-month WeChat intervention (p = 0.000). 15,17 Daily communication through WeChat occurred as part of a multidisciplinary collaborative continuous nursing intervention, involving guidance on diet, medication, pain control, daily activities, and social behaviors. 17 In the other randomized controlled trial, daily educational messages were sent through WhatsApp regarding breast cancer treatment, fatigue, body image, religion, and cognitive behavioral therapy. 15
Sleep. Three (15%, 3/20) studies (three randomized controlled trials) reported improved sleep outcomes using the Pittsburgh Sleep Quality Index 17,19 and a Numerical Rating Scale. 27 In non-specified and breast cancer populations, better sleep quality mean scores and improved sleep median scores were found in the WeChat intervention groups after 3 days and 6 months, respectively. 19,27 Another randomized controlled trial reported significant improvements in sleep quality in a cervical cancer study population undergoing the WeChat intervention after 3 months (p < 0.001). 17

Medication adherence. Two (10%, 2/20) studies (one randomized controlled trial, one quasi-experimental non-randomized trial) reported improved medication adherence using the Medication Possession Ratio 20 and the Morisky Medication Adherence Measure. 32 Both studies found statistically significant higher medication possession ratio scores after 2 months (p = 0.031) and increased complete adherence rates after 4 weeks (p < 0.001) in a non-specified cancer population. 20,32 The randomized controlled trial utilized WeChat to send daily pain diaries, adverse drug reaction forms every 3 days, and pain inventory forms every 15 days. 32 The quasi-experimental non-randomized trial provided information weekly on pain management through WeChat and telephone, while the control group received telephone follow-up. 20

Rehospitalization rate/incidence of side effects or adverse drug reactions. Five (25%, 5/20) studies (three randomized controlled trials, two non-randomized trials) reported conflicting findings related to the side effects outcome using the calculated Side Effects Incidence, 20 Treatment-Related Adverse Events, 19 Incidence of Patient Complications, 17 Risk Event Rate, 18 Adverse Drug Reactions Incidence, and Rehospitalization Rates. 32 In the studies with an intervention period ranging from 2 months to two and a half years, significantly lower incidences of complications, side effects, or risk events were noted in the WeChat intervention groups compared with the usual care or telephone control groups (p ≤ 0.05). 17,18,20 These studies investigated patients with cervical, thyroid, and non-specified cancers. In a randomized controlled trial with a 3-day intervention among patients with non-specified cancer, fewer adverse events were reported in the WeChat intervention, but the difference was not statistically significant. 19 However, in a 1-month randomized controlled trial with non-specified cancer patients, a significant increase (p = 0.003) in adverse drug reactions was observed in the WeChat intervention group that received daily pain diaries, adverse drug reaction forms, and the Brief Pain Inventory form biweekly. 32

Cough, urinary continence, treatment discomfort, survival. Some outcomes were only reported in a single study using a specific measurement tool, including cough using a Visual Analog Scale, 16 urinary continence using a 24-h pad test, 21 treatment discomfort using the Breast Cancer Treatment Discomfort Rating Scale, 25 clinical effectiveness according to the change in tumor volume, 28 and numerical survival using calculated Survivorship Values. 22 Significant decreases in cough mean scores were reported after 3 months of a WeChat intervention among a lung cancer study population (p < 0.001). 16
As well, after 1 year of a WeChat intervention, there was significantly improved self-reported continence in post-surgical prostate cancer patients compared to the participants' immediate post-surgical continence (p < 0.001). 21 Statistically significant lower treatment discomfort scores were reported among breast cancer patients after a WeChat and telephone intervention period of 3 months, compared to a telephone control group. 25 In a 4-year follow-up period of a randomized controlled trial, a non-statistically significant increase in survivorship values was reported when comparing the WeChat intervention group to the telephone control group. 22

Psychosocial health outcomes

Self-efficacy/confidence. Four studies (two randomized controlled trials, one pre-post study, one non-randomized trial) reported improved self-efficacy using the Self-Efficacy Scale, 25 Stoma Care Self-Efficacy Scale, 24 General Self-Efficacy Scale, 29 and the Chinese version of Strategies Used by People to Promote Health. 18 In the randomized controlled trials, statistically significant increases in self-efficacy mean scores were found in the WeChat intervention groups after 3 months (p < 0.001). 24,25 The non-randomized trial reported statistically significant increases in overall self-management efficacy in thyroid cancer patients after two and a half years of WeChat-based perioperative and conventional nursing, in comparison to solely conventional nursing (p < 0.001). 18 Complementing the aforementioned studies, the pre-post study reported a significantly increased self-efficacy mean score among laryngeal cancer patients after 1 month of the intervention (p < 0.05). 29

Quality of life. Seven (35%, 7/20) studies (six randomized controlled trials, one non-randomized controlled trial) reported increased quality of life outcomes using the Cancer-Related Quality Of Life Tool, 19 European Organization for Research and Treatment of Cancer Quality of Life Questionnaire, 22 Medical Outcome Study of the Quality of Life Inventory, 23 Stoma Quality of Life Scale, 24 Health-Related Quality of Life Functional Assessment of Cancer Therapy Breast Version, 27 Medical Outcomes Study 36-Item Short-Form, 17 and the Generic Quality of Life Inventory-74. 25 Increased quality of life scores were reported in all studies, with six studies reaching statistical significance (p ≤ 0.05). 17,[22][23][24][25]27 These six studies had intervention periods ranging from 3 months to 1 year. 17,[22][23][24][25]27 All six studies compared the effects of a WeChat intervention to a telephone, guidance manual, or routine standard of care control group. 17,[22][23][24][25]27 The research study participants had the following cancer types: rectal, cervical, lung, and breast. 17,[22][23][24][25]27 One randomized controlled trial comparing the WeChat intervention to usual care, for a period of 3 days, reported non-statistically significant improvements in quality of life scores among non-specified cancer patients. 19

Anxiety. Nine (45%, 9/20) studies (seven randomized controlled trials, two non-randomized controlled trials) reported conflicting findings related to the anxiety outcome using the Hospital Anxiety and Depression Scale, 22,23,25 the Self-Assessment Scale for Anxiety, 18 the Self-Rating Anxiety Scale, 17 General Anxiety Disorder-7, 19 Zung's Self-Rating Anxiety Scale, 26 State-Trait Anxiety Inventory, 24 and the Hamilton Anxiety Scale. 28
Statistically significant decreases in anxiety mean scores occurred in the studies with longer WeChat interventions (one and a half months to two and a half years) compared to telephone or guidance manual control groups (p < 0.05), 22,23,25 the sole exception being a one-time WeChat pre-surgical communication, which also reached significance (p < 0.05). 28 In another, shorter randomized controlled trial comparing a 3-day WeChat intervention with usual care, decreased anxiety mean scores were reported, but statistical significance was not reached. 19 One randomized controlled trial reported conflicting findings on anxiety for the information-based hospital-family integration continuous care intervention group, with decreased state anxiety mean scores and increased trait anxiety mean scores. 24 However, this can be explained by the intervention group having statistically lower state and trait anxiety mean scores compared to the control group (p ≤ 0.000). 24 Among post-surgical patients, the intervention group received messages and calls sharing an education program on colostomy care from the colostomy therapist. 24

Depression. Seven (35%, 7/20) studies (four randomized controlled trials, three non-randomized controlled trials) reported improved depression outcomes using instruments such as Zung's Self-Rating Depression Scale, 26 the Hospital Anxiety and Depression Scale, 22,23,25 Self-Rating Depression Scale, 17 Self-Assessment Scale for Depression, 18 and the Patient Health Questionnaire-9. 19 Six studies, ranging from 3 months to two and a half years, reported statistically significant lower depression scores in the WeChat intervention groups compared to usual care, telephone, or guidance manual control groups (p ≤ 0.05). 17,18,22,23,25,26 One randomized controlled trial reported non-statistically significant decreases in depression mean scores with a WeChat intervention of 3 days. 19 The studies investigated the impact of the interventions on populations with lung, breast, cervical, thyroid, and advanced cancer. [17][18][19]22,23,25,26

Social support, spiritual outcomes, body image concern. Some outcomes were only reported in a single study using a specific measurement tool, including social support using the Multidimensional Scale of Perceived Social Support, 23 positive spiritual outcomes using the Self-Transcendence Scale, Meaning In Life Questionnaire, and Herth Hope Scale, 26 and body image using the Body Image Concern Inventory. 15 A non-statistically significant increase in social support mean scores was reported among a breast cancer study population after a 6-month WeChat education intervention (p = 0.209). 23 In the quasi-experimental non-randomized study, the intervention group underwent a WeChat life review program and usual care for one and a half months, while the control group received usual care. 26 Spiritual outcomes reached statistical significance, demonstrated by an increased self-transcendence mean score (p = 0.001) and an increased meaning in life mean score (p = 0.001), but the differences in hope mean score were not significant (p = 0.0980). 26 In addition, significantly decreased body image concern scores were reported by breast cancer patients after 7 weeks of a WhatsApp educational intervention (p = 0.002). 15

Summary of findings

Findings of this review suggested that the use of WeChat and WhatsApp in cancer management has potential effects on physical and psychosocial health outcomes among oncological patients.
Physical outcomes explored in the included studies were pain, fatigue, sleep, medication adherence, rehospitalization rate or incidence of side effects, cough, urinary continence, treatment discomfort, and survival outcomes. Psychosocial outcomes explored were self-efficacy, quality of life, anxiety, depression, social support, spiritual outcomes, and body image concerns. This review found that certain outcomes, including pain, medication adherence, self-efficacy, quality of life, and depression, were significantly improved in the WeChat/WhatsApp intervention groups in comparison to control groups or baseline measurements. For the outcomes of sleep and rehospitalization rate, improvements were reported without reaching statistical significance. In addition, the potential effects of WeChat/WhatsApp on anxiety, fatigue, and adverse drug reactions were found to be conflicting among the included studies.

Physical health outcomes

Our findings indicated that the use of WeChat/WhatsApp in cancer management has the potential to improve pain control. The significant impact on the pain outcome has been supported by another research study, which investigated the use of WeChat video calling in a post-total knee arthroplasty population, with a statistically significant pain score difference reported between the WeChat intervention and routine care groups. 35 The similarities of the results may be due to the post-treatment nature of the study populations, with pain being a key indicator after either surgery or cancer treatments. The association between decreased pain and the use of WeChat/WhatsApp has major implications for cancer management and may improve the lives of patients. Our findings also indicated that the use of WeChat/WhatsApp in cancer management has the potential to improve medication adherence. In this review, two studies investigated the impact of WeChat/WhatsApp on medication adherence, and both studies reported a significant difference between intervention and control groups. Medication adherence in cancer management is crucial in achieving optimal quality of life, as cancer medication non-adherence has been shown to lead to lower survival rates and recurrence of disease. [36][37][38] Consistent with our findings, one randomized controlled trial examining the influence of an educational WhatsApp intervention among diabetes and hypertension patients reported a clinically significant 15% increase in medication adherence after the intervention. 39 Given the importance of medication adherence in cancer management, interventions using WeChat/WhatsApp could have notable impacts on disease treatment.

Psychosocial health outcomes

Our findings indicated that the use of WeChat has potential effects on self-efficacy among cancer patients. This finding is consistent with another randomized controlled trial evaluating the use of WeChat in a diabetic study population. 40 The trial reported significant improvements in self-efficacy in the WeChat intervention group compared to routine care. 40 Among cancer patients, previous literature has reported high self-efficacy to be associated with increased healthy behaviors such as regular exercise and communication with healthcare providers, greater determination in achieving desired health outcomes, and higher quality of life. 41 Our findings indicated that the use of WeChat/WhatsApp has potential beneficial effects on quality of life. This finding is supported by a literature review of 98 publications exploring the impacts of social media on breast cancer survivors. 42
Among the included studies, psychosocial well-being was most commonly measured as an outcome of interest. 42 The literature review concluded that online groups and communities may improve the well-being of breast cancer survivors by providing opportunities for social engagement. 42 In addition, in a pre-post test study among persons with dementia and their caregivers, supplementary video conferencing health care through Zoom, WhatsApp, or FaceTime was reported to reverse the decline in quality of life. 43 Our findings indicated that the use of WeChat/WhatsApp might have potential effects on depression outcomes among cancer patients. Depression is one of the most common symptoms affecting cancer patients and is a risk factor for reduced survival. 44 Consistent with our findings, a cross-sectional research study investigated the prevalence of depression among cancer patients and reported a significant difference in depression between users of social networks and non-users. 44 In addition, a systematic review of 42 studies concluded that social media can provide social, emotional, or experiential support to patients with chronic diseases, such as cancer. 45 Treatments combining the use of WeChat/WhatsApp and medicine may improve depressive symptoms among cancer patients, ultimately improving health outcomes. Our findings indicated that the use of WeChat has potential effects on social support; however, we only found one study reporting on this outcome. Consistent with our findings, a single-group pilot study among breast cancer survivors investigated a targeted physical activity intervention, via the MapMyFitness application, and utilized a social cognitive theory-based, Facebook-delivered health education intervention. 46 The ability to engage in social roles or activities was greatly improved after the use of the MapMyFitness application and Facebook intervention. 46 Our findings are also supported by a literature review, which highlighted that social media helps to improve well-being by allowing breast cancer survivors to engage with wider social networks and connect with peers having similar experiences. 42 Social media has been reported to improve the social relations of cancer patients; however, further research on WeChat/WhatsApp is necessary to clarify their effects on social support.

Conflicting outcomes

In this review, conflicting findings were reported for the anxiety, fatigue, and adverse drug reaction outcomes. The majority of the included studies found potential effects of WeChat/WhatsApp on anxiety in cancer patients. However, one randomized controlled trial reported an increased trait anxiety mean score after 3 months of participation in an information-based WeChat-associated intervention. 24 This result can be explained by the timing of score measurement, with the initial measurement taking place before surgery and the comparison taking place 3 months after discharge, when participants could still be feeling post-surgical anxiety. In regard to the fatigue outcome, one randomized controlled trial studying breast cancer patients reported an increased fatigue score in the WeChat intervention group. 27 This trial specifically studied post-operative women with breast cancer who were in early rehabilitation from intensive surgery, which could have increased their levels of fatigue. 27 In comparison, the other studies investigated breast cancer survivors and inpatient cervical cancer participants who had not undergone intensive surgery prior to study participation. 15,17
Lastly, one randomized controlled trial reported a significant increase in adverse drug reaction incidence after a WeChat intervention. 32 The authors noted in the discussion that the pain management team had added new drugs for pain management during the study period, which may have caused the increase in adverse drug reactions. 32 Thus, the result may not be an effect of the WeChat intervention.

Use of WeChat/WhatsApp during the COVID-19 pandemic

Among the 20 included studies, three (15%, 3/20) pilot studies investigated the use of WhatsApp in cancer management during the COVID-19 pandemic and reported high patient satisfaction with this innovative intervention. 31,33,34 Consistent with our findings, other studies demonstrate the success of using WhatsApp or telephone services in maintaining health communication between patients, caregivers, and physicians. [47][48][49] Many oncological patients may be immunosuppressed from previous chemotherapy/radiation treatments and subject to greater risk of infection. 50 By implementing technological solutions, immunosuppressed oncological patients can experience the full extent of cancer management while reducing their exposure to infectious agents. Limitations of utilizing WeChat/WhatsApp in cancer care also exist. WeChat/WhatsApp communication is not a complete substitute for in-person visits, and utilizing this technology may not be feasible for all patients or clinical situations. For example, one included pilot study indicates that technical difficulties may occur and interfere with the delivery of health care. 33 As determined in a clinical study, a significant limitation of WeChat/WhatsApp is the inability to conduct a face-to-face physical examination; for instance, physicians cannot weigh their patients and accurately prescribe dosages for weight-based drugs. 51,52 As corroborated by research in the field, included articles indicate that Internet connections should be secure, transmitted data must be uncorrupted, and confidentiality must be maintained. 26,34,53 As well, patient privacy must be ensured in all situations, and further security measures must be taken when transmitting clinical information online. Patient privacy and data security are the major concerns during the use of WeChat and WhatsApp in healthcare. These concerns have created barriers to the global adoption of WeChat and WhatsApp in healthcare practice. None of the current 20 studies reported the security measures taken. Current literature suggests developing guidelines for the management and use of WeChat and WhatsApp in order to prevent unwanted consequences. 54,55 We also encourage future researchers to complete a full technology assessment to ensure patient privacy in the use of online communications. 53

Implications

Findings of this review have implications for cancer patients, health care providers, health care organizations, communities, and society. For cancer patients, utilizing familiar online communication tools such as WeChat or WhatsApp will provide increased accessibility and facilitate contact with physicians via mobile devices. This is especially important when patients face financial, geographical, or physical limitations to accessing health care. WeChat/WhatsApp are used more frequently in developing countries, as they are more accessible than text messaging services, which may require separate payment. For health care providers, WeChat/WhatsApp provides a convenient and dynamic method for remote follow-up with patients.
The use of mHealth also allows physicians to maintain continuous communication with patients, compared to conventional care. Health care organizations could also benefit from utilizing WeChat/WhatsApp by reducing costs and time spent on administrative tasks. Physicians will be able to connect with patients virtually instead of coordinating a time when both parties can physically be present at the office. This limits time spent on transportation and inherently reduces the need for large office spaces. The introduction of mHealth can facilitate greater equality in access to timely care in rural communities; however, efforts must be made to mobilize stable Internet access in such geographical locations before implementing mHealth programs.

Limitations and future research

There are some limitations in this review. Firstly, this review only included articles written in English. Studies focusing on WeChat or WhatsApp have been written in a variety of languages; however, due to our research team's language capabilities, we were only able to include English-language articles. Secondly, included articles were published in academic peer-reviewed journals, and articles from industrial or commercial publications were excluded. The majority of the included studies in this review were conducted in China; thus, our findings cannot be generalized to all cancer patient populations. Thirdly, there is large variation in intervention design and implementation processes. We noted this limitation and included Table 2 to describe the features of each intervention. In the future, research studies should carefully review existing literature and design interventions through evidence-based approaches. We recommend that future studies investigate the impact that the duration, number, and type of messages or calls have on cancer patients. Lastly, the ages of the cancer patients were not specified in this review, which may impact the findings, since younger patients might be more comfortable navigating online platforms. This review identified several directions for future research. Firstly, there is a need to disentangle conflicting findings on WeChat/WhatsApp and health outcomes within the current literature. For example, future research should examine the effects of WeChat/WhatsApp on anxiety, fatigue, and adverse drug reactions in a large-scale randomized controlled trial. Secondly, access to health services continues to be an issue in rural communities, 56 and further research should be done to uncover the impacts of using WeChat/WhatsApp to connect with rural cancer patients. Finally, the majority of the studies were conducted in one country, China, limiting the generalizability of the findings. Future research should be encouraged in various countries and settings to further investigate the effects of WeChat/WhatsApp.

Conclusion

This review summarizes the potential effects of WeChat/WhatsApp use in cancer management on physical and psychosocial outcomes. Throughout the included studies, the applications were utilized to the full extent of the platforms, with knowledge being transmitted through public accounts, messages, push reminders, and video and audio calls. This review suggested that the use of WeChat/WhatsApp in cancer management might improve various physical and psychosocial health outcomes among oncological patients. Future research should be conducted in various countries among diverse communities, including rural areas, to ascertain the effects of WeChat/WhatsApp in different populations.
Author contributions Conceptualization, PZ and TH; methodology, PZ; formal analysis, PZ and TH; data curation, PZ and TH; writing of original draft preparation, TH and PZ; writing of review and editing, PZ, TH, YL, NT, MD, HZ, and CZ; supervision, PZ; funding acquisition, PZ. All authors have read and agreed to the published version of the manuscript. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the Social Sciences and Humanities Research Council Institutional Grant (Ref # 102090).
A Case of Dialister pneumosintes Bacteremia-Associated Neck and Mediastinal Abscess

Patient: Female, 30-year-old
Final Diagnosis: Dialister pneumosintes bacteremia-associated mediastinal and neck abscess
Symptoms: Diarrhea • fever • vomiting
Medication: —
Clinical Procedure: Incision and drainage
Specialty: General and Internal Medicine
Objective: Rare disease

Background: Dialister pneumosintes is a suspected periodontal pathogen. It can affect different parts of the body either by hematogenous transmission or by regional spread. Here, we report the case of a 30-year-old previously healthy woman diagnosed with a mediastinal and neck abscess caused by this pathogen.

Case Report: A 30-year-old woman presented with a 1-day history of fever, vomiting, and diarrhea. She was on her last dose of a 2-week course of oral antibiotic for a suspected dental abscess. On admission, a parenteral broad-spectrum antibiotic was started for sepsis of unknown source. Because of intermittent spikes of high temperature despite being on an antibiotic, cross-sectional imaging was performed, which revealed a superior mediastinal abscess with extension into the neck. She was referred to the ENT surgeon for incision and drainage of the collection. However, the procedure was complicated by injury to the right internal jugular vein. Her postoperative period was also complicated by the development of pulmonary embolism, followed by deep vein thrombosis of the right upper limb. Polymerase chain reaction testing of her pus detected a 16S rRNA gene sequence suggestive of Gram-negative anaerobic bacilli, and anaerobic blood culture grew Dialister pneumosintes. After a prolonged course of illness and antibiotic treatment, she recovered well and is now back to her normal activities.

Conclusions: Potentially life-threatening complications may develop from periodontal infection by this microorganism. In patients being treated for sepsis of unknown origin who are not responding to antibiotic treatment and have a history of recent periodontal infection, a deep-seated abscess needs to be considered.

Background

Dialister pneumosintes is a non-spore-forming, non-motile, non-fermentative, Gram-negative anaerobic bacillus [1]. It is reported to occur as normal flora in the nasopharynx, oral cavity, intestine, and vagina [2,3]. This bacterium was first detected in 1921 from the nasopharyngeal secretions of patients during the influenza epidemic of 1918-1919, and was initially named Bacterium pneumosintes [4]. We present a case of Dialister pneumosintes-associated mediastinal and neck abscess in a previously healthy young patient who was receiving treatment for a possible dental abscess. To the best of our knowledge, this is the first case report of mediastinal and neck abscess caused by this organism.

Case Report

A 30-year-old woman with no significant past medical history presented to the Emergency Department with a 1-day history of fever (38°C), vomiting, and diarrhea. Two weeks prior to admission, she had visited her general practitioner for toothache, and she was prescribed a 14-day course of oral antibiotic (clarithromycin 500 mg BD) for a suspected dental abscess. On examination, she was tachycardic and febrile, with non-tender, non-erythematous generalized swelling of the right side of her face. She had no lymphadenopathy or organomegaly. Her heart sounds were normal, and the chest was clear on auscultation.
After that, she was re-evaluated and an urgent CT scan of the neck-thorax-abdomen-pelvis was performed, which detected a septated, peripherally enhancing, anterior mediastinal abscess measuring 5.5×3.2×6 cm in transverse, antero-posterior, and craniocaudal dimensions, respectively, with extension into the lower neck up to the level of the thyroid gland (Figure 2A, 2B). An X-ray orthopantomogram showed lucency around the lower premolar tooth, in keeping with the clinical suspicion of abscess formation (Figure 3). Given the CT scan finding, while waiting for the incision and drainage, parenteral metronidazole (500 mg TDS) was added, as advised by the microbiologist. Therefore, urgent referrals were sent to the Maxillofacial, ENT, and Cardio-thoracic Departments of the affiliated university hospital. After inter-departmental discussion, the patient was transferred under the ENT specialty. An emergency incision and drainage of the mediastinal and neck abscess was performed under general anesthesia. The procedure was complicated by massive hemorrhage. Suturing of the bleeding point was attempted without success owing to its unidentified source. Bleeding was controlled conservatively with a pressure dressing. The major hemorrhage protocol was activated, and she was transfused with 7 units of red blood cells and 7 units of fresh frozen plasma. She was transferred to the Intensive Therapy Unit (ITU), where she remained intubated for 3 postoperative days. Her pus sample was sent for culture and sensitivity. Auramine staining of the pus was negative for acid-fast bacilli. Both aerobic and anaerobic cultures of the pus, including the culture for Mycobacterium tuberculosis, were sterile. A Gram film of the pus sample showed profuse leucocytes. Pus polymerase chain reaction (PCR) testing for Mycobacterium tuberculosis and Mycobacterium avium complex was negative. Her HIV test result was also negative. However, PCR testing of the pus detected a 16S rRNA gene sequence suggestive of Gram-negative anaerobic bacilli. Gradually, she was stepped down from the ITU. A post-procedure CT angiogram of the neck and thoracic area detected thrombus inside and surrounding the right internal jugular vein, suggestive of the intra-operative bleeding site, and acute pulmonary embolism in the right-side lobar branch of the pulmonary artery. It also revealed a minimal residual collection of pus in the right supraclavicular space extending into the right superior mediastinum. Hence, she was treated with subcutaneous low-molecular-weight heparin (LMWH), which was later changed to an oral anticoagulant (edoxaban 30 mg OD). Although her inflammatory markers were decreasing, she remained tachycardic with intermittent episodes of high temperature. After discussion with the microbiologist, her antibiotics were changed to meropenem (1 g IV TDS), vancomycin (1 g IV BD), and oral fluconazole (50 mg OD). On her seventh postoperative day, she developed swelling of her right upper limb due to deep vein thrombosis of her upper limb veins, revealed by compression venography. Eventually, one of her initial anaerobic blood culture samples, which had been sent on the day of her admission, grew Dialister pneumosintes. Her antibiotics were changed to amoxicillin plus clavulanic acid (1.2 g TDS) and metronidazole (500 mg TDS). Her inflammatory markers gradually came down. A repeat CT scan of her neck and thorax showed a reduction in the amount of collection in the right supraclavicular fossa and superior mediastinum.
She remained clinically well and afebrile. At the time of hospital discharge, she was prescribed a further 21-day course of oral amoxicillin with clavulanic acid (625 mg TDS) and metronidazole (400 mg TDS). A follow-up ultrasound scan, performed 2 weeks after her discharge, showed the absence of any residual collection in the neck. Four weeks later, she was seen in the Outpatient Department, where she reported feeling significantly better. At present, she is waiting for Venous Thromboembolism (VTE) Clinic follow-up.

Discussion

Initially, we were unable to identify the source of infection due to the non-specific presentation of symptoms and signs. Computed tomography revealed a mediastinal and neck abscess, and an orthopantomogram revealed a dental abscess. However, there was no evidence of descending transmission of infection from the dental source. In this case, there was hematogenous spread of infection from the dental abscess, and Dialister pneumosintes bacteremia of periodontal origin has been documented in the literature [5,6]. A study of 135 systemically healthy dentistry patients revealed that D. pneumosintes was the pathogenic organism in 83% of cases of severe periodontitis and 19% of cases of slight periodontitis [7]. We were unable to confirm that D. pneumosintes was the pathogenic organism causing her tooth infection, as we did not culture any specimen or aspirate from her tooth. Based on the characteristics of the pathogen and the nature of the infection, and in the absence of any other source, it is likely that the neck and mediastinal abscess was due to bacteremia from a periodontal infection. Currently, there are 4 known species in the genus Dialister, consisting of 135 strains; however, D. pneumosintes and D. micraerophilus are the commonly encountered species [8]. D. pneumosintes is difficult to grow in conventional culture media, and a 16S rRNA-based PCR assay has been developed for the detection of this pathogen [9]. This microorganism has been isolated from periodontitis [7], gingivitis, root canal infection [10], sub-gingival plaque [9], human bite wound infection [11], respiratory tract and head and neck infections [3], and vaginal infection [12]. Severe infective complications have been reported, including brain abscess [13] and liver abscess [5] with a suspected dental source of origin. Our patient developed pulmonary embolism and internal jugular vein (IJV) thrombosis following a major bleed during the intervention owing to injury to the IJV. Her pre-operative CT scan was not suggestive of any thrombosis. Two cases of D. pneumosintes bacteremia-associated thrombosis have been reported: the first was suppurative thrombosis of the ovarian vein in a young woman [12], and the second was cavernous sinus thrombosis in an immunocompromised elderly woman in association with other anaerobes [14]. The mortality rate of patients with mediastinitis is up to 40% despite aggressive treatment with broad-spectrum antibiotics [15].

Conclusions

Both immunocompetent and immunocompromised individuals can be affected by Dialister pneumosintes. Patients who present with toothache and have been prescribed antibiotics need appropriate follow-up to monitor the response to therapy and any development of complications. Likewise, patients with a septic presentation, a history of dental infection, and no response to antibiotic treatment need urgent imaging to assess for the presence of a deep-seated abscess.
Due to the high mortality rate, an emergency surgical approach is necessary once the diagnosis of mediastinal abscess is confirmed.
Acute intentional and accidental poisoning with medications in a southern Brazilian city

The expansion of the pharmaceutical market in the 20th century led to important changes in the consumption of medications worldwide. The objective of the current study was to analyze acute intentional and accidental poisoning with medications according to factors related to the individual, the poisoning, and the drug involved. This was a cross-sectional study that collected secondary data on all cases of acute poisoning with medications reported from 2003 to 2004 by the Poison Control Center at the Regional University Hospital in Maringá, Paraná State, Brazil. We studied a total of 546 cases of acute poisoning with medications. Females predominated among intentional cases (79.8%), and the 0-9-year age bracket was the most common among accidental cases (51.9%). The most frequently involved drugs were those acting on the central nervous system (57.2%), predominantly those requiring controlled prescriptions, especially among the intentional cases (66.2%). The results demonstrate the characteristics of acute poisoning from medications in Maringá, confirming the need for preventive measures that contribute to the rational use of medications.

Poisoning; Drugs; Drug Utilization

Introduction

Technological changes in the 20th century led to the extensive development of industries as a whole, fostering the synthesis of new compounds for various purposes 1. In this context, the pharmaceutical industry developed rapidly, producing a vast array of new products and changes in the use of medications worldwide. According to various authors, pharmaceutical products predominate in accidents resulting from exposure to toxic agents 2,3,4,5. In 2002, 26.9% of poisonings recorded by the Brazilian national network of poison control centers involved pharmaceutical products 6. In addition, Bochner 7, analyzing records from the National Toxicological and Pharmacological Data System (SINITOX), found that among poisonings reported in adolescence and pre-adolescence in Brazil from 1999 to 2001, medications were among the main toxic agents. The persistently high poisoning rates involving pharmaceutical products in Brazil reflect various forms of resistance to rational drug use. The existence of a huge variety of pharmaceuticals with dubious safety and efficacy can favor the indiscriminate sale and resulting over-utilization of these products, leading to the occurrence of harmful effects, often poisonings 8. In addition, the lack of initiatives to train health professionals to specifically provide adequate orientation on proper drug use leads to misinformation that can also foster incorrect use 8,9. In addition, the lack of information on drugs in the Hospital Data System of the Unified National Health System (SIH/SUS) and similar data systems favors the persistence of such misinformation 10. Meanwhile, the unnecessary use of prescription and non-prescription drugs is also a significant factor that increases the risk of poisonings 11. The risks are thus obviously related to the level of information on medications among patients, prescribers, and dispensers 12,13,14. Many health professionals are also unaware of the risk of poisoning from medication during their approach to users of medicines, further aggravating this risk.
It is therefore necessary to study possible risk factors related to the occurrence of poisoning from medication in order to plan the appropriate preventive measures. A study that characterizes intentional and accidental poisoning from medications can contribute to the design and implementation of health programs aimed at preventing these events and raising health professionals' awareness on their relevance for rational pharmaceutical use. Therefore, the aim of the current study was to analyze acute intentional and accidental poisoning from medications, according to factors related to the patient, the poisoning, and the drug involved.
Methodology

This was a cross-sectional epidemiological study of treatment provided by the Poison Control Center (CCI) in Maringá, Paraná State, Brazil, where secondary data were obtained for the period from January 2003 to December 2004. The study population consisted of all individuals with a record of acute poisoning from medications during the study period. The sample excluded cases reported as adverse reactions, drug interactions, and those resulting from chronic use of medications, since they did not fit the objectives of the study, which analyzed acute poisonings. The sample also excluded non-residents of Maringá. Intentional cases were those reported as suicide attempts, abortion, homicide, and pharmaceutical drug abuse, while accidental cases were those reported as individual or collective accidents, incorrect use, incorrect medical prescription, and dosing errors. The data were obtained from poisoning forms from the CCI. The study was approved by the Institutional Review Board of the State University in Londrina, Paraná, as specified in case file 305/04. The study focused on variables related to the individual, the poisoning, and the drug involved. Patient-related variables were gender and age at the time of poisoning, and those related to the poisoning were intention (intentional versus accidental), exposure route, and time of day and day of the week of the exposure. The variables related to the drug(s) involved were: amount, pharmacological class (according to the Anatomical Therapeutic Chemical Classification/Defined Daily Dose - ATC/DDD) 15, pharmaceutical form, and required control for dispensing. Data were keyed in using Epi Info for Windows, version 3.2.2 (Centers for Disease Control and Prevention, Atlanta, USA). Simple frequencies and percentages were calculated to summarize the results.

Results

From January 2003 to December 2004, the Poison Control Center (CCI) in Maringá recorded 8,291 cases of poisoning. Of these, 1,112 (13.4%) involved medications (733 in residents of Maringá). We excluded 131 cases of adverse reactions and drug interactions and 56 cases of poisoning from chronic-use medication, and the 546 remaining cases were studied. Of these, 209 (38.3%) were reported as intentional and 337 (61.7%) as accidental. Suicide attempts with medications were the most common circumstance (60.1%). The second most common cause (25.3%) involved individual accidents. Dosing errors, incorrect use, pharmaceutical drug abuse, incorrect medical prescription, attempted abortion, and homicide totaled 14.6%. As shown in Table 1, females comprised nearly 80% of intentional cases and slightly more than half of accidental cases. As for age, more than half of the accidental cases occurred in the 0-9-year group. There were also some intentional cases in early adolescence (10-14 years), while the largest number of intentional cases occurred in the 20-39-year bracket (Table 2).
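As a small, illustrative check (written here in Python rather than in the Epi Info software the authors used), the simple frequencies and percentages described in the Methodology can be reproduced from the counts reported above; the variable names are ours, introduced only for illustration.

# Counts reported in the Results section
total_poisonings = 8291
medication_cases = 1112
studied_cases = 546          # acute medication poisonings in Maringá residents
intentional = 209
accidental = 337

def pct(part, whole):
    # Simple percentage, rounded to one decimal place as in the paper
    return round(100 * part / whole, 1)

print("Medication-related:", pct(medication_cases, total_poisonings), "%")  # 13.4
print("Intentional:", pct(intentional, studied_cases), "%")                 # 38.3
print("Accidental:", pct(accidental, studied_cases), "%")                   # 61.7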
The vast majority of poisonings occurred by ingestion (97.1%). While all the intentional cases resulted from ingestion of medicines, the accidental cases included a small percentage (7.7%) indicating exposure by other routes (parenteral, nasal, transdermal, and ophthalmic). As for time of exposure, 24.5% of the cases occurred from 6:00 PM to 11:59 PM. The largest share of both intentional and accidental cases (24.6% and 24.4%, respectively) occurred during this time period. Among the intentional cases, the most common days were Sunday (19.2%), Monday (15.6%), and Saturday (15.5%). Thursday (19.7%) was the most common day for accidental poisoning with medications. Drugs acting on the central nervous system were involved in the majority of intentional cases and slightly more than a third of the accidental cases (Table 3). Table 4 shows that the vast majority of accidental poisonings involved a single drug, as compared to slightly more than half of the intentional cases. As for pharmaceutical forms, Table 4 also shows that the proportion of solid forms of drugs was much higher among intentional cases, while liquid forms accounted for two-fifths of accidental poisonings. While the majority of accidental cases were caused by non-prescription medications, the opposite was true for intentional cases, two-thirds of which were related to ingestion of controlled drugs.

Discussion

The high number of acute intentional poisonings with medications among females was similar to the results in Marcondes Filho et al. 16, Juárez Aragón et al. 17, and González Valiente et al. 18, who observed that the majority of intentional cases occurred in females (although they were studying poisonings with chemical products in general). As for age bracket, Bortoletto & Bochner 19, analyzing SINITOX records on poisoning with medications in Brazil from 1993 to 1996, also found high intentional poisoning proportional rates in the 20-29-year age bracket (18.6%). According to other authors, these percentages can be explained by difficulties in accessing the labor market and personal and family problems, favoring the appearance of destructive forms of dealing with reality 20. Bortoletto & Bochner 19 found that the 0-4-year bracket had the highest proportion of poisonings with medications in Brazil from 1993 to 1996, with 33%. More recent statistics for Brazil show 31.9% 6, while a study using secondary data from the States of São Paulo and Rio Grande do Sul for 1997 and 1998 found 35.67% 21. The differences between the percentage of poisonings in this age bracket in the current study and the data from the literature dem-

The high percentage of accidental poisonings in childhood may be related to the natural tendency of children to explore their environment and place whatever they find in their mouths 22. In the United States and Canada, a measure that reduced the rates of accidental poisoning with medications was the mandatory use of special child-proof caps on medications and household cleaning products. Brazil has a Federal Bill of Law (4.841/1994), first submitted in 1994, to implement such child-proof medicine bottles, but thus far the National Congress has failed to pass it 23. The production of special child-proof bottles, the implementation of educational programs on poisoning hazards in schools and daycare centers, and parent and teacher awareness-raising on proper care and handling of medications could reduce such drug-related accidents in childhood.
The percentage of adolescents among the intentional cases could be explained by hypotheses raised in Londrina, Paraná State, Brazil, by Marcondes Filho et al. 16, who observed that the most important factors related to suicide attempts by poisoning in the 12-24-year group were interpersonal losses, depression, and drug abuse. Interestingly, according to Bochner 7, from 1999 to 2001 in Brazil, medications accounted for 33% of reported poisonings among adolescents aged 15 to 19. The same author found that in the 10-14-year group this percentage was lower (25.7%), while other, non-pharmaceutical products were more common. Some authors feel that the problem of suicide attempts in adolescence is due to the biological, psychological, and social changes that occur during this period of life 7,24. The work of physicians specializing in adolescence has been emphasized in health services and recommended by Teixeira & Luis 24. Another factor cited by authors is the need for health professionals who are prepared to recognize risk behaviors among adolescents and youth. Educational programs and early specialized care for family members and friends are also recommended, although families often show resistance, since the potentially suicide-prone individual may have requested help in some way and gone unnoticed 24. As for type of event, although the available data for 2002 in the SINITOX databank show methodological differences as compared to the current study, since they also include poisonings from chronic-use medications, the most common circumstances were also suicide attempts and individual accidents (39.5% and 37.8%, respectively) in Brazil 6. Although the findings are consistent with ours, importantly, the suicide attempt rate in Maringá was nearly double the Brazilian national rate. In a previous study in Maringá on poisonings in general in the 0-14-year-old population, ingestion occurred in 79.3% of the cases 25, a lower figure than in the current study. Meanwhile, Oduardo Lorenzo et al. 26 found that among acute poisonings in the 0-15-year bracket, 93.4% occurred by ingestion, similar to the findings by Gárate et al. 27 (91.5%). In Porto Alegre, Rio Grande do Sul State, Brazil, 88.4% of cases in children 0-5 years of age were by ingestion 18. Gandolfi & Andrade 28 showed similar findings in the State of São Paulo in 1998. The high rates of accidental cases near lunch time (10:00 AM to 1:59 PM) and dinner time (6:00 PM to 11:59 PM) could be explained by the fact that colorful medicines are often stored in the kitchen, a situation prone to accidental poisoning in children. The high rate of liquid pharmaceutical preparations among accidental cases could be related to the study population's characteristics, since more than 70% occurred in children less than ten years old (Table 2). Techniques that facilitate the use of medications by children resistant to swallowing drugs are important for various medical treatments in this age bracket. However, the manufacturing of easy-to-open, colored packages that contain medicines with attractive colors and smells could unwittingly facilitate childhood poisonings. The high percentage of intentional cases on Sundays could result from family and conjugal strife related to weekends, like alcohol abuse, more time spent with the family, anxiety, loneliness, and other factors. In Londrina, Marcondes Filho et al. 16 observed that suicide attempts by poisoning were positively associated with alcohol and drug abuse, more frequent on weekends.
Meanwhile, in studies on family problems related to suicide attempts, some authors concluded that depression may be related to family conflicts like difficulty with adaptation, disharmony, disorganization, demoralization, and family breakdown 29,30. Such types of family conflicts often become more evident when the family spends more time together, that is, mainly on Sundays. Such factors could explain the higher percentage of suicide attempts with medications on this day of the week.

Consistent with the current study, the research by Marcondes Filho et al. 16 points to the predominance of ingestion of drugs acting on the central nervous system in intentional cases of poisoning. Likewise, analyzing acute poisoning with medications in the State of São Paulo in 1998, Gandolfi & Andrade 28 also found that drugs for treatment of psychiatric and neurological disorders were the most common pharmacological groups among poisonings. A Cuban study showed similar results, based on data from 1995 and 1996 31.

The percentage of central-acting medications among poisonings is worrisome, since this type of drug nearly always requires a prescription for dispensing. The prescription of indiscriminate doses, dispensing without adequate information, and even sales without a prescription may be related to improper use. Some authors point to the fact that depressive patients are often prone to suicide attempts 16. Thus, especially in these cases, prescriptions should be rationalized, with indication of the adequate amounts, in order to ensure the correct use of such drugs.

Indiscriminate self-medication, with high rates reported in Brazilian studies, is believed to have favored the high rates of accidental poisonings 11,32,33. The high rate of poisonings with multiple drugs may be related to the ease in obtaining medications on the Brazilian market 13. This frequency may be associated with lack of information and awareness and with abusive advertising by the pharmaceutical industry, favoring the accumulation of small home pharmacies 27,29. In studies evaluating knowledge on generic drugs in Pelotas (Rio Grande do Sul) 14 and Recife (Pernambuco) 13, the authors concluded that schooling and purchasing power bear a positive association with level of knowledge on these products. They also found that older age was associated with a lower level of information on medications, and that lack of information can lead to non-utilization of generic drugs or inadequate interchange between brand-name and generic products 13,14. Based on this information, one can assume that lack of orientation can also lead to incorrect use of medications, further increasing the odds of poisonings. Some authors have also highlighted that the elderly population is highly susceptible to such misinformation 34.

In relation to the drugs used in self-medication, specifically over-the-counter drugs, their purchase in pharmacies without a prescription would probably not be so problematic if the pharmacist always dispensed them 14. However, the simple presence of a trained professional in dispensing sites would be useful, as long as the proper orientation on risk of poisonings was communicated to the purchaser.
The results of the present study allow concluding that the great majority of intentional cases of poisoning occurred in females, while 0-9 years was the most common age bracket for accidental cases, and intentional cases prevailed in the other age brackets. Sunday was the most common day of the week for intentional cases and Thursday for accidental cases. For all cases, the most frequent clinical outcome was cure without signs or symptoms.

In relation to the drugs involved, the percentage of poisonings with multiple drugs was higher in intentional cases, and the most frequent drugs were those acting on the central nervous system. Liquid pharmaceutical preparations were more frequent among accidental poisonings, and controlled-use prescription drugs were more common among intentional cases. This study's findings show that poisonings with medications pose a large-scale problem during both childhood and adulthood. The conclusion is that the implementation of educational programs on the prevention of household accidents, focusing on mothers, the distribution of information leaflets in drugstores and primary health clinics, and correct orientation on the proper storage and use of drugs are also important tools for changing this reality.

For adults, an important measure to prevent suicide attempts with medications is the effective monitoring and inspection of sales of controlled drugs by pharmacies. The prescription of exaggerated amounts of such medications to depressive patients is believed to favor their use in suicide attempts. Standardization of the prescribed amounts is recommended, along with the availability of psychological follow-up for these individuals by the health system.

Given the reality as observed above, it is believed that proper training of health professionals to provide orientation on the correct use and storage of medications can greatly reduce the rates of acute poisoning with medications in the general population. Effective multidisciplinary work and exchange of information by health teams is essential for the prevention, detection, treatment, reporting, and follow-up of poisonings.

Table 1 Distribution of acute poisonings with medications, according to gender and intention. Maringá, Paraná State, Brazil, 2003 and 2004.

Table 3 Distribution of acute poisonings with medications, according to drug class and intention. Maringá, Paraná State, Brazil, 2003 and 2004.

Table 4 Distribution of acute poisonings with medications according to number of drugs involved, pharmaceutical form, control required for dispensing of drug(s), and intention. Maringá, Paraná State, Brazil, 2003 and 2004.
2018-04-03T01:12:36.032Z
2009-04-01T00:00:00.000
{ "year": 2009, "sha1": "20d1879d05e85f1463115ff642a38ae5c37a2ae0", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/csp/a/n4GRjJdQH6T9JDfzNGBVkYf/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "55f7941d69b892a6f55145d8d0b12edb624deba9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257272723
pes2o/s2orc
v3-fos-license
The Metabolites and Mechanism Analysis of Genistin against Hyperlipidemia via the UHPLC-Q-Exactive Orbitrap Mass Spectrometer and Metabolomics Genistin, an isoflavone, has been reported to have multiple activities. However, its improvement of hyperlipidemia is still unclear, and the same is true with regard to its mechanism. In this study, a high-fat diet (HFD) was used to induce a hyperlipidemic rat model. The metabolites of genistin in normal and hyperlipidemic rats were first identified to cause metabolic differences with Ultra-High-Performance Liquid Chromatography Quadrupole Exactive Orbitrap Mass Spectrometry (UHPLC-Q-Exactive Orbitrap MS). The relevant factors were determined via ELISA, and the pathological changes of liver tissue were examined via H&E staining and Oil red O staining, which evaluated the functions of genistin. The related mechanism was elucidated through metabolomics and Spearman correlation analysis. The results showed that 13 metabolites of genistin were identified in plasma from normal and hyperlipidemic rats. Of those metabolites, seven were found in normal rat, and three existed in two models, with those metabolites being involved in the reactions of decarbonylation, arabinosylation, hydroxylation, and methylation. Three metabolites, including the product of dehydroxymethylation, decarbonylation, and carbonyl hydrogenation, were identified in hyperlipidemic rats for the first time. Accordingly, the pharmacodynamic results first revealed that genistin could significantly reduce the level of lipid factors (p < 0.05), inhibited lipid accumulation in the liver, and reversed the liver function abnormalities caused by lipid peroxidation. For metabolomics results, HFD could significantly alter the levels of 15 endogenous metabolites, and genistin could reverse them. Creatine might be a beneficial biomarker for the activity of genistin against hyperlipidemia, as revealed via multivariate correlation analysis. These results, which have not been reported in the previous literature, may provide the foundation for genistin as a new lipid-lowering agent. Introduction Genistin (4 ,5,7-Trihydroxyiso-flavone 7-glucoside) is one of the isoflavones found in soybeans, membranous milkvetch root, and botanical herbs from East Asia, Southeast Asia, and some Pacific islands [1]. In recent years, genistin has been repeatedly reported to have anti-oxidant, anti-inflammatory, anti-bacterial, and anti-viral activities, as well as inhibiting blood lipids [2][3][4]. Studies have shown that flavonoids also have the function of reducing blood glucose, regulating glucose metabolism, blood lipids, liver enzyme activity, and blood lipids, but their function in reducing blood lipids does not seem to be so significant [5]. Glycosides are difficult to absorb into the blood circulation due to their complicated structures [6]. Some studies have shown that the concentration of the aglycone (<0.4 µm) in plasma is lower than the IC 50 values (10-50 µM) reported for its anti-cancer effect in vitro, even after ingestion of large amounts of genistin-containing soy The Establishment of Analytical Strategy In this study, we established a comprehensive and effective strategy to discover and identify genistin metabolites by using UHPLC-Q-Exactive Orbitrap MS. Firstly, a high-quality full scan was performed with a resolution of 70,000 FWHM. Secondly, highresolution extracted-ion chromatography was applied to withdraw the candidate data from positive and negative ion modes. 
Then, the candidate ions were systematically mined based on the common biological reactions and the reported metabolites in the literature. Those screened ions which we considered useful were added into the parent ion list (PIL) to obtain more accurate MS2 information for structure identification. Finally, the exact structures of these metabolites were resolved based on the exact molecular weight, fragmentation mode, DPIs, and information in the literature. Furthermore, the MMDF metabolic templates were of significance in identifying those metabolites present in low levels. In this study, four templates were set in parallel to encircle the metabolites: (1) genistin (m/z 431) and its conjugation templates (m/z 269 for deglycosylation, m/z 461 for hydroxylation and methylation, m/z 429 for glucuronidation); (2) genistein (m/z 271) and its conjugation templates (m/z 243 for decarbonylation, m/z 253 for dehydration, m/z 401 for arabino glycosylation); (3) daidzin (m/z 415) and its conjugation templates (m/z 433 for hydration, m/z 445 for hydroxylation and methylation); (4) daidzein (m/z 253) and its conjugation templates (m/z 223 for hydroxymethyl loss, m/z 257 for carbonyl hydrogenation reaction, m/z 225 for decarbonylation). In addition, some metabolites were also set as new templates when these metabolites were found during the subsequent identification or when the current templates did not cover the metabolic profiles of genistin. The mass spectrum information for four prototype drugs (genistin, genistein, daidzin and daidzein) were collected and resolved via the established analysis strategy. The metabolic profile of genistin is shown in Figure 1, and other compounds are shown in the Supplementary Figures S1-S3.

The Identification Results of Genistin Metabolites in Normal and Hyperlipidemic Rats

The metabolites were screened and identified in plasma via UHPLC-Q-Exactive Orbitrap MS. Thirteen metabolites were detected in both positive and negative ion modes (Table 1).
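To make the template idea above concrete, the short Python sketch below builds candidate m/z values from a parent ion and a set of common biotransformation mass shifts and matches them against observed peaks within a ppm tolerance. It is only an illustration of the screening logic: the shift values, the 5 ppm tolerance, the example m/z numbers, and the function names are our own assumptions and do not reproduce the exact settings of the instrument software used in this study.

```python
# Minimal sketch of template-based metabolite screening (illustrative only).

# Common biotransformation mass shifts (monoisotopic, Da).
SHIFTS = {
    "deglycosylation (-C6H10O5)": -162.0528,
    "decarbonylation (-CO)":       -27.9949,
    "dehydration (-H2O)":          -18.0106,
    "hydrogenation (+H2)":           2.0157,
    "methylation (+CH2)":           14.0157,
    "hydroxylation (+O)":           15.9949,
    "hydroxylation+methylation":    30.0106,
    "glucuronidation (+C6H8O6)":   176.0321,
}

def candidate_mz(parent_mz, shifts=SHIFTS):
    """Build a template of candidate m/z values from a parent ion m/z."""
    return {name: parent_mz + delta for name, delta in shifts.items()}

def match_peaks(observed_mz, parent_mz, tol_ppm=5.0):
    """Flag observed peaks that fall within tol_ppm of any template candidate."""
    hits = []
    for name, mz in candidate_mz(parent_mz).items():
        for obs in observed_mz:
            if abs(obs - mz) / mz * 1e6 <= tol_ppm:
                hits.append((name, round(mz, 4), obs))
    return hits

if __name__ == "__main__":
    # Screen a few hypothetical [M-H]- peaks against a genistin-like parent ion.
    peaks = [269.0452, 253.0506, 445.0776]
    for hit in match_peaks(peaks, 431.0983):
        print(hit)
```

In this toy example, only the deglycosylation candidate matches the observed peak list; in practice the matched ions would then be added to the parent ion list for a targeted MS2 acquisition, as described above.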
Among them, five metabolites were found in a positive ion mode, and eight metabolites were identified in a negative ion mode. The products of deglycosylation, decarbonylation, hydrogenation, and carbonyl hydrogenation were identified in the positive ion mode. The products of arabinosylation, glucuronidation, hydration, hydroxylation and methylation, decarbonylation, and dehydroxymethylation were identified in the negative ion mode.

Proposed Metabolic Pathways of Genistin

Thirteen metabolites (parent drug included) were found in normal and hyperlipidemic rats after oral administration of genistin. The proposed metabolic pathways of genistin are illustrated in Figure 2. Genistin (M2) served as a metabolic center to gradually produce secondary metabolites. In the metabolic pathways common to normal and hyperlipidemic rats, genistin was metabolized to M6, M1, and M9 mainly by decarbonylation, arabinosylation, and hydroxylation and methylation, respectively. The reactions leading to these three metabolites should be the conventional pathway of genistin metabolism in vivo. In normal rats, genistin was metabolized to daidzein (M0), and M0 was further metabolized to M3, M5, M4, M7, and M8 via glucuronidation, hydroxylation and methylation, hydration, decarbonylation, and hydrogenation, respectively. These metabolic reactions might be stress patterns of genistin being cleared in a normal organism. In hyperlipidemic rats, genistin was metabolized mainly to M10, M12, and M11 by dehydroxymethylation, decarbonylation, and carbonyl hydrogenation. It has been suggested that genistin may be metabolized into these three metabolites, which are involved in the pathogenesis of hyperlipidemia.

Genistin Reduced Lipid Factor Levels and Hepatic Lipid Accumulation in Rats with Hyperlipidemia Induced by HFD

During the experiment, the body weights of rats in all groups were measured every week. As shown in Figure 3, HFD rapidly increased the body weight of rats; however, genistin significantly slowed down the weight gain, and its effect was consistent with that of simvastatin. At the end of the experiment (18 weeks), we found that the levels of plasma TC, TG, and LDL-C in Mod rats were significantly increased (p < 0.01), whereas HDL-C was decreased compared with that in Con rats (p < 0.01), which indicated that HFD could indeed cause hyperlipidemia in rats. After treatment, simvastatin and genistin at different doses significantly reduced the levels of plasma TC, TG, and LDL-C (p < 0.01) and elevated the level of HDL-C (p < 0.01). This beneficial function was also observed in the liver color of rats in all groups. All results indicated that genistin has the remarkable function of regulating blood lipids.
Figure 2. The proposed metabolic patterns of genistin in normal and hyperlipidemic rats. The parent nucleus of genistin was labeled with red; the common metabolites in hyperlipidemic and normal administration rats were labeled with green; metabolites in normal administration rats were labeled with blue; and metabolites in hyperlipidemic administration rats were labeled with pink.

For Oil red O staining, more lipids accumulated in the liver after being raised with HFD. As we know, the excessive lipids could cause a lipid peroxidation reaction with the participation of reactive oxygen species, and lipid peroxidation was associated with cell permeability, DNA damage, and protein synthesis disorders. In Mod rats, HFD significantly increased the content of MDA (p < 0.05) while decreasing the level of SOD compared with that in Con rats (p < 0.05). The result indicated that excessive HFD could accelerate the process of lipid peroxidation in vivo. Moreover, lipid peroxidation emerged as having a negative relationship with liver function. For H&E staining, the livers of rats in Mod indeed showed increased symptoms of cavitation and fibrosis, which also corroborated the above statement. Moreover, these elevated levels of ALT in Mod rats also corroborated the abnormal liver function compared with Con (p < 0.05). Surprisingly, genistin obviously reversed the above situation; this means that genistin could reduce lipid accumulation in the liver. Concurrently, genistin also significantly reduced the level of MDA (p < 0.05) and increased the content of SOD (p < 0.05). It was shown that genistin has an inhibitory effect on lipid peroxidation. Genistin also had a beneficial effect on liver function by reducing fibrosis and vacuolation of liver cells. This result was also reflected in the decreased ALT due to genistin. In brief, genistin could reduce HFD-induced lipid factor levels and hepatic lipid accumulation in rats.
Figure 3. n = 6. * p < 0.05, ** p < 0.01, and *** p < 0.001, Con vs. Mod; # p < 0.05, ## p < 0.01, and ### p < 0.001, Sim vs. Mod, Hig vs. Mod, Low vs. Mod.

The Mechanism Analysis of Genistin against Hyperlipidemia via Plasma Metabolomics

To determine the underlying mechanism of genistin in the treatment of hyperlipidemia, multivariate PCA and OPLS-DA were applied with SIMCA-P 14.0 Demo software (Figure 4A,B,D,E). PCA is an unsupervised cluster analysis model without preprocessed datasets. It can be used to evaluate differences in the metabolic profiles of Con, Mod, and Gen. Above all, the QC samples were gathered together in the PCA score plots of positive and negative ion modes, illustrating the stability of the analysis system during data collection. Furthermore, the separation trend between Con and Mod was highly obvious. This consequence reflected the destructive effect of HFD on metabolic profiling in rats. As expected, Mod was also significantly different from Gen, whose metabolic profiling was more inclined to that of Con. Next, PLS-DA was adopted to verify the rationality of the data model. This supervisory analysis model revealed the differences between Mod, Con, and Gen. The simulation coefficients of R2Y (0.989, 0.984) and Q2 (0.869, 0.875) in positive and negative ion modes revealed the reliability of the model, and the 200 permutation tests also confirmed the above results. OPLS-DA was utilized to identify significantly changed metabolites after genistin treatment. On the score plots of OPLS-DA in positive and negative ion modes, there were obvious separation trends between Con and Mod and between Mod and Gen.
Subsequently, the S-plot with the threshold of VIP > 1.0 and p < 0.05 could be used to screen and identify differential metabolites in the OPLS-DA model (Figure 4C,F). Finally, a total of 15 endogenous metabolites were significantly altered between the Con and Mod (Figure 4G, Table 2). Among them, six metabolites were obviously up-regulated (p < 0.05), and nine metabolites were significantly decreased (p < 0.05) with HFD intervention. Nevertheless, genistin could still obviously down-regulate six metabolites (p < 0.05), including L-Tryptophan, L-Proline, Octadecanoic acid, 4-Amino-4-cyanobutanoic acid, sn-Glycero-3-phosphoethanolamine, and 5-Aminolevulinate. Genistin also up-regulated 9 of 15 metabolites, involving L-Leucine, Creatine, L-Carnitine, O-Ureido-L-serine, anserine, N-Formimino-L-aspartate, L-Ornithine, 2,3-Dihydroxycarbamazepine, and 5-Hydroxyisourate. Approximate changes in multiples of metabolites are shown in Additional Table 1. These results demonstrated that genistin indeed reverses metabolic disorders caused by HFD. The cluster analysis with heat map was used to verify the above hypothesis (Figure 4H). We found that the metabolic profiling of genistin in the treatment of hyperlipidemia was remarkably similar to that of the Con, whereas the contrary was true for that of the Mod. The results showed that the levels of metabolites in the plasma were significantly different between the Con and Mod rats, and that the levels in the Gen tended to recover to those of the Con, suggesting that genistin could correct the abnormal levels of plasma metabolites in hyperlipidemic rats. The structures, molecular weights, and codes of differential metabolites were assigned according to the human metabolome database and the KEGG compound database.

The Enrichment Analysis of Metabolic Pathways

Fifteen differential metabolites were imported to the MetaboAnalyst 5.0 dataset, and their results revealed the metabolic pathways of genistin against hyperlipidemia (Figure 5A). The results showed that these metabolites in the plasma were responsible mainly for arginine and proline metabolism and arginine biosynthesis, among other pathways. Therefore, these metabolic pathways should be classified as target pathways, which were associated with the beneficial activity of genistin.
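The VIP > 1.0 and p < 0.05 screening rule described above can also be emulated outside SIMCA-P. The sketch below fits a two-class PLS model with scikit-learn, computes variable importance in projection (VIP) scores, and combines them with unpaired t-test p-values. The function names, the number of PLS components, and the use of scikit-learn's PLSRegression in place of SIMCA-P's OPLS-DA are assumptions made for illustration, not the workflow actually used in this study.

```python
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """VIP scores for a fitted PLSRegression model (one score per feature)."""
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = W.shape[0]
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)      # y-variance explained per component
    w2 = (W / np.linalg.norm(W, axis=0, keepdims=True)) ** 2  # normalized squared weights
    return np.sqrt(p * (w2 @ ss) / ss.sum())

def screen_differential(X_con, X_mod, names, vip_cut=1.0, p_cut=0.05):
    """Return (name, VIP, p) for features with VIP > vip_cut and t-test p < p_cut."""
    X = np.vstack([X_con, X_mod])
    y = np.r_[np.zeros(len(X_con)), np.ones(len(X_mod))]      # class labels: Con = 0, Mod = 1
    pls = PLSRegression(n_components=2).fit(X, y)
    vip = vip_scores(pls)
    pvals = stats.ttest_ind(X_con, X_mod, axis=0).pvalue
    return [(n, round(float(v), 2), float(p))
            for n, v, p in zip(names, vip, pvals)
            if v > vip_cut and p < p_cut]
```

Here `X_con` and `X_mod` would be the peak-intensity matrices of the Con and Mod groups (one row per animal, one column per metabolite feature), and the returned list corresponds to candidate differential metabolites such as those collected in Table 2.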
Moreover, the Spearman correlation analysis was used to determine the significant correlations between metabolites and cytokines. As shown in Figure 5C, six metabolites, including 4-Amino-4-cyanobutanoic acid, octadecanoic acid, L-Proline, L-Tryptophan, sn-Glycero-3-phosphoethanolamine, and 5-Aminolevulinate, showed positive correlations with TC, TG, LDL-C, ALT, and MDA and were negatively related to SOD and HDL-C. Interestingly, these six metabolites seemed to be selectively related to cytokines. Among them, 4-Amino-4-cyanobutanoic acid and octadecanoic acid indeed showed positive correlations with TG, TC, and LDL-C (p < 0.05), illustrating that these two metabolites might aggravate the damage of hyperlipidemia to the body. The insignificant relationship between the two metabolites and HDL-C also proved that they focused only on the transport of total cholesterol from the liver to the plasma. In addition, 4-Amino-4-cyanobutanoic acid and octadecanoic acid were both negatively correlated with SOD (p < 0.001); this result suggested that they were associated with lipid peroxidation in the liver. 5-Aminolevulinate, sn-Glycero-3-phosphoethanolamine, and L-Tryptophan all showed negative correlations with SOD and HDL-C (p < 0.05). These three metabolites could not only induce hyperlipidemia but also cause adverse effects on liver function, based on the positive relationship between these compounds and TG, MDA, ALT, and LDL-C (p < 0.05). The relationships between the nine other metabolites and cytokines were contrary to the aforementioned situation. For the liver indicators, all of these metabolites were negatively correlated with ALT (p < 0.05), illustrating a function of liver protection. However, three of the nine metabolites, namely anserine, creatine, and O-Ureido-L-serine, were negatively correlated with TG, TC, and LDL-C (p < 0.05). Among those metabolites, only creatine showed a negative correlation with the above three factors (p < 0.01) and was positively correlated with HDL-C (p < 0.05). The result suggested that creatine seemed to show a unique function in alleviating hyperlipidemia. Moreover, the correlations between creatine and SOD and MDA also confirmed that creatine could inhibit lipid peroxidation in the liver. Other metabolites that could demonstrate this function were anserine, 2,3-Dihydroxycarbamazepine, and N-Formimino-L-aspartate (p < 0.05). Distance-based redundancy analysis (db-RDA) was applied to determine the differential biomarkers of genistin against hyperlipidemia. 4-Amino-4-cyanobutanoic acid and octadecanoic acid were negatively associated with SOD while showing a positive correlation with TG, TC, and LDL-C.
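As an illustration of the correlation step just described, the following sketch computes pairwise Spearman coefficients and p-values between metabolite intensities and the biochemical indices (TC, TG, LDL-C, HDL-C, ALT, SOD, MDA); the data-frame layout and column names are assumptions for the example, and the result is the kind of matrix that underlies a heat map such as Figure 5C. The creatine result is taken up again in the next paragraph.

```python
import pandas as pd
from scipy.stats import spearmanr

def metabolite_factor_correlation(metabolites: pd.DataFrame, factors: pd.DataFrame):
    """Spearman rho and p-value for every metabolite/biochemical-factor pair.

    Rows are animals; `metabolites` columns are metabolite intensities and
    `factors` columns are indices such as TC, TG, LDL-C, HDL-C, ALT, SOD, MDA.
    """
    rho = pd.DataFrame(index=metabolites.columns, columns=factors.columns, dtype=float)
    pval = rho.copy()
    for m in metabolites.columns:
        for f in factors.columns:
            r, p = spearmanr(metabolites[m], factors[f])
            rho.loc[m, f], pval.loc[m, f] = r, p
    return rho, pval
```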
In addition, creatine was more closely associated with cytokines than other metabolites. Thus, creatine should be considered a beneficial biomarker for genistin in the treatment of hyperlipidemia. Discussion The metabolic behaviors of drug or natural products in vivo have always been the focus of study for their continued development. The reactions involved in their metabolic pathways may be associated with the involved targets or endogenous pathways. In addition, the metabolic differences of drugs or natural products under normal and pathological conditions have been reported in the previous literature [22][23][24], which may provide some information about their mechanism. In this study, a high dose of genistin (150 mg/kg) was given orally to normal SD rats and hyperlipidemic SD rats [25]. Thirteen metabolites in plasma were found via UHPLC-Q-Exactive Orbitrap MS with an efficient analysis strategy. Among them, three metabolites were detected in both normal and pathological rats, involving the reactions of decarbonylation (M6), L-arabinylation (M1), and hydroxylation and methylation (M9). Meanwhile, seven metabolites were found only in rats administered normally, such as deglycosylation (M0), glucuronidation (M3), hydroxylation and methylation (M5), hydration (M4), decarbonylation (M7), and hydrogenation (M8) products of genistin. Based on the above results, we speculated that the reactions of three metabolites detected in both normal and pathological rats should be the routine pathways for genistin metabolism in vivo, and metabolic reactions of seven metabolites in normal rats might be stress patterns of genistin being cleared in a normal organism. Three other metabolites could be detected only in hyperlipidemic rats, including the reactions of dehydroxymethylation (M10), decarbonylation (M12), and carbonyl hydrogenation (M11). This fact demonstrates that genistin may be metabolized into three metabolites to participate in the pathogenesis of hyperlipidemia, or three metabolites have some advantages in the treatment of hyperlipidemia with genistin. It is worthwhile for this latent problem to be studied in the future. Isoflavones, also known as estrogen-like substances, have been proven to have remarkable abilities in regulating hormone homeostasis, cell proliferation, and metabolic regulation in women, such as soy isoflavone and Pueraria isoflavones [26][27][28]. In the next experiment, the special function of genistin was evaluated in HFD-induced hyperlipidemic rats. At first, HFD could disturb the lipid homeostasis in rats and accelerate the process of lipid peroxidation in liver [29]. The levels of ALT corroborated the above conclusions compared with normal rats. Secondly, genistin obviously reversed the disorder in vivo caused by HFD (p < 0.05); this effective activity was also confirmed by pathological results. Thirdly, hyperlipidemia was characterized by abnormally elevated levels of TC, TG, and LDL-C and abnormally reduced HDL-C levels. Genistin significantly improved this pathological change. At present, the research of genistin in the treatment of hyperlipidemia has never been mentioned, such that these results can construct the foundation for genistin as a new lipid-lowering agent. Finally, the mechanism behind genistin in the treatment of hyperlipidemia was preliminarily elucidated using metabolomics. 
In PCA and OPLS-DA score plots, the metabolic profile of genistin in rats, which was similar to that of normal rats, was verifiably separated in rats with hyperlipidemia induced by HFD. The levels of 15 metabolites in Mod rats were significantly adjusted compared with Con rats (p < 0.05), and these metabolites were significantly reversed by genistin (p < 0.05), which belonged to the metabolic pathways of arginine and proline metabolism and arginine biosynthesis. Spearman correlation analysis and db-RDA revealed that creatine should be considered a beneficial biomarker of genistin against hyperlipidemia. Creatine in mammal body could be synthesized from arginine, methionine, and proline found in the kidney, liver, and pancreas [30][31][32]. As an energy source that can be endogenously synthesized or obtained through diet and supplement, creatine is involved in cell metabolism through adenosine triphosphate (ATP) supplementation, which provides energy for skeletal muscles, organs, and tissues [33]. The deposition of lipids leads to the loss of ATP in the body, and creatine could promote the process of lipid β oxidation to release more ATP in the participation of multiple targets, such as adenosine 5'-monophosphate (AMP)-activated protein kinase (AMPK), peroxisome-proliferator-activated receptors (PPARs), sterol-regulatory element-binding proteins (SREBPs) [34]. Certainly, the relationship between creatine and lipids also reduces lipid peroxidation in the body [35]. In this study, genistin could significantly promote creatine production through the arginine and proline metabolic pathways. However, the relationship behind genistin, creatine, and the metabolic pathway remains mysterious, and the role of three genistin metabolites in hyperlipidemic rats should not be ignored. Chemicals and Reagents The reference substances of genistin, genistein, daidzin, and daidzein were commercially provided by Chengdu Must Biotechnology Co., Ltd. (Chengdu, China) with a purity ≥98% via UV-UHPLC. Their structures were fully elucidated by comparing the spectral data (ESI-MS and 1 H, 13 Before the experiment, all animals had to be maintained under standard animal room conditions (temperature 24 ± 2 • C, humidity 55-60%, 12/12 h light/dark cycles) with standard feed and water ad libitum for 1.0 week. Afterward, all rats were randomly divided into the control group (6 rats) and the hyperlipidemic group (6 rats) according to body weight. The rats in the normal group were fed normal rodent chow (Pengyue, Shandong, China), and those in the hyperlipidemic group were fed with a high-fat diet, containing a standard chow diet (65%), sucrose (20%), lard (15%), cholesterol (5%), sodium cholate (5%), and 5% yolk powder (Huafukang, Beijing, China) for 15 weeks. After 15 weeks, compared with the control group, the levels of TC, TG, and LDL-C in the hyperlipidemic group were significantly increased, and the level of HDL-C was significantly decreased, indicating that the animal model was successfully established. Collection and Preparation of Plasma Samples All rats in both groups were given genistin (150 mg/kg) via oral administration. After oral administration, the blood samples (about 0.5 mL) were taken from the rats in the normal and hyperlipidemic groups at different times of 0.5, 1, 1.5, 2, 4, and 6 h. The obtained samples were placed in the anticoagulant EP tubes of heparin sodium. After resting for 10.0 min, each blood sample was centrifuged for 15.0 min (3500 rpm, 4 • C). 
An amount of 100.0 µL of upper plasma was taken from the rats in the normal administration group at each time point to obtain mixed plasma. Cold methanol (3.0 mL) was added to mixed plasma samples (1.0 mL) for precipitation, and the supernatant was obtained via centrifugation for 10.0 min (4000 rpm, 4 • C). Plasma of the two groups were blown dry with a nitrogen blow dryer and stored in a refrigerator at −80 • C until use. Before analysis, samples of these two groups were redissolved in 300 µL methanol and centrifuged at 20,000 rpm. Collection of UHPLC-Q-Exactive Orbitrap MS Data The unsearchable metabolites were determined via UHPLC-Q-Exactive Orbitrap MS. Firstly, LC analysis was performed on a DIONEX Ultimate 300 UHPLC system (Thermo Fisher Scientific, MA, USA) with a binary pump, an autosampler, and a column oven. The chromatographic separation was performed with an ACQUITY UPLC BEH C18 column Then, the rats in the hyperlipidemic group were again allocated into the four following groups: the model group (Mod, n = 6); simvastatin group (Sim, n = 6) at the dose of 5 mg/kg/d (drug weight/body weight/day); genistin high-dose group (Hig, n = 6) at the dose of 5 mg/kg/d; and genistin low-dose group (Low, n = 6) at the dose of 2.5 mg/kg/d [36]. All drugs were administered orally to the respective rats. Except for the Con group, other rats were still fed a high-fat diet for the 3 weeks of treatment. Collection and Preparation of Biological Samples At the end of the experiment, all rats were fasted for 12 h with only deionized water. Then, the rats from five groups were killed in parallel using 10% chloral hydrate. All abdominal aortic blood samples were collected from each rat in each group and placed in EP tubes coated with heparin sodium and were centrifuged (3500 rpm) for 15.0 min at 4 • C, and the supernatants were taken for testing. The livers of all rats were taken out and rinsed with normal saline. Some livers were immersed in 4% paraformaldehyde for histopathological analysis. The remaining livers were rapidly quenched in liquid nitrogen and stored at −80 • C until use. The levels of TG, TC, LDL-C, HDL-C, ALT, SOD, and MDA in plasma samples from all rats were measured using a microplate reader (SpectraMax iD5, Pleasanton, CA, USA) [37][38][39]. The hepatic tissues fixed in 4% PFA were dehydrated and embedded in paraffin, crosssectioned into 4 µm-thick slices, and stained with hematoxylin-eosin (H&E). The sections of the remaining liver tissues were cleaned with PBS and cultured with 60% isopropanol for 5.0 min and then dyed in 0.5% Oil Red O staining liquid (Sigma, St Saint Louis, MO, USA) for 20.0 min. After being cleaned by PBS, all sections were stained with hematoxylin stain (Solarbio Science and Technology, Beijing, China) for 2.0 min [40]. The abovementioned indicators were all used to evaluate the anti-hyperlipidemia function of genistin. Preparation of Biological Samples In total, 200.0 µL plasma from each rat in each group was taken and added into an 800.0 µL mixture of cold methanol and acetonitrile (1:4). After 5.0 min, the miscible liquids were centrifuged at 4000 rpm for 10.0 min to obtain the supernatants [41]. All supernatants were rapidly dried with nitrogen and stored in a refrigerator at −80 • C until use. In addition, a 10 µL solution from each plasma sample was mixed and marked as quality control (QC) samples. The stability of the instrument needed to be calibrated using QC samples after every 5 plasma samples. 
The UHPLC-MS analysis was performed on a Q-Exactive MS/MS (Thermo Fisher Scientific, MA, USA). An electrospray ionization (ESI) ion source was used. The samples were collected via Full MS/dd-MS 2 scanning mode. The first-order scanning resolution was 70,000; the second-order scanning resolution was 35,000; the Fourier high-resolution scanning range was m/z 50-1050; and the ion chamber collision energy was 40%. The capillary temperature was 320 • C.The sheath gas flow rate was 30 arb; the auxiliary gas flow rate was 10 arb; and the spray voltage was 3.0 kV. Multivariate Analysis of UHPLC-Q-Exactive Orbitrap MS Data The UHPLC-Q-Exactive Orbitrap MS data were processed with Compound Discoverer 3.0 software (Thermo Fisher Scientific, Waltham, MA, USA) for noise cancellation, baseline correction, and normalization to obtain reliable datasets with some information, including m/z, peak intensities, and retention times. The relevant parameters were set to C Afterward, the processed datasets were added to the SIMCA-P 14.0 software (Umetrics, Sweden) to perform the principal component analysis (PCA) and orthogonal to partial least-squares-discriminant analysis (OPLS-DA). Among them, PCA was applied to discriminate the separation trends of all groups. OPLS-DA was used to characterize metabolic perturbation of hyperlipidemia. In addition, the S-plot scores were used to screen the differential metabolites of hyperlipidemia treated by genistin combined with other judgment methods, such as variable importance in projection (VIP) (generated in the OPLS-DA mode) and p-value (formed from relative intensity). Subsequently, the S-plot with the threshold of VIP > 1.0 and p < 0.05 could be used to screen and identify differential metabolites in the OPLS-DA model. The structures, molecular weights, and codes of differential metabolites were assigned according to the human metabolome database (What Is Dementia. Available online: http://www.alz.org/what-is-dementia.asp (accessed on 1 February 2023)). Finally, the Spearman correlation analysis and db-RDA analysis were used to determine the relationship between lipid factors and differential metabolites (https://www.bioincloud.tech/ (accessed on 1 February 2023)). Statistical analysis The statistical analysis of the data was performed using SPSS 22.0 software (Chicago, IL, USA). An unpaired Student's t-test was performed for a two-group comparison. For multiple comparisons, ANOVA was used. p < 0.05 was defined as statistically significant. The statistical analyses and figures were performed using GraphPad Prism 8.0 software (Santiago, MN, USA). Fasting body weight data were analyzed using one-way ANOVA. Conclusions In this study, the metabolic differences and similarities of genistin in normal rats and hyperlipidemic rats were compared. These results are able to provide the foundation for the metabolic mechanism of genistin in pathological and normal rats. The efficacy results revealed the explicit function of genistin against hyperlipidemia, which has been rarely reported; thus, the results demonstrate the possibility of genistin as a new lipid-lowering agent. The mechanism on genistin in the treatment of hyperlipidemia was preliminarily elucidated using metabolomics. Genistin may treat hyperlipidemia by regulating the level of creatine, which can be produced by the arginine and proline metabolic pathway in vivo. However, there are some limitations of the present study. 
Firstly, the certain relationship between the differences in metabolic behavior and metabolic pathways of genistin in normal and hyperlipidemic rats and the results of pharmacodynamics is still unclear. Secondly, it is also ambiguous whether there are differences in the levels of this metabolite. Finally, the character of genistin metabolites in the treatment of hyperlipidemia by mediating the level of creatine in vivo should also not be ignored. In summary, the relationship between genistin and hyperlipidemia may be further revealed in subsequent studies. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/molecules28052242/s1, Figures S1-S3: the metabolic profiles of daidzin, daidzein, genistein; Table S1: approximate values of fold changes of metabolites in the studied groups; Table S2: identified potential biomarkers regulated by genistin and their fragment ion information. Informed Consent Statement: Not applicable. Data Availability Statement: Most of the data used during the preparation of the manuscript are included in the Results and Discussion sections. However, for any additional details of the procedures and the original raw files, please contact the corresponding authors.
2023-03-02T16:20:11.949Z
2023-02-28T00:00:00.000
{ "year": 2023, "sha1": "dc316c14fdc4ac0adccf90b7e770189468e63764", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/5/2242/pdf?version=1677572773", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "03148d8b3727d327e31bf3e591e969004ad77c92", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [] }
119633077
pes2o/s2orc
v3-fos-license
Accelerated and Airy-Bloch oscillations

A quantum particle subjected to a constant force undergoes an accelerated motion following a parabolic path, which differs from the classical motion just because of wave packet spreading (quantum diffusion). However, when a periodic potential is added (such as in a crystal) the particle undergoes Bragg scattering and an oscillatory (rather than accelerated) motion is found, corresponding to the famous Bloch oscillations. Here we introduce an exactly-solvable quantum Hamiltonian model, corresponding to a generalized Wannier-Stark Hamiltonian $\hat{H}$, in which a quantum particle shows an intermediate dynamical behavior, namely an oscillatory motion superimposed to an accelerated one. Such a novel dynamical behavior is referred to as {\it accelerated Bloch oscillations}. Analytical expressions of the spectrum, improper eigenfunctions and propagator of the generalized Wannier-Stark Hamiltonian $\hat{H}$ are derived. Finally, it is shown that acceleration and quantum diffusion in the generalized Wannier-Stark Hamiltonian are prevented for Airy wave packets, which undergo a periodic breathing dynamics that can be referred to as {\it Airy-Bloch oscillations}.

Introduction

Stark Hamiltonians play a central role in several fields of physics, including quantum mechanics, condensed matter physics and optics 1,2,3 . In non-relativistic quantum mechanics, the simplest case is that of a quantum particle subjected to a constant force F , which is described by the Hamiltonian where the first term of the right hand side of Eq.(1), ǫ p_x^2 , is the kinetic energy operator, the second term is the potential of the external force, and p_x = −i∂_x is the momentum operator. As is well known (see e.g. chap. 8.1 of Ref. 2 or Ref. 4 ) the spectrum of Ĥ is purely absolutely continuous, with a set of (improper) eigenfunctions given by shifted Airy functions. Like in the classical limit, a quantum wave packet undergoes a uniformly accelerated motion but spreads owing to quantum diffusion. Indeed, after a change of reference frame (from the rest frame to an accelerated one) and a gauge transformation of the wave function, the Stark Hamiltonian (1) is equivalent to the Hamiltonian of a freely moving particle, i.e. with F = 0 (see, for example, 5,6,7 ). A much richer and physically relevant dynamics arises when a periodic potential is added to the external force, i.e. for the Hamiltonian where V (x) is the periodic potential with period d. Such a problem was originally studied by Bloch, Zener and Wannier in a series of seminal papers in connection with the motion of an electron in a crystal subjected to a uniform electric field 8,9,10,11,12 . Under the assumption that a single lattice band is excited and the force is weak enough to neglect tunneling into other bands of the crystal (Zener tunneling), the quantum motion of the electron can be described by an effective single-band Wannier-Stark Hamiltonian 3,13 . Remarkably, owing to Bragg scattering in the lattice a quantum wave packet in the crystal undergoes an oscillatory (rather than uniformly accelerated) motion with period T B = 2π/(F d), the famous Bloch oscillations (BOs). While in natural crystals electronic BOs have never been observed owing to dephasing and many-body effects, BOs have been impressively demonstrated in a number of experiments after the advent of semiconductor superlattices as terahertz radiation emitted from coherently oscillating electrons 14,15 .
Quantum and classical analogues of BOs have been also proposed and observed in a wide variety of different physical systems, such as in ultracold atoms and Bose-Einstein condensates 16,17,18,19,20,21,22 , In this work we introduce an exactly-solvable generalized Wannier-Stark Hamiltonian in which the dynamics of a quantum wave packet shows a behavior intermediate between the uniform acceleration of Hamiltonian (1) and the oscillatory BO dynamics of Hamiltonian (2). Namely, an oscillatory dynamics is superimposed to a parabolic motion, an effect that will be referred to as accelerated Bloch oscillations. The generalized Wannier-Stark Hamiltonian considered in this work has the form where T (q) is a periodic function of q with period 2π/d and zero mean. It differs from Eq.(1) because of the additional term T (p x ) to the free-particle kinetic energy operator. A possible physical implementation of such an Hamiltonian has been recently introduced in optics Ref. 58 , where transverse light dynamics in a self-imaging optical cavity is described by an Hamiltonian of the form described by Eq.(1). In Ref. 58 , the limiting case ǫ = 0 was mainly investigated, corresponding to pseudo-periodic BOs dynamics and absence of quantum diffusion. Here we consider the more general case ǫ = 0 and derive analytical expressions of the spectrum, eigenfunctions and propagator of the Hamiltonian (3). In particular, we prove rigorously that for ǫ = 0 the new phenomenon of accelerated Bloch oscillations arises: an oscillatory motion with period T B = 2π/(F d), i.e. analogous to BOs, is superimposed to the freelyaccelerating motion of the wave packet (as if T = 0 in (3)). Such a result holds for any normalizable wave packet, but can be violated for an initial non-normalizable probability distribution. Interestingly, for an initial condition corresponding to a (non-normalizable) Airy wave packet, eigenfunction of (1), a periodic oscillatory dynamics without acceleration is found, a phenomenon that can be referred to as Airy-Bloch oscillations. Generalized Wannier-Stark Hamiltonian Let us consider a quantum system described by the one-dimensional Schrödinger equation with Hamiltonian given by Eq.(3), where is a periodic function of period 2π/d with zero mean (T 0 = 0) and p x = −i∂ x is the momentum operator. For ǫ = 0, the Hamiltonian (3) is a Wannier-Stark Hamiltonian with a periodic kinetic energy operator that is generally introduced in solid-state physics as an effective Hamiltonian to describe single-band electron dynamics in slowly-varying external fields 3,13 . In this case T (q) is the band dispersion curve, and the discretization x = nd (n = 0, ±1, ±2, ±3, ...) at localized Wannier sites must be accomplished 3,13 . As is well-known, for ǫ = 0 and provided that the discretization x = nd is accomplished,Ĥ has a pure point spectrum given by an equally-spaced Wannier-Stark ladder with energy separation ω B = F d = 2π/T B ; in real space an initially localized wave packet undergoes periodic BOs with period T B . The generalized Wannier-Stark Hamiltonian (3) that we consider in this work differs from the effective Hamiltonian introduced in the theory of electron dynamics in crystals 3,13 because (i) the variable x is continuous rather than discrete, and (ii) the term ǫp 2 x breaks the periodicity of the kinetic energy operator. 
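For reference, the explicit operator forms implied by the description above can be written as follows; the overall sign of the force term is our assumption (chosen to be consistent with the Airy scale α = (F/ǫ)^{1/3} quoted in the appendices), and T(q) is expanded in a Fourier series with zero mean, as stated in the text:

\[
\hat{H}_{\rm Stark}=\epsilon\,\hat{p}_x^{2}+F\hat{x},\qquad
\hat{H}=\epsilon\,\hat{p}_x^{2}+T(\hat{p}_x)+F\hat{x},\qquad
T(q)=\sum_{n\neq 0}T_{n}\,e^{\,i n q d}.
\]

For the sinusoidal case used repeatedly below, T(q) = κ cos(qd), i.e. T_{±1} = κ/2, and ω_B = Fd = 2π/T_B is the Bloch frequency.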
In such a case, H has a pure absolutely continuous spectrum −∞ < E < ∞ and the improper (non-normalizable) eigenfunctions φ E (x), satisfying the eigenvalue equation can be analytically determined in terms of series of shifted Airy functions. The explicit expression of φ E (x), derived in Appendix A, reads where we have set and the following normalization condition holds It is interesting to specialize the general result given by Eqs. (7) and (8) to the following two limiting cases. (i) In the limit T (q) = 0, i.e. when the Hamiltonian (3) reduces to the standard Stark Hamiltonian (1), one has ρ n = (ǫ −1/3 F −1/6 )δ n,0 and thus the improper eigenfunctions (7) reduce to the usual form (see 4 ) (ii) In the limit ǫ → 0, i.e. when the Hamiltonian (3) has the form of the effective Wannier-Stark Hamiltonian of solid-state physics with a periodic kinetic energy operator 3,13 , taking into account that one obtains where we have set Equations (12) and (13) justify the result previously given in Ref. 58 and indicate that, even at ǫ = 0, the spectrum ofĤ is absolutely continuous. Such a result stems from the fact that the variable x here is considered a continuous variable, rather than discretized as in single-band electron transport theory 13 . Propagator The solution to the Schrödinger equation (4) with a given initial condition ψ(x, 0) can be formally written as The propagatorÛ , or its kernel U(x, y, t) entering in Eq. (14), can be determined from the spectral representation ofĤ, i.e. one has Substitution of Eq. (7) into Eq.(15) and after some cumbersome calculations one obtains (see Appendix B for technical details) where we have set and ρ n are given by Eq. (8). Exact analytical expressions of the function G l (t), and thus of the propagator (16), can be given in the important case of a sinusoidal shape of T (q), i.e. T (q) = κ cos(qd), which in electronic crystal theory describes a nearest-neighbor tight-binding lattice band. This important and exactly-solvable case is presented in Appendix C [see Eq.(C.6)]. It is interesting to specialize the form of the propagator given by Eq.(16) to the following two limiting cases. (i) In the limit T (q) = 0, i.e. when the Hamiltonian (3) reduces to the standard Stark Hamiltonian (1), one has G l (t) = (ǫ 2 F ) −1/3 δ l,0 and thus which is precisely the propagator of the Stark Hamiltonian (1) (see for example Eq.(7.4) in Ref. 1 ). (ii) In the limit ǫ = 0, Eq.(16) shows a singular behavior and the propagator U can be calculated as a limit of Eq.(16) as ǫ → 0. Since U is the kernel of an integral transformation [Eq. (14)], by use of the phase stationary method as ǫ → 0 one can set in all terms under the sum in Eq. (16). Using this result, after some straightforward calculations one obtains where σ n are defined by Eq.(13). Accelerated Bloch oscillations Propagation of an arbitrary initial wave packet ψ(x, 0) is determined by the integral transformation (14) with the propagator U defined by Eq. (16). To study the properties of wave packet evolution, let us distinguish three cases: (1), i.e. T (q) = 0. The propagator is given by Eq.(18), which describes particle acceleration and quantum diffusion. An initial wave packet undergoes uniform acceleration and the center of mass of the wave packet x(t) follows the parabolic trajectory 59 Analytical expression for ψ(x, t) can be given, for example, for an initial Gaussian wave packet distribution. In addition to the parabolic motion, wave packet spreading is observed as a result of quantum diffusion. 
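The parabolic center-of-mass trajectory invoked in case (i) can be checked directly from the Ehrenfest theorem; with the sign convention assumed above for the Stark Hamiltonian (and ħ = 1), a short worked version reads

\[
\frac{d\langle\hat{x}\rangle}{dt}=\langle i[\hat{H},\hat{x}]\rangle=2\epsilon\langle\hat{p}_x\rangle,\qquad
\frac{d\langle\hat{p}_x\rangle}{dt}=\langle i[\hat{H},\hat{p}_x]\rangle=-F,
\]
so that
\[
\langle\hat{x}(t)\rangle=\langle\hat{x}(0)\rangle+2\epsilon\langle\hat{p}_x(0)\rangle\,t-\epsilon F t^{2},
\]

i.e. a uniformly accelerated (parabolic) motion of the center of mass, while the width ∆x(t) grows by quantum diffusion as for a free particle.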
An example of accelerated Gaussian wave packet is shown in Fig.1. In Fig.1(a) the evolution of the probability density |ψ(x, t)| 2 is depicted in a pseudo color map for parameter values ǫ = 1/2, F = 0.2, T (q) = 0 and for the initial condition ψ(x, 0) ∝ exp(−x 2 /w 2 ) with w = 5. Figure 1(b) shows the evolution of the wave packet center of mass x(t) and width ∆x(t) defined by Note that the wave packet center of mass follows the parabolic trajectory according to Eq. (21), and that wave packet spreading arises because of quantum diffusion. where we used the property n σ n σ * n−l = (1/F )δ l,0 . Hence from Eqs. (14) and (24) on has Equation (25) shows that the dynamics is pseudo-periodic, i.e. the probability distribution |ψ(x, t)| 2 undergoes a periodic dynamics with the BO period T B , i.e. |ψ(x, t m )| 2 = |ψ(x, 0)| 2 , whereas the wave function ψ(x, t) does not [this is due to the phase factor exp(−2πimx/d) in Eq. (25)]. An example of pseudo-periodic dynamics for an initial Gaussian wave packet is shown in Fig.2 for a sinusoidal shape of T (q), i.e. T (q) = κ cos(qd). Parameter values used in the simulations are ǫ = 0, F = 0.2, κ = 1 and d = 4, corresponding to a BO period T B = 2π/(F d) ≃ 7.85. The initial condition is the Gaussian wave packet ψ(x, 0) ∝ exp(−x 2 /w 2 ) with w = 5 as in Fig.1. Figure 2(a) and (b) depict on a pseudo color map the temporal evolution of the probability density |ψ(x, t)| 2 and of the real part Re(ψ(x, t)) of the wave function, respectively. Note that, while the probability density undergoes a periodic oscillatory behavior analogous to the ordinary BOs in a crystal, the wave function does not; in particular the interference fringes visible in Fig.2(b) arise from the phase factor exp(−iF xt) appearing in Eq. (20). Such a pseudo-periodic dynamical regime was previously introduced in Ref. 58 and referred to as pseudo-Bloch oscillations. (iii) Third-case: accelerated Bloch oscillations. This is the most general case, corresponding to T (q) = 0 and ǫ = 0. In this case the dynamical behavior is intermediate between the two previously considered regimes, i.e. one observes an oscillatory dynamics of the wave packet superimposed to the parabolic path as described by Eq. (21). Such a property can be proven by observing that, at times t m = mT B (m = 0, ±1, ±2, ...), the two propagators U(x, y, t) as given by Eq. (16) and Eq. (18) do coincide. In fact, at times t = t m one has where Ω l are defined by Eq.(A.17) in Appendix A. Substitution of Eq.(26) into Eq.(16) yields which coincides with Eq.(18) taken at t = t m . Indicating byĤ 1 andĤ 2 the two Hamiltonians defined by Eqs. (1) and (3), respectively, i.e.Ĥ 1 is the limiting case of H 2 when T (q) = 0, the previous result can be formally written as 60 i.e. if the dynamics is mapped at discretized times t m integer multiplies than the BO period T B the two Hamiltonians (1) and (3) yields the same evolution. Therefore the oscillatory motion in each BO cycle is superimposed to the parabolic path (21). Such a dynamical behavior can be thus referred to as accelerated Bloch oscillations. At times different than t m = mT B , the following relation between the solutions ψ 1 (x, t) = exp(−itĤ 1 )ψ(x, 0) and ψ 2 (x, t) = exp(−itĤ 2 )ψ(x, 0) to the Schrödinger equations with HamiltoniansĤ 1 andĤ 2 corresponding to the same initial condition ψ(x, t) can be derived (see Appendix D) where we have set For example, for a sinusoidal function T (q) = κ cos(qd) the explicit expressions of the functions Λ l (t), as obtained from Eqs. 
(30) and (C.5), can be written in closed form. Equation (29) indicates that, at times t ≠ t_m, the solution ψ_2(x, t) is given by a suitable superposition (interference) of shifted replicas of ψ_1(x, t), weighted by the complex amplitudes Λ_l(t). At t = t_m, Λ_l(t_m) = δ_{l,0} and thus ψ_2(x, t_m) = ψ_1(x, t_m), as previously discussed. An example of accelerated BOs is shown in Fig. 3 for a sinusoidal function T(q) = κ cos(qd) and for parameter values ǫ = 1/2, F = 0.2, κ = 1 and d = 4, i.e. for the same parameter values as in Fig. 2 except for ǫ = 1/2. The initial condition is the Gaussian wave packet ψ(x, 0) ∝ exp(−x^2/w^2) with w = 5 (i.e. as in Figs. 1 and 2). The figure clearly shows that the wave packet undergoes an oscillatory motion with period T_B superimposed on the averaged parabolic path of Fig. 1. Note that, while in the pseudo-BO regime [case (ii), see Fig. 2] wave packet spreading is suppressed, in the regime of accelerated BOs (Fig. 3) the wave packet spreads, on average following the same spreading law as in the parabolic case (i) discussed above.

Airy-Bloch oscillations

As shown in Sec. 3.2, any normalizable wave packet undergoes accelerated BOs and spreading when evolved by the generalized Wannier-Stark Hamiltonian (3) with a non-vanishing value of ǫ. However, such a property can be violated by an initial wave packet that is not normalizable, i.e. such that ∫_{−∞}^{∞} dx |ψ(x, 0)|^2 = ∞. An interesting case is the one corresponding to an initial condition ψ(x, 0) which is a generalized eigenfunction of the Stark Hamiltonian (1), i.e. an Airy wave packet. Let us assume, for example, the eigenfunction of (1) with energy E = 0, which, apart from a normalization constant, is given by Eq. (32) [see Eq. (10)]. For the free-particle Schrödinger equation, i.e. for the Hamiltonian Ĥ = ǫ p_x^2, Airy wave packets are shape-preserving and undergo a self-accelerating motion, as originally shown by Berry and Balazs in Ref. 61 and extended in several subsequent works (see, for example, Ref. 62 and references therein). For the Stark Hamiltonian (1) they do not accelerate, i.e. they are at rest, because of the additional force F: indeed they are eigenstates of the Hamiltonian (1). Interestingly, we show now that for the generalized Wannier-Stark Hamiltonian (3) the Airy wave packet (32) evolves undergoing a periodic breathing dynamics with the BO period T_B = 2π/(F d). Such a periodic and acceleration-free breathing dynamics is referred to as Airy-Bloch oscillations. In fact, the evolution of the wave packet (32) under the generalized Wannier-Stark Hamiltonian (3) can be readily calculated using Eq. (29) and taking into account that ψ(x, 0) is an eigenfunction of (1) with zero energy. One obtains Eq. (33). Since Λ_l(t_m) = δ_{l,0} for t_m = mT_B, one has ψ(x, t_m) = ψ(x, 0), i.e. a strictly periodic dynamics is obtained. Hence for the initial Airy distribution (32) of the wave function the accelerated motion is suppressed, and a periodic breathing dynamics is established. An example of Airy-Bloch oscillations is shown in Fig. 4(a). The periodic dynamics shown in the figure resembles a quantum carpet 63 in Hamiltonians with absolutely continuous spectrum, such as the quantum carpets arising from the Talbot effect for the free-particle Hamiltonian Ĥ = ǫ p_x^2 63,64. In practice, the Airy distribution is an idealized one and corresponds to a delocalized (non-normalizable) state of the particle. However, it can be approximated by normalizable distributions obtained by enveloping the Airy function with a sufficiently-decaying function at x → −∞; a numerical illustration of such an apodized packet and of its revival dynamics is sketched below.
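Anticipating the normalizable approximation discussed next, the loss of strict periodicity for an apodized Airy packet can also be checked numerically. The sketch below is only an illustration: the exponential envelope exp(x/x0) used here is simply one convenient sufficiently-decaying choice (the truncation length x0 is a hypothetical parameter, not the quantity appearing in the analytical approximation of the text). It builds a truncated Airy profile Ai((F/ǫ)^(1/3) x) damped at x → −∞, propagates it with the same split-step scheme as in the sketch above, and evaluates the revival probability, i.e. the squared overlap with the initial state, at integer multiples of T_B.

```python
import numpy as np
from scipy.special import airy

eps, F, kappa, d = 0.5, 0.2, 1.0, 4.0
TB = 2.0 * np.pi / (F * d)                           # Bloch-oscillation period

L, N = 800.0, 8192
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

# Truncated Airy initial state: Ai(alpha*x) damped by exp(x/x0) for x < 0.
# x0 is a hypothetical truncation length; larger x0 approaches the ideal Airy packet.
alpha, x0 = (F / eps) ** (1.0 / 3.0), 60.0
psi0 = airy(alpha * x)[0] * np.exp(np.minimum(x, 0.0) / x0)
psi0 = psi0.astype(complex)
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))

def step_factors(dt):
    """Half potential step and full kinetic step for the split-step scheme."""
    return (np.exp(-1j * F * x * dt / 2.0),
            np.exp(-1j * (eps * k**2 + kappa * np.cos(k * d)) * dt))

def revival_probability(n_periods, steps_per_period=400):
    dt = TB / steps_per_period
    hV, fK = step_factors(dt)
    psi = psi0.copy()
    p_rev = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            psi = hV * np.fft.ifft(fK * np.fft.fft(hV * psi))
        overlap = np.trapz(np.conj(psi0) * psi, x)
        p_rev.append(abs(overlap) ** 2)              # P_rev(t_m) = |<psi(0)|psi(t_m)>|^2
    return p_rev

# A strongly truncated packet (small x0) loses its revivals quickly, while a weakly
# truncated one stays close to P_rev = 1 over the first Bloch-oscillation cycles.
print(revival_probability(5))
```

Varying the truncation length x0 plays the same qualitative role as the parameter of the analytical normalizable approximation introduced next.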
For example, an initial normalizable wave packet distribution that approximates Eq. (32) is given by Eq. (34), where a > 0 and N is a normalization constant. Note that the function defined by Eq. (34) goes to zero at both x → ±∞ and is normalizable, because the Airy function decays at x → ∞ faster than exponentially. Since the wave packet defined by Eq. (34) is normalizable, its center of mass undergoes an oscillatory and (on average) accelerated motion; however, for sufficiently large values of a the Airy-Bloch breathing dynamics of Fig. 4(a) can be observed in the earlier BO cycles, as shown in Fig. 4(b-d). After a few BO cycles the periodicity is lost, according to the analysis of Sec. 3.2. This can be seen by computing the temporal evolution of the revival probability into the original state, defined as the squared modulus of the overlap with the initial state, P_rev(t) = |⟨ψ(x, 0)|ψ(x, t)⟩|^2. The behavior of P_rev(t) is shown in Fig. 4(e). Note that, as a decreases, the periodicity of the dynamics is rapidly lost.

Conclusion

In this work the phenomena of accelerated and Airy-Bloch oscillations have been predicted, which provide significant extensions of the famous Bloch oscillations originally predicted for electrons in a crystal under a uniform electric field. We introduced an exactly-solvable generalized Wannier-Stark Hamiltonian and showed rather generally that a dynamical regime intermediate between a purely oscillatory and a uniformly accelerated motion arises (accelerated Bloch oscillations). As a special case, for wave packets with an Airy shape the acceleration can be suppressed and a purely oscillatory (breathing) dynamics, leading to quantum carpets, is predicted (Airy-Bloch oscillations). Owing to the possibility, offered by optics 65,66, of emulating Schrödinger equations with engineered potentials and kinetic energy operators, the predicted phenomena of accelerated and Airy-Bloch oscillations could be observed in an optical setting, as discussed in Ref. 58.

Appendix A. Generalized eigenfunctions of the Hamiltonian (3)

In this Appendix we derive Eqs. (7) and (8). We may look for a solution to the eigenvalue equation (6) of the form (A.2), where the coefficients ρ_n, α and β enter together with a function S(q), which is a periodic function of q with period 2π/d. The coefficients ρ_n are determined from the spectrum S(q) by the inverse relation (A.7). From Eqs. (A.6) and (A.7) a differential equation for the spectrum is obtained, which can be solved for S(q). The parameter β is determined by imposing the periodicity of S(q), i.e. that S(q + 2π/d) = S(q). Finally, the constant S(0) is determined by imposing that the generalized eigenfunctions φ_E(x) satisfy the usual normalization condition (9) given in the text. To this aim, let us explicitly calculate the scalar product ⟨φ_{E′}(x)|φ_E(x)⟩ using Eq. (A.2). Taking into account the definition of the quantities Ω_l and using Eq. (A.7), it can be readily shown that, since |S(q)|^2 = |S(0)|^2 [see Eq. (A.12)], one has Ω_l = |S(0)|^2 δ_{l,0}, and thus the normalization (9) is obtained by a suitable choice of S(0).

Appendix B

In this Appendix we derive the analytical form (16), given in the text, for the kernel U(x, y, t) of the propagator Û(t) of the Hamiltonian (3). Substitution of Eq. (7) into Eq. (15) yields Eq. (B.1), with α = (F/ǫ)^(1/3).
After a change of the integration variable and of the summation indices on the right-hand side of Eq. (B.1), one obtains the propagator in the form (16) given in the text, with the definitions introduced there. Let us indicate by ψ_1(x, t) = exp(−itĤ_1)ψ(x, 0) and ψ_2(x, t) = exp(−itĤ_2)ψ(x, 0) (C.7) the solutions to the Schrödinger equation i∂_t ψ = Ĥψ with Hamiltonians Ĥ_1 = ǫ p_x^2 + F x and Ĥ_2 = Ĥ_1 + T(p_x), respectively, corresponding to the same initial condition ψ(x, 0). One has the corresponding integral representations, in which the propagators U_1(x, y, t) and U_2(x, y, t) are given by Eqs. (18) and (16), respectively. From a comparison of Eqs. (16) and (18)
2016-05-24T16:07:18.000Z
2015-09-16T00:00:00.000
{ "year": 2016, "sha1": "c43d7a2e7f21f87d0e8b9d8cf4aacacac29c5e6f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1509.04959", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c43d7a2e7f21f87d0e8b9d8cf4aacacac29c5e6f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
270498811
pes2o/s2orc
v3-fos-license
Comparative Analysis of Physiological Responses and Intestinal Microbiota in Juvenile Soft-Shelled Turtle (Pelodiscus sinensis) Fed Four Types of Dietary Carbohydrates Simple Summary Carbohydrate is an important energy nutrient in the feed of aquatic animals. Generally, aquatic animals usually exhibit a varying efficiency in utilizing different carbohydrate sources. In order to understand the carbohydrate utilization efficiency, this study investigated the physiological responses of soft-shelled turtles (Pelodiscus sinensis) that were fed four types of carbohydrates with different complexities and configurations. The results indicated that the best growth performance and feed efficiency were found in the starch diet, followed by the glucose, fructose, and cellulose groups in sequence. Dietary starch demonstrated a robust lipogenic function by inducing the expression of the genes involved in glucolipid metabolism, with the results of elevated plasma triglyceride levels and an increased lipid content in both the whole body and the liver. Glucose and fructose diets caused postprandial hyperglycemia in P. sinensis due to the un-inhibited gluconeogenesis. P. sinensis that were fed a fructose diet did not exhibit a higher lipid deposition compared to the glucose diet, as seen from mammals. Cellulose was not a suitable energy source for P. sinensis. Abstract A 60 day feeding trial was conducted to evaluate the impacts of dietary carbohydrates with different complexities and configurations on the growth, plasma parameters, apparent digestibility, intestinal microbiota, glucose, and lipid metabolism of soft-shelled turtles (Pelodiscus sinensis). Four experimental diets were formulated by adding 170 g/kg glucose, fructose, α-starch, or cellulose, respectively. A total of 280 turtles (initial body weight 5.11 ± 0.21 g) were distributed into 28 tanks and were fed twice daily. The results showed that the best growth performance and apparent digestibility was observed in the α-starch group, followed by the glucose, fructose, and cellulose groups (p < 0.05). Monosaccharides (glucose and fructose) significantly enhanced the postprandial plasma glucose levels and hepatosomatic index compared to polysaccharides, due to the un-inhibited gluconeogenesis (p < 0.05). Starch significantly up-regulated the expression of the genes involved in glycolysis, pentose phosphate pathway, lipid anabolism and catabolism, and the transcriptional regulation factors of glycolipid metabolism (srebp and chrebp) (p < 0.05), resulting in higher plasma triglyceride levels and lipid contents in the liver and the whole body. The fructose group exhibited a lower lipid deposition compared with the glucose group, mainly by inhibiting the expression of srebp and chrebp. Cellulose enhanced the proportion of opportunistic pathogenic bacteria. In conclusion, P. sinensis utilized α-starch better than glucose, fructose, and cellulose. Introduction The soft-shelled turtle (Pelodiscus sinensis) is an aquatic reptile found in China, Japan, and South Korea.Its natural population has been steadily decreasing, leading to its assessment as a vulnerable species on the IUCN (International Union for Conservation of Nature) Red List [1].In recent years, P. sinensis has emerged as a prominent freshwater aquaculture species in China due to its nutritional value, pharmacological functions, and immune-boosting properties [2].In 2022, China achieved a substantial yield of 373,709 metric tons of P. 
sinensis, as indicated by the latest statistical data [3].As a carnivorous animal, P. sinensis primarily consume a varied diet in their natural habit, including small fish, mollusks, insect larvae, and the seed of marsh plants [4].The rapid growth of P. sinensis obtained in intensive aquaculture heavily relies on the dietary protein supply, primarily sourced from fishmeal [5,6].But the continually lower availability and the increasing price of fishmeal have become the bottleneck for the sustainable development of turtle aquaculture.Hence, some strategies should be implemented to reduce the protein levels and substitute fishmeal with alternative nutrients in the artificial feed of turtles.Carbohydrates are the second most important energy source in turtle feeds, offering numerous benefits such as excellent accessibility, cost-effectiveness, favorable feed shaping, and the absence of ammonia emissions [7][8][9][10].Understanding the utilization efficiency of carbohydrate in the feed of P. sinensis is helpful to resolve this problem. Glucose is a common monosaccharide in nature and serves as the dominant carbohydrate source for metabolic processes in the animal body.Starch, polymerized from glucose, is the main complex carbohydrate source stored in plant.Compared to monosaccharides, polysaccharides are better utilized for many aquatic animals, including P. sinensis [11][12][13][14][15][16][17].Nevertheless, some aquatic animals exhibited a better utilization capacity of monosaccharides [18][19][20].The discrepancies in the utilization ability of various carbohydrate sources are usually considered to be closely related to the complexity of the carbohydrate structure, which directly influences both the speed of carbohydrate absorption in the intestine and postprandial hyperglycemia.The utilization capacity of carbohydrates with different complexities varies in different species, and may be correlated to the digestibility and metabolism.A recent study demonstrated that the Nile tilapia (Oreochromis niloticus), which fed on a polysaccharide diet, manifested better lipid deposition and a higher expression of key genes in glycolysis and lipid metabolism than monosaccharide manipulation [17].Therefore, more studies on digestibility and metabolism should be conducted to disclose the mechanism of utilizing carbohydrates with different complexities. Fructose is a natural isomer of glucose, exhibiting a different metabolic pathway in the bodies of animals compared to glucose.In contrast to glucose, fructose metabolism is independent of insulin and is not regulated by the feedback of ATP and citrate (i.e., there is no rate-limiting enzyme in fructose metabolism), resulting in a very rapid metabolic process [21].Research has demonstrated that fructose is more prone to promoting lipid deposition compared to glucose in mammals, and excessive fructose consumption would lead to hyperlipidemia and non-alcoholic fatty liver disease [22].However, this phenomenon has not been observed in fish studies.Most fish that are fed on a fructose diet usually show a decrease or stabilization in triglycerides and body lipid content [9,23].At present, the underlying mechanism for the differences in fructose metabolism between mammals and aquatic animals remains unknown.Additionally, the difference in glucose and fructose utilization has not been reported in P. sinensis.So, further investigations on fructose supplementation in the diet are necessary to address this knowledge gap. 
Intestinal microbiota occupies an essential position in maintaining the host's nutrient metabolism and body health, and also participates in the digestion and fermentation process of dietary carbohydrates [24].Different diets affect the structure of the intestinal microbial community in aquatic animals and an unbalanced diet composition may lead to the augmentation of harmful bacteria in the intestine, causing inflammation and intestinal damage [25,26].Currently, the impact of carbohydrate sources on the intestinal microbiota of aquatic animals has received scant attention, despite the fact that this host-microbe interaction holds the potential to shed light on numerous unresolved questions. Given the challenges in the feed of P. sinensis and advances in research on carbohydrate metabolism, we hypothesize that P. sinensis possess a distinct metabolic mechanism for utilizing various carbohydrate sources.Therefore, this study was conducted to investigate the impacts of four types of carbohydrates with varying complexities and isomers (glucose, fructose, α-starch, and cellulose) on the growth performance, physiological indices, digestibility, intestinal microbiota, and gene expression related to glucose and lipid metabolism in P. sinensis.The comprehensive research on the physiological responses to different types of carbohydrates would lay the foundation for enhancing dietary carbohydrate utilization and promoting the sustainable development of P. sinensis aquaculture. Experimental Design and Breeding Process Four isonitrogenous (48%) and isolipidic (8%) experimental diets with different carbohydrates were formulated by adding 170 g/kg glucose, fructose (monosaccharide), α-cassava-starch (digestible polysaccharide), or cellulose (indigestible polysaccharide), respectively (Table 1).The total dry feed ingredients were weighed, mixed, and crushed through 80 mesh screens, followed by supplementation with oil and water.Particles of 3.0 mm diameter were made using a feed pelleting machine (EL-260, Youyi Machinery Factory, Weihai, China) and were preserved in a refrigerator at −20 • C until use. , 500IU/g; VE, 5.5 mg/g; VK 3 , 0.5 mg/g; CuSO The turtles (approximately 3 g) used in this experiment were obtained from Yutian Farm (Tangshan, China).Acclimatization was conducted for two weeks under conditions suitable for the growth and survival of P. 
sinensis. After that, 280 turtles (5.11 ± 0.21 g) were randomly selected, evenly allocated to 28 tanks, and fed the respective experimental diets in septuplicate to apparent satiation at 08:30 and 17:30 each day. After 30 min of feeding, the feed residue was siphoned out, dried, and weighed. The measured dissolution loss rate of the feed was used to accurately calibrate the feed intake. During the experimental period, the culture water was monitored to maintain a suitable quality (temperature 30 ± 0.5 °C, pH 7.8 ± 0.2, dissolved oxygen concentration ≥6 mg/L, and ammonia nitrogen concentration ≤0.1 mg/L). The feeding trial lasted for 60 days. At the end of the feeding trial, turtles were deprived of diets for 24 h. All turtles were anesthetized with eugenol (1:10,000; Shanghai Reagent Corp., Shanghai, China) and were subsequently weighed in batch. Three turtles per tank were sampled randomly to analyze whole-body proximate compositions. Additionally, another three turtles from each tank were randomly chosen and individually weighed, and were then sampled for blood and other tissues. Blood was drawn from the neck, put in centrifuge tubes rinsed with heparin, and centrifuged at 4 °C and 3000× g for 15 min to obtain plasma, which was preserved at −80 °C for further biochemical analysis. After blood collection, the turtles were dissected on an ice-cold plate to sample the viscera and liver for calculating the hepatosomatic index (HSI), viscerosomatic index (VSI) and proximate compositions of the liver, and the intestinal contents were isolated under sterile conditions for intestinal microbiota analysis. The mid-gut and liver tissues (1 cm length) were taken and fixed in 4% paraformaldehyde solution for morphological analysis. Subsequently, the remaining liver was wrapped in aluminum foil, quickly frozen in liquid nitrogen, and then stored in a −80 °C refrigerator for further gene expression studies.

Apparent Digestibility Analysis

In order to determine the apparent digestibility, yttrium oxide (Y2O3) was supplemented at a 1 g/kg level in the diets as an inert indicator. From the second week of the formal trial, fresh feces (in a capsule state) in each tank were collected daily, dried at 65 °C for 12 h, and stored at −20 °C. The yttrium contents in feed and feces were measured using an inductively coupled plasma spectrometer (LABTAM 8410, Labtam Instrument Co., Ltd., Canberra, Australia). Meanwhile, the crude protein and lipid contents and energy values of feed and feces were determined. The apparent digestibility coefficients were calculated using the following method: ADC (apparent digestibility coefficient) of dry matter (%) = 100 × [1 − (yttrium in diet/yttrium in feces)]; ADC of nutrient or energy (%) = 100 × [1 − (yttrium in diet/yttrium in feces) × (nutrient or energy in feces/nutrient or energy in diet)].

Histological Analysis

The fixed intestinal tissues were dehydrated step by step with different concentrations of alcohol and a mixed xylene solution, followed by waxing, embedding, and slicing, as described by Guo et al. (2023) [27]. The intestinal sections were stained using hematoxylin and eosin (H&E). The fixed liver segments were stained using oil red O. All slices of intestinal sections and liver lipid droplets were observed with a ZEISS microscope (Imager A1m, Carl Zeiss AG, Oberkochen, Germany).
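The indirect-marker digestibility calculation described in the Apparent Digestibility Analysis subsection above can be sketched in a few lines. The snippet below is a generic illustration of that calculation; the function names and the example numbers are hypothetical and are not values taken from the study.

```python
def adc_dry_matter(marker_diet: float, marker_feces: float) -> float:
    """Apparent digestibility coefficient (%) of dry matter,
    using an inert marker (e.g. yttrium) measured in diet and feces."""
    return 100.0 * (1.0 - marker_diet / marker_feces)

def adc_nutrient(marker_diet: float, marker_feces: float,
                 nutrient_diet: float, nutrient_feces: float) -> float:
    """Apparent digestibility coefficient (%) of a nutrient or of energy.
    Marker and nutrient concentrations must share the same basis (e.g. dry matter)."""
    return 100.0 * (1.0 - (marker_diet / marker_feces)
                    * (nutrient_feces / nutrient_diet))

# Hypothetical example values (g/kg DM for the marker, % DM for crude protein):
y_diet, y_feces = 1.0, 3.2
cp_diet, cp_feces = 48.0, 22.0
print(f"ADC dry matter: {adc_dry_matter(y_diet, y_feces):.1f} %")
print(f"ADC crude protein: {adc_nutrient(y_diet, y_feces, cp_diet, cp_feces):.1f} %")
```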
Proximate Composition and Plasma Biochemical Analysis The nutrient compositions in the diets, turtle bodies, feces, and livers were determined using standard methods [28].Moisture, crude protein, and ash contents were analyzed using an oven (GZX-9240MBE, Shanghai, China), Auto Kjeldahl System (KjeltecTM 8420, FOSS Tecator, Hillerød, Denmark), and muffle furnace (Taisete Co. Ltd., Tianjin, China), respectively.The crude lipid contents were determined using the Soxhlet extraction method.The moisture of the liver was analyzed using a vacuum freeze dryer (GZX-9240MBE, Shanghai, China), and the liver lipid content was determined following the chloroform-methanol extraction method [29], according to the description of Peng et al. (2014) [30].Gross energy was measured using a Parr 6200 Calorimeter (Parr Instrument Company, Moline, IL, USA).Liver glycogen contents were determined according to the instruction of commercial kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).Plasma glucose, total protein, cholesterol, and triglyceride contents were determined using the glucose oxidase method, biuret method, cholesterol oxidase method, and GPO-POD method, respectively. Analysis of Intestinal Microbiota Genomic DNA from the intestinal sample was extracted with the PowerFecal™ DNA Isolation Kit (MoBio Laboratories, Carlsbad, CA USA).The Illumina MiSeq platform was used to perform the high-throughput sequencing after amplification of the 16S rRNA V3-V4 region with barcoded fusion primers of 338F and 806R.All sequences were classified into operational taxonomic units (OTUs) picked at a 97% similarity level using QIIME 1.9.1 (Quantitative Insights Into Microbial Ecology) after the removal of low quality scores and were then merged using FLASH 1.2.11software. RT-qPCR Analysis Total RNAs from intestinal and liver tissues were extracted with TRIzol reagent, according to the manufacturer's instructions.The quantity and quality of the extracted RNA were analyzed at 260/280 nm using an ultrafine ultraviolet spectrophotometer (NanoDrop 1000, Thermo, Waltham, MA, USA).cDNA was synthesized via reverse synthesis using the PrimeScriptTM reverse transcription kit (TransGen Biotech Co., Ltd., Beijing, China).The cDNA concentrations were unified according to the concentrations determined before qPCR.A SYBR Green I SuperMix kit (TransGen Biotech Co., Ltd., Beijing, China) was used to determine the gene expressions.β-actin was adopted as the reference gene, and PCR was performed using real-time quantitative PCR.Specific Primers were designed with Premier 5.0 software (Table 2).RT-qPCR was performed on an ABI7300 qPCR system (ABI Prism 7300, Waltham, MA, USA) with a total volume of 20 µL, containing 10 µL of 2 × Trans-Script Top Green qPCR Super, 0.5 µL of 10 mM forward and reverse primers, 7 µL of nuclease-free water, and 2 µL of cDNA templates.The following reaction conditions were used: 94 • C for 30 s followed by 45 cycles consisting of 94 • C for 5 s and 60 • C for 30 s.After each reaction, a melting curve analysis of the amplification products was performed to determine the specificity of the amplification reaction.The relative expressions of the target genes were calculated according to the 2 −∆∆Ct method. Table 2.Nucleotide sequences of primers used to quantify gene expressions in the real-time PCR analysis. 
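The relative expression values reported in the Results were obtained with the 2^−ΔΔCt method mentioned above. A minimal sketch of that calculation is given below; the Ct numbers are hypothetical, and β-actin is used as the reference gene as in the study.

```python
def relative_expression(ct_target: float, ct_reference: float,
                        ct_target_cal: float, ct_reference_cal: float) -> float:
    """Relative mRNA expression by the 2^-ddCt method.
    ct_target / ct_reference: Ct values of the target and reference gene in a sample;
    *_cal: the same Ct values in the calibrator (control) sample."""
    d_ct_sample = ct_target - ct_reference          # normalize to the reference gene
    d_ct_cal = ct_target_cal - ct_reference_cal
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target gene and beta-actin in a starch-fed sample,
# compared with a calibrator sample from another diet group.
print(relative_expression(ct_target=24.1, ct_reference=18.0,
                          ct_target_cal=25.6, ct_reference_cal=18.2))
```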
Statistical Analysis All data are shown as mean ± S.E.M.Before analysis, the normality and homogeneity of variance were verified using a Shapiro-Wilk test and Levene's test, respectively.Oneway ANOVA was used in this study and was carried out using STATISTICA 10.0 software (Statsoft Inc., Tulsa, OK, USA).Whenever there was a significant difference, Duncan's test was used for the multiple comparisons.Alpha-diversity and beta-diversity analysis of intestinal microbial communities were performed using QIIME.Beta diversity of microbial communities among samples was analyzed using UniFrac distance metrics and visualized via Principal Coordinate Analysis (PCoA).Statistical differences in the relative abundance of microbiota were analyzed using STAMP based on Welch's t-test.A p < 0.05 was considered as the significant level. Growth Performance Table 3 shows that the turtles fed with an α-starch diet had a significantly lower feeding rate (FR) and feed conversion ratio (FCR) than those fed other diets (p < 0.001).The values of FBW, WGR, and SGR in the starch diet were higher than that in the cellulose group (p = 0.024).The starch-fed turtles showed a significantly higher PER than the other groups (p < 0.001).There are significant differences in PDE or LDE values between any two groups, and the order is starch > glucose > fructose > cellulose (p < 0.001).The HSI and VSI of turtles fed glucose and fructose diets were dramatically higher than those in the starch and cellulose groups (p < 0.001). Apparent Digestibility Coefficient Different types of carbohydrates remarkably impacted the apparent digestibility coefficients of turtles (Table 4).The ADCs of dry matter, energy, and carbohydrates among all treatments exhibited significant differences between any two groups, following the order of starch > glucose > fructose > cellulose (p < 0.001).The turtles fed with glucose and α-starch had a significantly higher ADC protein than those fed with fructose and cellulose diets (p = 0.001).Values in the same row followed by different superscripts letters are significantly different (p < 0.05).ADC represents apparent digestibility coefficient. Proximate Compositions of Whole Body and Liver Table 5 demonstrates that the crude protein and lipid contents of turtles were significantly different between any two groups, with the order of starch > glucose > fructose > cellulose (p < 0.001), but the moisture contents exhibited a complete reversal in comparison to the protein and lipid trends.The liver lipid levels in the glucose and starch groups were significantly higher than those in the fructose and cellulose groups (p = 0.001), and the liver glycogen contents of turtles fed with a cellulose diet was significantly lower than those fed with other diets (p < 0.001).Values in the same row followed by different superscripts letters are significantly different (p < 0.05). Plasma Biochemical Parameters Table 6 shows that significantly higher plasma glucose levels were observed in glucose and fructose treatments compared to starch and cellulose diets (p = 0.002).There are significant differences in plasma triacylglycerol (TG) levels among the groups in the following order: starch > glucose > fructose > cellulose (p < 0.001).The plasma total cholesterol contents of turtles in the glucose and cellulose groups were remarkably lower than that in the fructose and starch groups (p = 0.006).Plasma total protein contents were not dramatically affected by carbohydrate sources (p = 0.386). 
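The statistical workflow described in the Statistical Analysis subsection above (normality and homogeneity checks followed by one-way ANOVA and a post-hoc multiple-comparison test) can be sketched as follows. The example uses SciPy and hypothetical per-tank data for the four diet groups; note that the original analysis was run in STATISTICA with Duncan's test, for which SciPy has no built-in equivalent, so Tukey's HSD is used here purely as an illustrative stand-in.

```python
import numpy as np
from scipy import stats

# Hypothetical per-tank values of one response variable (e.g. weight gain rate)
# for the glucose, fructose, alpha-starch and cellulose groups (n = 7 tanks each).
rng = np.random.default_rng(0)
groups = {
    "glucose":   rng.normal(430, 25, 7),
    "fructose":  rng.normal(400, 25, 7),
    "starch":    rng.normal(470, 25, 7),
    "cellulose": rng.normal(350, 25, 7),
}
samples = list(groups.values())

# 1) Normality within each group (Shapiro-Wilk) and 2) homogeneity of variances (Levene).
for name, values in groups.items():
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(values).pvalue, 3))
print("Levene p =", round(stats.levene(*samples).pvalue, 3))

# 3) One-way ANOVA across the four diets.
f_stat, p_value = stats.f_oneway(*samples)
print("ANOVA: F =", round(f_stat, 2), ", p =", round(p_value, 4))

# 4) Post-hoc multiple comparisons if the ANOVA is significant
#    (Tukey's HSD here, as an illustrative substitute for Duncan's test).
if p_value < 0.05:
    print(stats.tukey_hsd(*samples))
```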
Intestine and Liver Tissue Morphology

The intestinal tissue structures in all treatments were clear and complete. The intestinal villi of the turtles fed with a starch diet were slender and mostly finger-like, while the intestinal villi of the turtles fed with a cellulose diet were short and conical (Figure 1A). Oil red O staining of liver sections revealed a higher presence of lipid droplets in the starch and glucose groups compared to the fructose and cellulose groups (Figure 1B).

mRNA Expression of Genes Related to Glucose and Lipid Metabolism

The relative mRNA expression of genes related to glucose metabolism is depicted in Figure 2A. The expressions of glut2 in the gut and liver of the cellulose group were significantly lower than those in the other groups, while the liver glut2 mRNA in the fructose group was down-regulated significantly compared to the glucose group (p < 0.001). Among all the groups, the gene expressions of the key enzymes involved in the glycolysis pathway were the lowest in the cellulose group. The hk expressions in the glucose group, as well as the hk and pk expressions in the starch treatment, were significantly elevated compared to the fructose group (p < 0.001). Genes related to gluconeogenesis in the fructose group exhibited the highest level of expression among these groups, and were significantly higher than those in the starch and cellulose groups. The g6pdh mRNA levels in the glucose and starch groups were dramatically higher than those in the fructose and cellulose treatments (p < 0.001). The expressions of gys and pyg in the glucose group were the highest among all groups.

The relative mRNA expressions of the genes involved in lipid metabolism are shown in Figure 2B. The expression of fasn in the starch group, as well as the expression of acaca in the glucose and starch groups, were significantly higher than those in the fructose and cellulose groups (p = 0.002, p < 0.001, respectively). The mRNA levels of pparα, acox1, srebp, and chrebp in the starch and glucose groups were significantly higher than those in the fructose and cellulose groups, with the cpt gene in the starch diet exhibiting the highest expression level among all treatments.
Intestinal Microbiota

The intestinal microbial diversity is presented in Table 7 and Figure 3. No significant differences were observed in Shannon, Simpson, Chao1, and ACE parameters among all groups (p > 0.05). There were 1117 OTUs in all the samples. The total OTUs in the glucose group, fructose group, α-starch group, and cellulose group were 935 OTUs, 156 OTUs, 368 OTUs, and 375 OTUs, respectively (Figure 3A). Principal coordinates analysis (PCoA) showed that the samples in each group were close to each other or were clustered together. No significant difference was found in the diversity of microbial communities among groups (Figure 3B).

As shown in Figure 4, the dominant intestinal microfloras at the phylum level were Firmicutes, Proteobacteria, and Bacteroidetes. The turtles fed with fructose and α-starch diets showed a higher abundance of Firmicutes and a lower abundance of Proteobacteria compared with those fed with glucose and cellulose diets, and the ratio of Firmicutes to Bacteroides was the largest in the starch group. A higher abundance of Proteobacteria, Actinobacteria, and Bacteroidetes was discovered in the turtles fed with the glucose diet (Figure 4A). At the genus level, Romboutsia, Clostridium-sensu-stricto-1, and Epulopiscium were the dominant microflora. The Romboutsia abundance in the starch group was the highest among all groups, while the fructose diet increased the Clostridium-sensu-stricto-1 and Epulopiscium abundance. The Clostridium-sensu-stricto-1 and Epulopiscium abundance in the glucose group was much lower than that in the other groups (Figure 4B).

Student's t-tests on genus-level taxa were performed to identify the bacterial genera exhibiting significant differences between two groups (Figure
5).The abundances of Lactobacillus, Comamonas, Bifidobacterium, and Leuconostoc in the glucose group were significantly higher than those in the fructose group (p = 0.038, 0.019, 0.004, and 0.013, respectively), and the Turicibacter abundance was significantly lower than that in the fructose group (p < 0.001).The Comamonas and Arenimonas abundances in the starch group (p = 0.021 and 0.003, respectively) and the cellulose group (p = 0.024 and 0.008, respectively) were significantly decreased compared with the glucose group.The Psedogracilibacillus abundances in the starch diet were higher than that in the fructose diet (p = 0.020).The Pseudomonas and Alcaligenes abundances in the cellulose group were significantly higher than those in the fructose group (p = 0.022 and 0.006, respectively).The Paenibacillus abundance in the cellulose group was significantly lower than that in the starch group (p = 0.012).However, the Alcaligenes and Chryseobacterium abundances were remarkably increased compared with the starch group (p = 0.033 and 0.032, respectively). Discussion The present study demonstrated that four types of carbohydrate dramatically impacted the growth performance of P. sinensis.The best growth performance and feed efficiency were found in the starch diet, followed by the glucose, fructose, and cellulose groups in sequence.This finding suggests that P. sinensis has a better utilization of digestible polysaccharides compared to monosaccharides and indigestible polysaccharides.This is consistent with our previous study on P. sinensis, although fructose was not included in it [11].Conversely, this finding is inconsistent with the results of the red-footed tortoise (Chelonoidis carbonaria), a forest-dwelling herbivorous tortoise.No significant difference was observed in the growth of the red-footed tortoise when fed with starch and fiber [31].Apart from that, there are no other research reports on the utilization of carbohydrate sources in reptiles to date.Because few reptiles have become cultured economic species, the nutritional requirements and metabolic mechanism of reptiles were not well documented [6].Previous studies have demonstrated that the metabolism characteristics of P. sinensis on nutrient utilization are more likely to resemble those of carnivorous fish [4].Similar results were also observed in gibel carp Carassius auratus [16], cobia Rachycentron canadus [12], sturgeon Acipenser schrenckii [9], Nile tilapia [32], and amur minnow Rhynchocypris lagowskii [33].As a comparison, some fish such as gilthead sea bream Sparus aurata, grass carp Ctenopharyngodon idellus, and blunt snout bream Megalobrama amblycephala [18][19][20], which are mostly herbivorous or omnivorous, preferred monosaccharides to be their optimal carbohydrate source.The different ability to utilize carbohydrate sources among these animals may be directly related to the trophic level and food habits of the species [34], specifically in terms of the diverse gastrointestinal structure, digestive and absorptive system, and metabolism regulations. 
The stunted growth performance of turtles fed with glucose and fructose diets in this study was attributed to the disorder of monosaccharide absorption and metabolism within the body.Upon ingestion, the high concentrations of monosaccharides in the glucose and fructose diets were absorbed quickly and directly into the bloodstream without prior decomposition, resulting in a reduced absorption efficiency and elevated postprandial blood glucose levels.This is supported by the findings of our study, which showed that turtles fed with glucose and fructose diets exhibited significantly lower apparent digestibility coefficients and higher plasma glucose contents compared to those fed with the α-starch diet.Comparable results have also been observed in Nile tilapia [32].Furthermore, the rapid absorption of monosaccharides is far higher than their utilization rate, which leads to prolonged postprandial hyperglycemia and the abnormal accumulation of liver glycogen [35].In the current research, the plasma glucose, HSI, and liver glycogen content of turtles fed with monosaccharide diets were significantly higher than those fed with αstarch diet.The higher HSI may be related to the increased glycogen content in the liver [16].The results of this study suggested that the turtles fed on a monosaccharide diet were more likely to convert excess glucose into glycogen and store it in the liver.Nevertheless, the slow digestion of α-starch was helpful to reduce glucose stress caused by postprandial glucose loading in aquatic animals [36].The digestible polysaccharides in feed are more easily converted into lipids compared to monosaccharides, as evidenced by the findings that the lipid retention efficiency in the starch group exceeded 150%.In the present study, the lipid contents of the whole body and liver, as well as the plasma triglyceride content, of the turtles fed with an α-starch diet were higher than those fed with glucose and fructose diets.This is consistent with the reports in grouper Epinephelus malabaricus [37] and Nile tilapia [32].Moreover, the accumulation of a large number of red lipid droplets in the liver sections of the turtles in the starch group observed in this study also further confirmed that digestible polysaccharides were preferred to enhance lipid deposition over monosaccharides.At the same time, these results further demonstrated that glucose metabolism exhibited a profound interplay with lipid metabolism, which significantly influencing the utilization ability of carbohydrates with different complexities [38]. The present study showed that the expression of genes involved in glucose metabolism was significantly affected by dietary carbohydrates sources.Compared to turtles fed with a starch diet, those fed with glucose and fructose diets down-regulated the key genes involved in glucose catabolism (such as pk), and conversely up-regulated the expression of key gluconeogenesis genes such as g6pase, pepck, and fbp.Similar results were found in the studies of cobia and Nile tilapia [12,17].These results indicated that turtles fed with a monosaccharide-based diet exhibited limited effectiveness in suppressing gluconeogenesis, ultimately worsening glucose metabolism abnormalities and subsequently leading to a decline in the growth performance and feed utilization rate of the turtles in the monosaccharide groups in this study. 
G6PDH is one of the key enzymes on the pentose phosphate pathway involved in fatty acid synthesis [7,38].In this study, the expression of g6pdh of the turtles fed with a starch diet was higher than those fed with other diets, which is consistent with the results in blunt snout bream, gilthead sea bream, grouper, and Nile tilapia [7,15,17,37].The glycolytic capacity of the turtles fed with an α-starch diet was also higher than those in the glucose and fructose groups in this study.The dihydroxyacetone phosphate produced in glycolysis can be converted into glycerol 3-phosphate.Acetyl coenzyme A, an important component of lipid synthesis, is generated through the complete dehydrogenation of both glycerol 3-phosphate and pyruvate [39].Moreover, on the pathway of lipid metabolism, the expression of the genes involved in lipid metabolism (except for acaca) were up-regulated by stimulating the expression of the srebp and chrebp genes in the livers of turtles fed with a starch diet, which indicated that the glucolipid conversion efficiency of the turtles fed with an α-starch diet was higher than that of the turtles fed with other diets.The increased expression of lipolysis-related genes (pparα, acox1, and cpt) in starch diets could reduce the risk of diseases caused by excessive fat deposition, thereby ensuring the health of the turtles.This also provides a valid molecular explanation for the above physiological observation that digestible polysaccharides are more easily converted into lipids than monosaccharides. Fructose is an isomer of glucose that is found in nature and can be converted into glucose within the body.However, fructose metabolism differs from glucose metabolism in that it is not regulated by insulin.In mammals, fructose is often considered as the primary trigger for inducing non-alcoholic fatty liver disease due to its tendency to promote lipid synthesis [40,41].Fructose has been demonstrated to facilitate triglycerides and lipid synthesis, as well as elevate the mRNA expression of the enzymes involved in glycolysis, lipogenesis, and gluconeogenesis [42,43].Nevertheless, contrary to expectations, our study did not reveal this phenomenon in P. sinensis.Instead, we observed that the lipid droplets, the LRE values, and the crude lipid contents of whole body and liver were significantly lower in turtles fed with fructose compared to those fed with glucose.Additionally, the expressions of key genes on the pathway of lipid metabolism in the liver were downregulated in the fructose group compared to the glucose group, including the expression of srebp and chrebp.Based on these observations, it is speculated that fructose may interfere with normal lipid metabolism by inhibiting the expression of the srebp and chrebp genes of the turtles.This finding aligns with reports on other aquatic animals, such as sturgeon and Nile tilapia [16,17].Up until now, no study has reported that fish can efficiently utilize fructose like glucose, even in the case of pacu (Piaractus mesopotamicus), a species of fruiteating fish [44].Therefore, the metabolic characteristics of P. sinensis on fructose utilization are more akin to those of fish, and they cannot utilize fructose as a suitable carbohydrate source.Actually, fructose primarily comes from fruits in nature, but fruits are not part of the dietary preferences of P. 
sinensis.It is conceivable that the turtle's inefficient utilization of fructose in this study stems from its long-term adaptive evolution in an aquatic habitat where fructose sources are relatively scarce. Cellulose is an indigestible carbohydrate for most animals except for herbivores.In this experiment, cellulose was included as a negative control to evaluate the growth and physiological responses of turtles when fed zero-digestible carbohydrate diets.As anticipated, the results of this study clearly showed that P. sinensis lacked the ability to digest cellulose.The lowest ADC in the cellulose group is closely related to the shorter villi observed in the histomorphological sections.All key genes involved in glucose and lipid metabolism were inactive compared to other carbohydrate sources, which resulted in the lowest digestibility, growth, feed utilization, and lipid deposition among all groups.Consequently, cellulose did not serve as an energy source for P. sinensis. The dietary component is an important factor influencing the diversity and structure of the intestinal microbiota [45].In this study, carbohydrate sources did not affect the diversity and richness of intestinal microbiota.At the phylum level, Firmicutes, Proteobacteria, and Bacteroidetes were the dominant intestinal bacterial species in P. sinensis, and similar patterns had been observed in other animal species.For example, the cecal microbiota of mice fed with dietary carbohydrates was predominantly composed of Firmicutes, and contained low levels of Bacteroidetes and Actinobacteria [46].Firmicutes and Bacteroides play an important role in host lipid metabolism, and their ratios are proportional to the ability of the host to store lipid [47].In this study, the ratio of Firmicutes to Bacteroides was the largest in the intestinal microbiota of the α-starch group.This might be the reason why the α-starch group had advantages in growth and lipid metabolism.At the genus level, the probiotic content (Paenibacillus) of turtles in the cellulose group was lower, while the content of opportunistic pathogens (Pseudomanas and Chryseobacterium) was increased compared to other groups.Pseudomonas and Chryseobacterium were common pathogenic bacteria in most aquaculture animals [48].The high abundance of Chryseobacterium might affect the growth and survival of aquatic animals [49][50][51].The overgrowth of opportunistic pathogens may be due to the damage of the intestinal microbiota barrier caused by the defect of the host immune defense system or intestinal mucosal barrier [52].The results in this study suggest that the long-term consumption of excessive cellulose (17%) will threaten the intestinal health of the turtles.To the best of our knowledge, this is the first report on the relationship between carbohydrate sources and intestinal microbiota in soft-shelled turtles.Intestinal microbiota may be an important indicator for monitoring nutritional status and physical health, and should not be ignored. Conclusions Overall (Figure 6), the soft-shelled turtle exhibited a superior utilization of starch compared to glucose and fructose.Dietary starch has a strong lipogenic function by inducing the expression of the genes involved in glucolipid metabolism, with the results of elevated plasma TG levels and increased lipid contents in both the whole body and the liver.Glucose and fructose diets caused postprandial hyperglycemia in P. sinensis due to the un-inhibited gluconeogenesis.P. 
sinensis fed with fructose did not show a higher capacity for lipid deposition than glucose, as seen in mammals. Instead, fructose interfered with the glucolipid metabolism of P. sinensis by suppressing the expression of srebp and chrebp. Cellulose did not serve as an energy source for P. sinensis.

Figure 1. Histomorphological analysis of mid-gut and liver in Pelodiscus sinensis fed with diets containing different types of carbohydrate. (A) Mid-gut H & E stained sections. (B) Liver oil red O sections.

Figure 3. Diversity analysis of intestinal microbiota in Pelodiscus sinensis fed with diets containing different types of carbohydrates. (A) Venn diagram of intestinal microbial OTUs. (B) PCoA analysis based on weighted unifrac distance. Note: G, F, S, and C represent glucose group, fructose group, α-starch group, and cellulose group; the same applies below.

Figure 4. Relative abundances of dominant intestinal microbiota in the intestine of P. sinensis fed with diets containing different carbohydrate sources. (A) Relative abundance of microbial phyla. (B) Relative abundance of microbial genera.

Figure 6. General summary for the comparative responses in P. sinensis between starch and glucose diets, as well as fructose and glucose diets. The green arrow indicates the promoting effect; the red arrow indicates the decreasing effect; the black horizontal line indicates no effect.

Author Contributions: Conceptualization, H.L.; methodology, H.L., Y.Z., and H.S.; software, H.S. and Y.Z.; validation, T.R., Q.G., X.S., and X.L.; formal analysis, H.S. and H.L.; investigation, Y.Z. and H.S.; resources, H.L.; data curation, Y.Z., H.S., Z.L., and P.Z.; writing-original draft preparation, H.S. and Y.Z.; writing-review and editing, H.L. and Z.L.; visualization, H.L. and H.S.; supervision, H.L.; project administration, H.L.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Table 1. Formulation and proximate composition of experimental diets (air-dried basis).

Table 3. Growth performance of Pelodiscus sinensis fed with diets containing different types of carbohydrate (n = 7).

Table 5. Proximate compositions of whole body and liver of Pelodiscus sinensis fed with diets containing different types of carbohydrate (wet-weight basis, %) (n = 7).

Table 6. Plasma biochemical parameters of Pelodiscus sinensis fed with diets containing different types of carbohydrate (n = 7).
Values in the same row followed by different superscript letters are significantly different (p < 0.05). GLU, glucose; TG, triacylglycerol; CHOL, total cholesterol; TP, total protein.

Table 7. Diversity indexes at the operational taxonomic unit (OTU) level in the intestinal microbiota of Pelodiscus sinensis fed with diets containing different carbohydrate sources.
2024-06-15T15:09:44.474Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "629d65a7b71d623cdbc80653a663ffbf4ed30c61", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/14/12/1781/pdf?version=1718274772", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "621b3d28cfdbaf7b5e5177b216ab671b69494f1d", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
127783735
pes2o/s2orc
v3-fos-license
Perceptions on Floods by the Population Residing in the Watershed of Ribeirão Garcia, Brazil

Any anthropic action transforms the environment. However, urbanization, a human achievement, is not in itself the "villain", nor are the affected residents merely passive actors in the environmental degradation they experience; rather, the problem lies in the model of urban occupation, construction and densification, interconnected with the ineffectiveness of the current drainage system. When evaluating programs to minimize flood impacts, the social component is a necessary but often neglected dimension. This component can be evaluated through the articulation between the actors (population and public power) and the resident community's perception of the needs and interventions. The present work seeks to analyze the perception of the community resident in the watershed of Ribeirão Garcia regarding the problems arising from urban floods. The repeated flood events occurring in the Ribeirão Garcia watershed in the city of Blumenau-SC, Brazil, make this place a potential area for an investigative process, which can subsidize future decision-making processes aimed at the adequacy of a sustainable system in the medium and long term. For the accomplishment of this work, a qualitative research approach was adopted. Fieldwork and semi-structured interviews with residents in the sample areas have been carried out to analyse land use and occupation. The procedures that supported this sample survey were divided into four steps: definition of the total population and the sample size; preparation of a questionnaire; application of the questionnaire; and tabulation of results. It can be seen that, despite the problems caused by floods and landslides to residents of all sample areas, the locals have different perceptions regarding the proposed issues, and these different perceptions are linked to the geographic location of each sample. The analysed space is full of contrasts in physical, social and economic aspects, which favours one part of the population and disfavours the other. The process of densification

Introduction

Floods are natural phenomena caused by the dynamics of nature and are intensified by anthropic intervention in the environment. The socio-environmental effects are aggravated as the land use and occupation process is inadequately carried out, in which the population, usually of low income, occupies places that are inadequate for living and is exposed to environmental and pathological risks. The last fifty years have been marked by the accelerated growth of the Brazilian urban population, which grew from around 19 million in 1950 to over 190 million in 2010 [1]. However, during this period, state investments in urban infrastructure were insufficient. This gap has compromised environmental quality in urban settlements, directly affecting water resources.
The discussions and proposals regarding the urban infrastructure of Brazil have gradually evolved since the 1970s, especially when problems related to flooding in urban areas intensified. However, only in the 2000s did the maturing of these discussions allow the instrumentalization of tools proposed to mitigate the problems of "urban waters" from a modern perspective. In the evolution of this process, the community's perception in relation to this topic emerges as a question to be investigated.

Although often disregarded, the perception of the social component is a necessary dimension in the evaluation of programs aimed at minimizing the impacts of floods. This component can be evaluated through the articulation between the actors (population and public power) and the resident community's perception in relation to risk, needs and interventions [2] [3] [4].

During the last few decades, a growing number of researchers have tried to respond to numerous questions by examining the opinions expressed by people when asked to assess the dangers to which they are, or may be, subject in the future. Every individual, when faced with a risk situation, tends to respond based on their beliefs, background knowledge and experience [5]. According to [6], the concept of risk is extremely complex because, in addition to the scientific factors, it is intrinsically associated with social elements and their perception. Risks involve many uncertainties that are difficult to measure.

The research team of the Decision Research center in Oregon, directed by Paul Slovic and Baruch Fischhoff, was one of the pioneers in this field of research. The work developed by this team showed that risk assessments by lay people do not resemble those of experts, because people in their day-to-day lives do not make probability estimates, and therefore their thinking can never be summed up from a one-dimensional perspective [7]. This line of research also had the merit of helping to establish this view among decision makers, decisively influencing new strategies for disseminating information and technical knowledge among the population.

In the current national context of environmental management, and specifically of water resources, participatory and decentralized processes are prioritized and must incorporate civil society. Therefore, the opinion and perception of the population affected by the problems and interventions can be an important tool in conducting processes of urban transformation and the (re)formulation of public policies. Based on this assumption, the present work seeks to analyze the perception of the community resident in the watershed of Ribeirão Garcia as to the problems arising from urban floods. The repeated flood events occurring in the Ribeirão Garcia watershed, Blumenau, Brazil, make this area a potential space for an investigative process, which can support future decision-making processes aimed at the adaptation of a sustainable system in the medium and long term.

Characterization of Study Area

The Ribeirão Garcia Watershed is located in the south of Blumenau City (SC), in the low valley of the Itajaí-Açu River, with a territorial dimension equivalent to 158.9 km², representing approximately 30% of the total surface of the municipality. It is located between latitudes 26°55' and 27°08' S and longitudes 49°01' and 49°10' W of Greenwich, in UTM zone 22 (Figure 1). The drainage system of Ribeirão Garcia is developed on the right bank of the Itajaí-Açu River, with the confluence of the water courses occurring within the urban site of the municipality of Blumenau. The Garcia Stream is approximately 41.7 km long from its main source to its mouth at the Itajaí-Açu River. It crosses the city from south to north, comprising almost the whole southern region. The headwaters of its main forming streams are located in an area
Characterization of Study Area The Ribeirão Garcia Watershed is located in the south of Blumenau City (SC), in the low valley of the Itajaí-Açu River with a territorial dimension equivalent to 158.9 km 2 , representing approximately 30% of the total surface of the municipality.It is located between the co-ordinates: 26˚55' and 27˚08' of South Latitude and 49˚01' and 49˚10' of Greenwich West Longitude, spindle 22 (Figure 1). The drainage system of Ribeirão Garcia is developed on the right bank of the Itajaí-Açu River, with the confluence of the water courses occurring within the urban site of the municipality of Blumenau.The Garcia Stream is approximately 41.7 km long from the main source to the mouth of the Itajaí-Açu River.It crosses the city from the south to the north, comprehending almost totally the southern region.The headwaters of its main forming streams are located about with 5300 ha of extension in the highest parts of the watershed (Figure 2). The urban space of Blumenau presents notable differences in topography and relief morphology, with altimetric amplitudes and systems of steeper slopes to the south of the city.The Garcia Valley, especially the low course, constitutes a landscape strongly conditioned by processes of anthropogenic derivation and is considered the most critical area of the municipality.The anthropic action has aggravated the flood events and intensified the processes of landslides. Geologically, the river watershed of Ribeirão Garcia is positioned on three distinct stratigraphic units (Figure 3(a)): to the south, Itajaí Group, composed of sandstones of the Gaspar Formation; secondly, the argillite and layered siltstones of the Campo Alegre Formation; in the south-extreme, the Brusque Metamorphic Complex, composed of rocks with low metamorphic degree, represented by shale and basements of the Itajaí-Flank South Group [8] [9].From a hydrological point of view, the Itajaí group is an important aquifer because it has high porosity and permeability.Geomorphologically, the Ribeirão Garcia Watershed is installed on the Santa Catarina shield (Figure 3(b)), where transcurrent geological faults create the V-shaped valleys, characterized by steep slopes and deep valleys, where the main rivers flow. The Ribeirão Garcia watershed presents a marked upward slope, where the altimetric bands are superior to 900 m and are located in the Itajaí mountain range.Altitudes lower than 50 m are more representative in the vicinity of the Ribeirão Garcia exudate (Figure 4). From the pedological point of view (Figure 5(a)), the non-hydromorphic Alic Cambisols (aluminum saturation ≥ 50%) predominate in the watershed, characterized by an incipient B horizon, low textural gradient and the medium to high silt/clay.Soils usually have sequences of horizons A, B and C, with variations in depth, colour, texture and structure [10].Considering the Plant Cover, the Ribeirão Garcia Watershed presents a good natural vegetation cover (Atlantic Forest) in the most mountainous parts, concentrated in areas of the Faxinal Farm, Spitzkopf Hill, Artex and extensive areas of pine and eucalyptus reforestation, as well as recent reforestation of several palm tree species (Figure 5(b)). The high altimetric range of Blumenau municipality and the proximity to the Atlantic coast (40 km) create climatic conditions typical of the city and, consequently, of the Ribeirão Garcia watershed. 
According to Wladimir Köppen's classification, the region of Blumenau fits the Cfa climate type. In climates of group C (climate group indicator), the temperature of the coldest month varies between 18˚C and −3˚C, whereas the temperature of the warmer months remains above 10˚C. The Cf climate type (according to the rainfall regime) presents rains equally distributed throughout the year, with no dry season.

The urban area of the watershed occupies approximately 14.06 km². Most of the population is concentrated in the central portion of the Ribeirão toward the mouth, inhabiting the following neighbourhoods: Garcia, Progresso, Gloria, Valparaiso, Ribeirão Fresco, Vila Formosa, Centro and Jardim Blumenau (Figure 6). The Ribeirão Garcia Watershed is the most populous area of the municipality, with 47,577 inhabitants (Table 1), and a large part of this population suffers from the floods. Blumenau City Hall considers that the Garcia district has reached its limit of occupation (spatial limit), registering a true demographic explosion towards the hilltops along Itapuí Street, which delimits it [11].

As for the economic aspects, according to Blumenau [11], the primary sector of agriculture is developed on small farms, with horticulture and still significant crops of banana, corn, cassava, rice, sweet potatoes and others. With respect to the industrial sector, Blumenau has approximately 1750 industries, with textiles, clothing, metallurgy and civil construction being the most significant branches. Regarding mineral resources, specifically, the clay slates of the Itajaí Group may be significant for floor paving. The main human activities found in the rural portion of the watershed are concentrated in subsistence agriculture, livestock, fish farming and leisure.

Methods and Materials

For the accomplishment of this work, a quantitative research approach was adopted. Fieldwork and semi-structured interviews with the residents of the sample areas were carried out to analyze land use and occupation. The procedures that supported this sample survey were divided into four stages: definition of the total population and sample size; preparation of a questionnaire; application of the questionnaire; and tabulation of results. The definition of the local population was based on the population projection per district for the Municipality of Blumenau, conducted by the Brazilian Institute of Geography and Statistics (IBGE) for the year 2010 and presented in Table 1. For this work the sampling scheme was based on representative units, since it offers greater ease of sample selection and greater precision when compared to a random sampling scheme. It should be emphasized, however, that sample surveys can also be used to monitor social perception and changes in behaviour in relation to urban drainage, especially after prevention and restoration programs. The time needed to detect such transformations is uncertain, and in this case a continuous sampling process that allows temporal monitoring of the study area should be applied [12].
According to Cochran [12] and Bolfarine and Bussab [13], the sampling pattern is systematic, that is, chosen according to a classification. Representative samples were selected from a pre-categorization of the urban basin area into three landscape units: high, middle and low Garcia, because the geographical position of the sample changes the perception of the resident community (geographical perception). Factors such as relief, land use and occupation, macro-zoning and zoning, and the value plan were considered in this categorization. Six homogeneous zones/samples were selected (Figure 7): two located at the top (samples 5 and 6), two in the middle (samples 3 and 4) and two in the low Garcia (samples 1 and 2), each with an area of approximately 234,406.00 m², so that each sample corresponds to approximately 1.5% of the total urbanized area of the watershed, totalling 10% of the total urbanized area (base year 2009). Six sampling zones were chosen because natural disasters, among them flooding, affect people in different ways depending on their geographical location in the watershed. The large altimetric amplitude between the high and the middle Garcia conditions rapid runoff along the slopes, causing the water to reach the lower areas at great speed and thus generating greater damage. This dynamic generates different intensities of damage and different perceptions.

The samples were physically characterized in terms of their location, geology, geomorphology, pedology and drainage network, and were also characterized according to the attributes established in the Municipal Master Plan, such as Macrozoning, Zoning and the Value Plan (Table 2). Under the Macrozoning, the Consolidation Area covers already urbanized areas, whose occupation will be achieved by intensifying land use in a balanced way in relation to services, infrastructure, equipment and environment, in order to avoid idleness or overloading and to optimize collective investments; the Controlled Densification Area comprises areas, urbanized or not, whose occupation needs to be controlled owing to geological, topographical, hydrological and urban factors. In the study areas, municipal elementary schools were selected and named sample bases (Table 3), located in each selected sample area.

The community assessment was performed through a questionnaire. At each sample base, a questionnaire was given to the students, to be completed by their parents. The questionnaire was designed to identify two main aspects: the resident community's environmental perception in relation to urban drainage, and the community's perception of state action (Figure 8).
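As a quick cross-check of the sampling figures described above, the short sketch below recomputes the share of the urbanized area covered by the sample zones and the questionnaire return rate reported later in the Results. The numbers are taken from the text; the snippet itself is only an illustrative calculation and is not part of the original study.

```python
# Cross-check of the sampling proportions quoted in the text (illustrative only).
sample_area_m2 = 234_406.0            # area of each of the six sample zones
urban_area_m2 = 14.06 * 1_000_000     # urbanized area of the watershed, ~14.06 km2

share = sample_area_m2 / urban_area_m2
print(f"each sample zone covers ~{share:.1%} of the urbanized area")
print(f"six sample zones cover ~{6 * share:.1%} of the urbanized area in total")

# Questionnaire return rate reported in the Results section
applied, returned = 3450, 2193
print(f"return rate: {returned / applied:.2%}")   # ~63.57%
```

Six zones of this size amount to roughly 10% of the 14.06 km² urbanized area, consistent with the total quoted in the text, although the per-zone share works out slightly above the quoted 1.5%.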
The results are related to the main directions of the questionnaire: the resident community's environmental perception in relation to the urban drainage and the community's perception in relation to the state action (Figure 9). The interviewees, residents in the districts inserted in the Ribeirão Garcia watershed, which in many cases suffer with the flooding problem, showed a great deal of discontent with the situation which they are subjected to.Many of them verbally reported that they had filed lawsuits against Blumenau City Government, requesting measures that would alleviate the problems.The added that the measures taken by the public power were insufficient or inefficient and that the problem persists. It can be noticed that, despite the problems of floods and landslides to residents of all sampled areas, the inhabitants have different perceptions regarding the proposed issues and that these different perceptions are linked to the sample geographic location (Figure 10). The interviewees residing in samples 5 and 6, located more upstream of the Ribeirão Garcia watershed and whose residences present a constructive pattern considered of medium and high standard, recognize their own contribution to the flood process, but blame the public power for neglecting the maintenance of urban micro and macro drainage systems and question the occupation model adopted by the public power for the areas in which they are inserted.However, the reality presented by this first group is totally different from that of the residents of samples 1, 2, 3 and 4, where the interviewees' houses were built on the banks of streams, in waterways (thalwegs) and areas subject to landslides, in many cases without any planning, security or infrastructure.Large parts of the residents in the streets inserted in these samples are in irregular subdivisions and have shown a great desire to be relocated by the city government to other areas of the city.It is noticed that, to a large extent, these residents blame the public power for the problems to which they are subjected to and do not recognize their own contribution to this process.The analyzed space is full of contrasts, in physical, social and economic aspects, which favours one part of the population and disfavours another, since the process of densification and over-occupation of inadequate areas has been one of the negative effects of a disorganized housing sector, a speculative real estate market and different levels of infrastructure among neighbourhoods, consolidating, in many aspects, a process of social exclusion and spatial segregation. The presence of subnormal occupations occurs mainly in the areas farthest from the watershed, making it even more difficult to implement infrastructure and sanitation for these families.However, there are small outbreaks of subnormal occupations in the middle of areas near the central region of the municipality, in lands with high added value that resist to real estate speculation. It can be verified that the lack of efficient and effective urban planning has caused disastrous environmental effects, besides the worsening of social disparities and loss of population's living quality.Urban sustainability is a priority and must be built day-to-day, and part of that construction is based on the legitimacy of public policies that must be constantly updated.These policies must adapt to the demands of urban services as well as to social and environmental demands. 
Understanding the problem of floods as a process depends mainly on understanding the (non-linear) history of their production, the model of urban development, and the perception that the resident population has of the problem.

The present urban network of the city of Blumenau and, consequently, of the Ribeirão Garcia watershed is strongly conditioned by the colonial land structure. It still perpetuates allotments with a single street, without secondary streets, laid out perpendicular to the contour lines. Although urban legislation seeks to improve the conditions for the implantation of allotments, the inheritance of the colonial period, that is, the land structure, associated with the relief of the city, prevents effective improvements. One must not forget that the occupation of space, especially when it does not comply with spatial planning, can aggravate situations of risk. In the case of Blumenau, the form of occupation of the territory undoubtedly increased the exposure to the risk of floods and landslides.

It is worth noting that infrastructure is decisive in mitigating or amplifying the consequences of a natural disaster. Infrastructure networks, such as roads and bridges, that allow people, goods, services and information to circulate, as well as the means of rescue and emergency response, can determine, in areas of equal susceptibility, different degrees of vulnerability on the part of the population. However, public power alone does not represent a panacea for the solution of these socio-environmental problems. Together with efficient public policies, another sphere must act for an effective change of the situation, and this sphere is represented by education. People's attitude toward nature may change over time. The resident population in the Ribeirão Garcia watershed is one of the facets of the flood problem in that area, since it contributes to increasing the frequency and intensity of flooding, due to a lack of knowledge and of a public housing policy aimed at the low-income population. Studies of environmental perception are important tools and support the formulation of guidelines aimed at the implementation of Environmental Education work, in which a change in the scale of attitudes and a rescue of values can be promoted, leading to behavioural changes and social transformations. Environmental Education plays an important role in the reflection on environmental problems through social awareness, which implies a process of reflection and understanding of environmental processes, leading to people's participation and the recovery of citizenship in decision-making processes. It also allows people to develop a more comprehensive vision, through which attitudes and skills are developed, aiming at the citizens' critical and participatory action in the environment they are inserted in and interact with.

Conclusions

The method adopted allowed a representative portion of the community to be covered, and the collected data were sorted and structured, allowing them to be transformed into useful and credible information. Such information can help the public authorities to direct their actions on a more comprehensive and consistent basis, as well as to plan and develop fact-based projects reflected in data. The analysis of the resident community's perception of the Ribeirão Garcia watershed shows that the communities most susceptible to this type of event are those of low income, located in risk areas. It also demonstrates that, although

Figure 1. Location map of the Ribeirão Garcia watershed in the municipality of Blumenau.
Figure 5. (a) Soil map of the Ribeirão Garcia Watershed and (b) forest cover map.
Figure 6. Neighbourhoods of the Municipality of Blumenau/SC located in the watershed of Ribeirão Garcia.
Figure 10. Longitudinal profile of the Ribeirão Garcia watershed with spatialization of the sample areas.
Notes to Table 2. (2) Zoning: Residential Zone 1 (ZR1): a territorial area characterized by low density, with height limitation. Residential Zone 2 (ZR2): a territorial area with low density, without height limitation. Special Location Zone (ZLE 1): a territorial space considered important for the development of the city, intended for cultural protection and/or the development of tourist attractions and relevant landmarks. The urbanization coefficients of Residential Zones 1 and 2 and of the ZLE 1 Special Zone, as well as the proportion of each zone covered by the zoning in each sample area, are presented in Annex 3. (3) ZF: Fiscal Zone.
Table 1. Area in km² and projected population (total), number of households and density (inhabitants per km²), by the new neighbourhood division (Blumenau, 2010), located in the Ribeirão Garcia Watershed.
Table 2. Characterization of sample zones.
Table 3. Sample bases and their addresses.
Table 4. Interviewees' age groups and levels of education.
2019-02-12T15:34:51.804Z
2019-01-23T00:00:00.000
{ "year": 2019, "sha1": "fcfd2a0e23905bc1b4b4a63051a7a03baf0758bd", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=90160", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "fcfd2a0e23905bc1b4b4a63051a7a03baf0758bd", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
265791475
pes2o/s2orc
v3-fos-license
Long-term disease-free survival following comprehensive involved site radiotherapy for oligometastases Introduction Despite recent advances in drug development, durable complete remissions with systemic therapy alone for metastatic cancers remain infrequent. With the development of advanced radiation technologies capable of selectively sparing normal tissues, patients with oligometastases are often amenable to comprehensive involved site radiotherapy with curative intent. This study reports the long-term outcomes and patterns of failure for patients treated with total metastatic ablation often in combination with systemic therapy. Materials and methods Consecutive adult patients with oligometastases from solid tumor malignancy treated by a single high volume radiation oncologist between 2014 and 2021 were retrospectively analyzed. Oligometastases were defined as 5 or fewer metastatic lesions where all sites of active disease are amenable to local treatment. Comprehensive involved site radiotherapy consisted of stereotactic radiotherapy to a median dose of 27 Gy in 3 fractions and intensity modulated radiation therapy to a median dose of 50 Gy in 15 fractions. This study analyzed overall survival, progression-free survival, patterns of failure and toxicity. Results A total of 130 patients with 209 treated distant metastases were treated with a median follow-up of 36 months. The 4-year overall survival, progression-free survival, local control and distant control was 41%, 23%, 86% and 29%. Patterns of failure include 23% alive and free of disease (NED), 52% distant failure only, 9% NED but death from comorbid illness, 7% both local and distant failure, 4% NED but lost to follow-up, 4% referred to hospice before restaging, 1% local only failure, 1% alive with second primary cancer. Late grade 3+ toxicities occurred in 4% of patients, most commonly radionecrosis. Conclusion Involved site radiotherapy to all areas of known disease can safely achieve durable complete remissions in patients with oligometastases treated in the real world setting. Distant failures account for the majority of treatment failures and isolated local failures are exceedingly uncommon. Oligometastases represents a promising setting to investigate novel therapeutics targeting minimal residual disease. Introduction There is great enthusiasm for advances in drug development targeting distant metastases from solid tumors in the mainstream media (1).Despite significant progress, metastatic cancer remains largely incurable and results in approximately 90% of cancer deaths (2,3).Following treatment with either immunotherapy or molecularly targeted systemic therapies alone, responses are uncommon benefiting less than 13% of all cancer patients (4,5).Published evidence dating to the late 2000's established the longterm curative potential of radiation therapy to all areas of known disease for patients with oligometastases (6)(7)(8).Two randomized trials demonstrated improved progression-free survival and overall survival when comprehensive local consolidative therapy is added to systemic therapy alone for patients with oligometastases from non-small cell lung cancer or mixed primary tumors (9,10).By contrast, adding stereotactic radiation to some but not all distant metastases fails to improve outcomes compared to immunotherapy alone (11)(12)(13). 
In the real world setting, patients with less than 5 distant metastases represent approximately 30% of patients requiring radiation therapy for metastatic disease (14).While much of the published evidence of radiation therapy for extracranial oligometastases focused narrowly on stereotactic body radiation therapy for well selected patients, real world patients include clinical presentations requiring alternative modes of radiation therapy including stereotactic radiosurgery for brain metastases or intensity modulated radiation therapy for a bulky primary tumor and regional nodes (9,15,16).We hypothesized that the development of advanced radiation technologies capable of sparing normal tissues at risk along with appropriate risk stratification would allow for the safe and effective application of comprehensive involved site radiation for a broader group of patients with oligometastases seen in the context of a busy community hospital practice (17). Patient selection This study was approved by the Good Samaritan University Hospital IRB #16-016 with waiver of informed consent.The study population included consecutive patients ≥18 years of age with pathologically confirmed solid tumor malignancy with oligometastases referred to a single high volume radiation oncologist.Oligometastases were defined as 5 or fewer active metastatic lesions on whole body imaging where all sites of active disease, including the primary tumor and involved regional lymph nodes, were amenable to treatment.For patients with metachronous metastases, the primary tumor was controlled with prior local therapy.Whole body imaging included PET/CT, CT chest, abdomen and pelvis, bone scan or MRI of the brain or spine as per National Comprehensive Cancer Network guidelines for specific primary tumors.Relevant baseline patient characteristics include ECOG performance status, primary tumor and histology, pre-treatment serum albumin, ESTRO/EORTC oligometastatic disease classification, age, gender, metastasis site, number of metastases treated, cumulative GTV volume, radiation dose and number of fractions for each treatment site, whether the primary tumor was also treated and systemic therapy (18).Diverse radiation dose schedules to primary tumor and metastases were converted to a biological equivalent dose (BED) using the formula BED = D x [1+d/(a/b)] where D is total dose delivered, d is dose per fraction and a/b=10 for malignant tumors.This information was extracted from retrospective EPIC and Aria chart review. 
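The dose-conversion formula quoted above is simple enough to express directly in code. The sketch below is only an illustration of that calculation, using an alpha/beta ratio of 10 Gy as stated in the text; the fraction schedules plugged in as examples are the median stereotactic and image-guided schedules reported later in the paper, and the helper function name is ours, not the study's.

```python
def bed(total_dose_gy: float, dose_per_fraction_gy: float, alpha_beta_gy: float = 10.0) -> float:
    """Biologically equivalent dose: BED = D * [1 + d / (alpha/beta)].

    D = total delivered dose (Gy), d = dose per fraction (Gy);
    alpha/beta = 10 Gy for malignant tumors, as used in the study.
    """
    return total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

# Median schedules reported in the Results: 27 Gy in 3 fractions (stereotactic)
# and 50 Gy in 15 fractions (image-guided radiation therapy).
print(round(bed(27.0, 27.0 / 3), 1))    # ~51.3 Gy
print(round(bed(50.0, 50.0 / 15), 1))   # ~66.7 Gy
```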
Treatment and follow-up Patient immobilization was highly personalized based on location.All patients underwent CT simulation and contouring and external beam radiotherapy treatment planning was performed on Eclipse.When appropriate, MRI, PET or CT with contrast was imported and fused to assist with accurate target delineation.Depending on location, volume and organs at risk, intensity modulated radiation therapy, stereotactic body radiotherapy or stereotactic radiosurgery was prescribed with PTV expansions as appropriate.The GTV (or ITV for tumors with organ motion) received ≥100% of the prescribed dose and the PTV received a minimum of ≥95% of the prescribed dose.When conflicting, organ at risk dose limits were prioritized over PTV coverage.Imageguided radiation therapy was delivered on the Varian TrueBeam or Varian Edge equipped with a 6-degree of freedom robotic couch and cone beam CT.Brachytherapy planning was performed on Oncentra and delivered using Nucleotron high dose rate brachytherapy.A small subset of patients underwent surgery (most commonly craniotomy) or interventional radiology ablation in addition to radiotherapy. Systemic therapy was administered at the discretion of the treating medical oncology and/or urologist.Prior to radiation, 68% were not actively receiving systemic therapy while 32% received systemic therapy with diverse treatment regimens (Supplementary Table 1).During or following radiation, 74% received systemic therapy and 26% received no systemic therapy.Systemic treatment regimens included 18% chemotherapy alone, 15% hormonal therapy with or without androgen receptor inhibitor or CDK4/6 inhibitor, 12% immunotherapy alone, 10% chemotherapy combined with biologically targeted therapy, 9% biologically targeted therapy alone, 9% chemoimmunotherapy and 2% hormonal therapy with chemotherapy or targeted therapy. During radiotherapy, patients were assessed weekly.Following radiotherapy, patients were followed by radiation oncology and medical oncology using EPIC and supplemented by tumor imaging and blood work.In the community hospital setting, follow-up is quite robust with scheduled outpatient follow-up supplemented by a daily inpatient huddle jointly attended by both medical oncology and radiation oncology. Outcomes The primary endpoints were overall survival and progressionfree survival using the Kaplan Meier method measured from date of consultation until death or most recent follow-up.Potential predictors of survival were assessed using the log-rank test using cutpoints validated in the published literature.Variables with a p value of <0.10 were entered into Cox multivariable regression analysis.Treatment failures were further classified to estimate local control and distant control on a per patient basis.Patient and treatment characteristics were reported with median and interquartile ranges (IQR) for continuous variables.Acute and late toxicities were scored using the Common Terminology Criteria for Adverse Events (CTCAE) version 5.0.Statistical analysis was performed using Stata version 13.1. Patient and treatment characteristics Between 1/2014 and 12/2021, a total of 130 patients with 209 targeted distant metastases were treated by a single radiation oncologist.Patient and disease characteristics were summarized in Table 1.The most common primary tumors were lung (35%), prostate (12%) and breast (9%).The median follow-up among surviving patients was 35.2 months (IQR 19.5 to 64.1 months). 
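The outcome analysis described above (Kaplan-Meier estimates, log-rank screening of candidate predictors at p < 0.10, then Cox multivariable regression) can be sketched as follows. The study itself used Stata 13.1; the snippet below uses the Python lifelines package purely as a stand-in, and the toy data frame, column names and groupings are invented for the example rather than taken from the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up in months, death indicator,
# and candidate predictors (all values and names are illustrative).
df = pd.DataFrame({
    "months":    [12.0, 35.0, 64.0, 20.0, 48.0, 7.0, 26.0, 55.0],
    "died":      [1, 0, 0, 1, 0, 1, 1, 0],
    "gtv_large": [1, 0, 1, 1, 0, 0, 1, 0],   # e.g. cumulative GTV above a cut-point
    "systemic":  [1, 1, 0, 0, 1, 1, 0, 1],   # received systemic therapy
})

# Kaplan-Meier estimate of overall survival
km = KaplanMeierFitter()
km.fit(durations=df["months"], event_observed=df["died"])
print(km.survival_function_)

# Log-rank screening between two groups (variables with p < 0.10 are retained)
a, b = df[df["gtv_large"] == 1], df[df["gtv_large"] == 0]
res = logrank_test(a["months"], b["months"],
                   event_observed_A=a["died"], event_observed_B=b["died"])
print(res.p_value)

# Cox multivariable regression on the candidate predictors
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```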
Radiation technique included stereotactic radiation for 69 patients to a median dose of 27 Gy (IQR 27 to 33 Gy) in a median of 3 fractions (IQR 3 to 4).Image-guided radiation therapy was administered to 84 patients to a median dose of 50 Gy (IQR 45 to 59.4 Gy) in a median of 15 fractions (IQR 10 to 28 fractions).Brachytherapy was delivered to 2 patients to a median dose of 24 Gy in a median of 4 fractions.Treatment of the primary tumor ± regional lymph nodes was administered to 47% of patients.The median cumulative GTV volume was 44.1 cc (IQR 14.1 to 117.1 cc).An example of the treatment technique and follow-up is shown in Figure 1. Patterns of failure include 23% alive and free of disease (NED), 52% distant failure only, 9% NED but death from comorbid illness, 7% both local and distant failure, 4% NED but lost to follow-up, 4% referred to hospice before restaging, 1% local only failure, 1% alive with second primary cancer.Specific causes of comorbid death are listed in Supplementary Table 4.Among the 30 patients who remain alive and NED, 8 patients did not receive systemic therapy and the most common primary tumors were 9 patients with non-small cell lung cancer, 5 patients with prostate adenocarcinoma and 5 patients with breast adenocarcinoma. Toxicity Toxicities for all patients are summarized in Table 2. Grade 1 to 2 acute toxicities were recorded in 38% of patients.High grade acute toxicities included 1 patient with grade 3 skin toxicity and 1 patient with esophageal cancer and distant lymph node metastases who experienced grade 5 cardiac complications following esophagectomy with pathologic complete response (Table 2).Late grade 2 toxicities included 2 patients with radionecrosis, 1 patient with grade 2 vaginal stenosis, 1 patient with grade 2 pneumonitis, 1 patient with grade 2 erectile dysfunction and 1 patient with grade 2 urinary toxicity.Late grade ≥3 toxicities included 3 cases of grade 3 radionecrosis requiring surgery, 1 case of orthopedic screw fixation fracture and 1 case of grade 3 rectal bleeding (Table 2).The 4-year cumulative incidence of late grade ≥3 toxicity rate was 5% (95% CI, 2-12) (Figure 2E). Discussion The concept of curative intent radiation therapy to all areas of known metastatic disease was first proposed by Hellman and Weichselbaum in 1994 (19).By safely irradiating all areas of known disease, usually in combination with systemic therapies, a small but reproducible minority of previously incurable patients achieve long-term complete remissions (3,10,20).In this large single physician experience of comprehensively treating 130 patients with limited metastatic disease from 2014 to 2021, 30 patients are not only alive but without evidence of disease.While prior studies focused on the treatment of extracranial oligometastases treated with stereotactic body radiotherapy, this large real world experience included patients with oligometastases requiring treatment of the primary site, regional nodes and brain metastases (15). 
In the authors' opinion, this study better represents the entire spectrum of oligometastases in the context of patients with distant metastases referred to radiation oncology. Despite using lower biologically equivalent doses than prior studies that focused exclusively on stereotactic body radiotherapy, involved site radiation achieved durable targeted metastasis control in 86% of patients with oligometastases with an acceptable toxicity profile. Prior studies reported 63 to 87% local control at 3 to 5 year follow-up, although comparisons across studies are unreliable due to heterogeneity (10, 21-24). The Duke University group reported ~90% tumor metastasis control at a median follow-up of 2 years for oligometastasis patients treated to 50 Gy in 10 fractions (16). Taken together, these data expand access to effective local oligometastasis treatment for the many clinical presentations not amenable to stereotactic body radiotherapy, including those with bulky disease immediately adjacent to organs at risk. The median GTV treated in this series was 44.1 cc vs. 8.2 cc in a large multi-institutional oligometastasis database focused exclusively on stereotactic body radiotherapy (25).

While drug development continues to progress for many solid tumors, systemic therapy alone for distant metastases is generally not curative and may induce therapy-resistant genomic driver mutations (26,27). In this series, the majority of treatment failures were the result of the development of new metastatic tumors despite advances in systemic therapy. Since isolated local failures are exceedingly rare, oligometastases may be an appropriate population to test novel therapeutics targeting either minimal residual disease or dormant micrometastases (3). Immune checkpoint inhibitors appear more effective against primary tumors and micrometastases compared to macrometastases (26). Durvalumab as consolidative treatment for stage III lung cancer following chemoradiation improves long-term overall survival and represents a potential model for this drug development strategy (28). Systematically combining comprehensive involved site radiotherapy with more effective systemic therapies represents a highly promising alternative to drug therapy alone for patients with oligometastases. Systemic therapy alone remains the standard approach for patients with >5 distant metastases, with radiotherapy reserved for palliation of symptoms, since subtotal metastatic ablation does not appear to alter the natural history of polymetastases (11)(12)(13).

As a single institution retrospective series of oligometastases, the patient population is inherently heterogeneous and the sample size is relatively small. The small sample size undoubtedly contributed to the inability to reject the null hypothesis for potential predictors of progression-free survival and overall survival, including radiation dose intensity, cumulative GTV volume, adjuvant systemic therapy, synchronous vs.
metachronous metastases and number of metastases (Table 1, Supplementary Table 2).For the majority of these variables, there was a large numerical difference in progression-free survival but this failed to reach statistical significance.Additionally, hepatobiliary primary tumors appears to be an unfavorable primary site but did not reach statistical significance on multivariable analysis.There was no systemic therapy alone control arm so it is possible that the long-term disease-free survival and overall survival would have been similar with systemic therapy alone.Generalizability and scalability are always valid critiques of any single physician experience.On the other hand, it is well established that including opinions from a diverse group improve decision making by avoiding groupthink and the perspective of the community practitioner in academic discourse should not be ignored (29). Although single institution and particularly single physician series are not currently in vogue, this study design has counterintuitive strengths.In contrast to large academic centers, radiation oncologists in community practice are generalists that result in practical advantages for treatment and follow-up for with oligometastases.High volume general radiation oncologists are facile at safely and effectively administering radiation therapy throughout the body so there is no fragmentation of care between anatomic sites (30).Since distance travelled is reduced for patients choosing care at the community setting, these suburban patients are more likely present to their local hospital rather than the urban academic medical center for acute hospitalization thus enhancing the completeness of follow-up in the context of distant metastases (31).In the specific case of specialized cancer specific hospitals, there may not be an associated emergency room and they generally will not share a common electronic medical record platform with the local primary care provider or other non-oncology specialists (32).While retrospective series are not typically associated with complete and deep record keeping, as a single physician series, these patients are extremely well known to the physician over a period of years (33).It seems likely that followup quality will be more complete than multi-institutional databases reflecting the experience of a large number of providers (34).Finally for radiation oncologist with extensive experience with treatment distant metastases, selection of curative intent comprehensive metastatic ablation was informed not only by technical feasibility but also by prognosis using a validated model to supplement clinical intuition (17,35). In conclusion, involved site radiotherapy to all areas of known disease can safely achieve durable complete remissions in >20% of patients with oligometastases treated in the real world setting.Distant failures account for the majority of treatment failures and isolated local failures are exceedingly uncommon.Oligometastases represent a promising setting to investigate novel therapeutics targeting minimal residual disease. 3. 
FIGURE 1 Stage IIIB rectal cancer initially treated with total neoadjuvant therapy followed by sphincter-sparing surgery with pathologic complete response, followed by 2 additional cycles of adjuvant FOLFOX. While on surveillance, the patient presented with an elevated CEA of 18. PET/CT demonstrated a new 3.5 cm retrocrural node with a SUVmax of 3.5 without additional areas of FDG-avid disease. Biopsy confirmed metastatic rectal adenocarcinoma. (A) Treated with involved site radiotherapy to 50 Gy in 10 fractions to the PET-positive node while covering PET-negative prominent paraaortic nodes to 40 Gy in 10 fractions. (B) Radiation plan demonstrating selective sparing of uninvolved bowel, liver, kidneys and spinal cord. (C) Restaging 6-month PET/CT negative. Remains on surveillance off therapy more than 3 years after treatment with a recent CEA of 1.3, undetectable circulating tumor DNA and negative CT and MRI.
FIGURE 2 (A) Overall survival for patients with oligometastases. (B) Progression-free survival for patients with oligometastases. (C) Local control for patients with oligometastases. (D) Distant control for patients with oligometastases. (E) Late grade ≥3 toxicity for patients with oligometastases.
TABLE 1 Characteristics of 130 patients with oligometastases.
2023-12-07T16:15:31.925Z
2023-12-05T00:00:00.000
{ "year": 2023, "sha1": "8a9fd5212c041e121b178b9de104cc0daa7c3950", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2023.1267626/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27741a7b687639bfb0ee21d23bff6e9b8e95c82c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6058315
pes2o/s2orc
v3-fos-license
Forced Expression of ZNF143 Restrains Cancer Cell Growth We previously reported that the transcription factor Zinc Finger Protein 143 (ZNF143) regulates the expression of genes associated with cell cycle and cell division, and that downregulation of ZNF143 induces cell cycle arrest at G2/M. To assess the function of ZNF143 expression in the cell cycle, we established two cells with forced expression of ZNF143 derived from PC3 prostate cancer cell lines. These cell lines overexpress genes associated with cell cycle and cell division, such as polo-like kinase 1 (PLK1), aurora kinase B (AURKB) and some minichromosome maintenance complex components (MCM). However, the doubling time of cells with forced expression of ZNF143 was approximately twice as long as its control counterpart cell line. Analysis following serum starvation and re-seeding showed that PC3 cells were synchronized at G1 in the cell cycle. Also, ZNF143 expression fluctuated, and was at its lowest level in G2/M. However, PC3 cells with forced expression of ZNF143 synchronized at G2/M, and showed lack of cell cycle-dependent fluctuation of nuclear expression of MCM proteins. Furthermore, G2/M population of both cisplatin-resistant PCDP6 cells over-expressing ZNF143 (derived from PC3 cells) and cells with forced expression of ZNF143 was significantly higher than that of each counterpart, and the doubling time of PCDP6 cells is about 2.5 times longer than that of PC3 cells. These data suggested that fluctuations in ZNF143 expression are required both for gene expression associated with cell cycle and for cell division. Introduction Zinc finger protein 143 (ZNF143) is a transcription factor identified as a human homolog of Staf [1] and is involved in the transcriptional regulation of snRNA and snRNA-type genes by RNA polymerase II or III [2,3]. The 638 amino acid ZNF143 protein contains seven zinc fingers and binds to the YY(A/T)CCC(A/G)N(A/C)AT(G/C)C(A/C)YY sequence in promoter regions [1,4,5]. Functional classification of ZNF143 target genes has revealed that many of these genes are important for cell proliferation [5]. We have previously reported that knockdown of ZNF143 reduces cell proliferation and induces G2M cell cycle arrest. Additionally, we found that ZNF143 knockdown resulted in the downregulation of 152 genes in PC3 human prostate cancer cells. Of these 152 genes, 41 (27%) were associated with cell cycle and DNA replication. In particular, polo-like kinase 1 (PLK1), aurora kinase B (AURKB) and some minichromosome maintenance complex component (MCM) genes were transcriptionally regulated by ZNF143 [6]. We also previously reported that ZNF143 expression is induced by DNA-damaging agents, and is overexpressed in cisplatin-resistant prostate cancer PC3 cell lines [7,8]. However, the proliferation rate of the cisplatin-resistant cell lines is generally slower than that of its parent cell line. To assess the effect of ZNF143 expression on the cell cycle, we established PC3 cells with forced expression of ZNF143, and investigated protein expression associated with cell division, as well as ZNF143 expression according to cell cycle phase. The proliferation of PC3 cells with forced expression of ZNF143 is much slower than that of its wild-type counterpart cell line, and nuclear expression of several MCMs between these cells is different. We hypothesize that the expression cycle of ZNF143 is associated with cell division. 
Cell Proliferation of PC3 Cells with Forced Expression of ZNF143 We previously reported that knockdown of ZNF143 in PC3 prostate cancer cells reduces the cell proliferation rate, induces G2M cell cycle arrest, and results in downregulation of 41 genes associated with cell cycle and DNA replication [6]. To assess the effect of ZNF143 expression on cell cycle, we established PC3 cells with forced expression of ZNF143 (Figure 1a). Total ZNF143 protein in these cell lines is about 1.5 times higher than that of its counterpart normal cell line. As shown in Figure 1b, cellular expression of PLK1, AURKB, MCM2, MCM3, MCM5, MCM6 and Cyclin B1 is upregulated in PC3 cells with forced expression of ZNF143. These results are consistent with results obtained with ZNF143 siRNA knock down [6]. We predicted that the proliferation rate of these cells might increase according to ZNF143 expression. Unexpectedly, the proliferation rate was much lower than its control counterpart cell line (Figure 1c). The doubling time of these transfectants was then calculated based on these results (Table 1). FACS analysis showed that G1 and G2/M populations of PC3 cells with forced expression of ZNF143 significantly decreased and increased, respectively, compared with PC3 mock cells (Figure 1d). Y box binding protein 1 (YB-1) is also induced by DNA-damaging agents and is overexpressed in cisplatin-resistant cells as well as ZNF143 [9][10][11]. We reported that YB-1-regulated expression of cell division cycle 6 homolog (CDC6) [11] required for the initiation of DNA replication [12], and downregulation of both YB-1 and CDC6 induces G1 cell cycle arrest [11,13]. However, overexpression of YB-1 also reduces proliferation potency, as observed for ZNF143 [14]. Cell Cycle Profiles of PC3 and Fluctuation in ZNF143 Expression Cells with forced expression of ZNF143 exhibited the upregulation of PLK1, AURKB and some MCM proteins (Figure 1b), however, proliferation rate was lower than that of its counterpart normal PC3 cells. To clarify this contradiction, we first evaluated the expression of these genes according to cell cycle phase with PC3 cells. As shown in Figure 2, Cyclin B1 expression is higher from 6 h to 10 h and returns to the same expression level as that observed 1 h by 13 h. Figure 2. Fluctuations of protein expression associated with the cell cycle in PC3 cells. PC3 cells cultured for 12 h in serum-free medium were re-seeded. At the indicated times after re-seeding, cells were collected and fractionated into nucleoplasm and cytoplasm. Each cell lysate (100 μg) was used for Western blotting with indicated antibodies. N and C indicate nucleoplasm and cytoplasm, respectively. This cell cycle phase is consistent with 13.2 h of doubling time (Table 1). It is thought that PC3 cells were synchronized by serum starvation and re-seeding, and 6 h to 10 h represents G2 and M2 phase of the cell cycle. Nuclear expression of AURKB is almost consistent with Cyclin B1 expression, and PLK1 expression in both the cytoplasm and nucleoplasm are increased following induction of Cyclin B1 expression at 6 h. Shindo et al. reported that AURKB was expressed during S and G2/M phases [15], and this is almost consistent with our result. A previous study reported that the MCM complex is required for two events of the cell cycle; one is the entry into S phase and the other is cell division [16]. Consistently, we found that MCM protein expression in the nucleoplasm decreased at G2 phase. In particular, MCM4 and MCM7 were hardly observed in nucleoplasm. 
However, there are no obvious differences in the expression of Clock, Bmal1, Wee1 and CDC6 across the cell cycle phases. These results indicate that serum starvation and re-seeding of cells is a straightforward method of synchronizing cells at G1 phase compared with conventional double thymidine block synchronization.

Fluctuations in ZNF143 expression according to cell cycle phase were also observed (Figure 2). First, serum starvation for 12 h decreased ZNF143 expression to 50% (Figure 3a), and re-seeding cells in medium containing serum increased its expression 2.3-fold at 1 h (Figure 3b). After 1 h of re-seeding, ZNF143 gradually decreased until G2/M phase and increased again in early G1 phase (Figure 2). Interestingly, this fluctuation is the complete reverse of the pattern observed for Cyclin B1. CCNB1IP1 (cyclin B1 interacting protein 1, E3 ubiquitin protein ligase) has E3 ubiquitin ligase activity that induces degradation of Cyclin B1 [17]. ZNF143 may transcriptionally regulate CCNB1IP1 expression to decrease Cyclin B1.

Cell Cycle Profiles of Cells with Forced Expression of ZNF143

To assess the effect of ZNF143 on the cell cycle, we investigated fluctuations in the expression of Cyclin B1, AURKB, PLK1 and MCM proteins in cells with forced expression of ZNF143. The doubling time of these cells is about 23 h; therefore, cells were collected every 2 h until 26 h. Before re-seeding, 3xFlag-ZNF143 expression was decreased to 50% by serum starvation (Figure 3a). Exogenous ZNF143 is driven by the CMV promoter, which contains binding motifs for AP1, CREB and NF-kappa B that are activated by serum stimulation. Indeed, 3xFlag-ZNF143 expression increased 2.1-fold at 2 h after re-seeding with medium containing serum (Figure 3b). As shown in Figure 4, 3xFlag-ZNF143 expression continued until 24 h, whereas endogenous ZNF143 declined gradually through an unknown molecular mechanism. (Figure 4: cells with forced expression of ZNF143 were cultured for 24 h in serum-free medium and then re-seeded; at the indicated times after re-seeding, cells were collected and fractionated into nucleoplasm and cytoplasm, and each nuclear protein sample (100 μg) was used for Western blotting with the indicated antibodies.)

Interestingly, fluctuations in Cyclin B1, AURKB and PLK1 expression were observed twice during the 26 h after re-seeding, suggesting that the cells attempted to undergo two rounds of the cell cycle. However, we did not detect multinucleated cells among cells with forced expression of ZNF143 by fluorescence microscopy with DNA staining (data not shown) or by FACS analysis (Figure 1d); that is, the cells divided only once during the 23 h period. On the other hand, Cyclin B1 expression was highest at 2 h after re-seeding, indicating that PC3 cells with forced expression of ZNF143 may be synchronized at G2/M phase. It was reported that knock-down of AURKB [18] or PLK1 [19,20] induces G2/M or mitotic arrest. We previously reported that ZNF143 transcriptionally regulates AURKB and PLK1 and that knock-down of ZNF143 decreased the expression of these genes and induced G2/M arrest [6]. G2/M arrest upon knock-down of ZNF143 might therefore depend on AURKB and PLK1 expression. It was reported that knock-down of Aurora-A kinase induces G2/M arrest [21], like AURKB and PLK1, but forced expression of Aurora-A kinase also decreased cell proliferation, failing to overcome the restriction point at the G1/S transition due to diminished RB phosphorylation caused by reduced Cyclin D1 expression [22]. In cells with forced expression of ZNF143, AURKB and PLK1 expression was upregulated (Figure 1b).
Constitutive activities of these kinases might deregulate the phosphorylation of several proteins associated with cell cycle and increase G2/M population resulting in deviation from normal division. MCM proteins in the nucleoplasm of PC3 cells decreased or disappeared at G2 phase (Figure 2), those in cells with forced expression of ZNF143 were continuously expressed during cell cycle (Figures 4 and 5). Conversely, MCM5 protein in the nucleoplasm increased. We reported that ZNF143 transcriptionally regulates the expression of MCM2, MCM3, MCM5 and MCM6 [6]. However, the expression profile according to cell cycle phase was not consistent with the expression pattern exhibited by ZNF143 (Figure 2). This result suggests that ZNF143 might be associated with basal transcriptional regulation of these genes. And the cell-cycle dependent fluctuations in the expression of these gene products may be post-translationally regulated. Another possibility is that fluctuations in ZNF143 expression may be associated with localization of MCM proteins. As shown in Figure 2, nuclear expression profile from S to G2/M phase was relatively consistent with the expression pattern exhibited by ZNF143. In any event, these results suggest that Cyclin/CDK pathway and cell division by MCM regulation might be dissociated in cells with forced expression of ZNF143. Further study is necessary to elucidate the molecular mechanism associated with ZNF143 on cell cycle and cell division. Proliferation of Cisplatin-Resistant Cells We previously reported that ZNF143 is overexpressed in cisplatin-resistant PCDP6 cells compared with the parent PC3 cells. Resistance was probably acquired through transcriptional expression of DNA repair-related genes such as APE1 and FEN1 by ZNF143 [8]. First, we investigated the proliferation rate of PC3 and PCDP6 cells. As shown in Figure 6a, the proliferation rate of PCDP6 cells was significantly lower than that of PC3 cells, and the doubling time of these cells was 35.8 h and 13.9 h, respectively. In general, decrease of cell proliferation is advantageous to resist anticancer agents targeting DNA. Therefore, both expression of repair genes and slow proliferation might be benefit for cisplatin resistance. As shown in Figure 6b, FACS analysis showed that G1 and G2/M populations of PCDP6 cells were significantly lower and higher, respectively. This result of PCDP6 cells is similar to that of cells with forced expression of ZNF143 (Figure 1d). Transition from G2/M to G1 phase might be disturbed in both cells and decrease of ZNF143 might be necessary for cells to pass through the G2/M checkpoint. We reported that cell growth of lung cancer cell lines was significantly correlated with cellular expression of ZNF143 [6]. It is possibility that these lung cancers express ZNF143 with fluctuation but PCDP6 cells continually overexpressed ZNF143. In either case, both downregulation of ZNF143 and forced expression of ZNF143 might induce G2/M arrest or increase G2/M population to repress the cell proliferation. These results suggest that overexpression of ZNF143 with fluctuation is different from forced expression of ZNF143 on cell cycle. Further study is necessary to elucidate the molecular mechanism associated with ZNF143 on drug resistance. Figure 6. Proliferation rates of PC3 and PCDP6 cell lines. (a) Cells were counted at the indicated times, with time zero being 24 h after seeding. Results were normalized to cell numbers at 0 h. 
The points represent the mean of at least three independent experiments; the bars show the SD. ** P < 0.01. (b) PC3 and PCDP6 cells were harvested, stained with propidium iodide, and DNA content in single cells was measured by flow cytometry. * P < 0.05, ** P < 0.01.

Cell Culture and Antibodies

Human prostate cancer cells, PC3 [8], and cisplatin-resistant PC3 cells, PCDP6 cells, were cultured in Minimum Essential Medium containing 10% fetal bovine serum. Cell lines were maintained in a 5% CO2 atmosphere at 37 °C. Antibodies against Flag (M2) and β-actin (A5441) were purchased from Sigma Aldrich and Cell Signaling Technology (Beverly, MA, USA), respectively. Production of polyclonal antibodies against ZNF143 and BAF57 was described previously [6]. A polyclonal antibody against Bmal1 was raised by multiple immunizations of a New Zealand White rabbit with synthetic peptides. The synthetic peptide sequence was LGGPVDFSDLPWPL.

Establishment of PC3 Cells Stably Expressing ZNF143

Establishment and cloning of PC3 cells with forced expression of ZNF143 by transfection with the 3xFlag-ZNF143 expression plasmid was described previously [8]. PC3 mock cells were transfected with the empty expression plasmid and selected with hygromycin without cloning.

Cell Proliferation Assays and Doubling Time

The cell proliferation assay was described previously [6]. Briefly, PC3, PCDP6 and PC3 cells with forced expression of ZNF143 were seeded into 12-well plates at a density of 1 × 10^4 cells per well. Cells were harvested by trypsinization and counted every 24 h with a Coulter-type cell size analyzer (CDA-500; Sysmex Corp., Kobe, Japan). The first measurement time was set as time zero. The proliferation curve was converted into a logarithmic function and the doubling time was calculated.

Western Blotting and Cell Cycle Expression Profiling

The preparation of cytoplasmic and nuclear proteins was described previously [6,8]. The indicated amounts of whole cell lysate, cytoplasmic protein (cytoplasm) and nuclear protein (nucleoplasm) were subjected to Western blotting, and detection was performed using enhanced chemiluminescence (Amersham, Piscataway, NJ, USA). Protein levels were quantified using Multi Gauge Version 3.0 (Fujifilm, Tokyo, Japan). For synchronization of the cell cycle, the serum starvation and re-seeding method was employed. PC3 cells and PC3 cells with forced expression of ZNF143 were washed twice with PBS and cultured in serum-free medium for 12 h and 24 h, respectively. Cells were disaggregated with trypsin and re-seeded in culture dishes with medium containing 10% fetal bovine serum. At the indicated times after re-seeding, cells were collected and proteins were prepared.

Statistical Analysis

Student's t test was used for statistical analysis of the variables between the two groups. All error bars indicate standard deviation.

Conclusions

We found fluctuations in ZNF143 expression in prostate cancer PC3 cells. Either down-regulation of ZNF143 or forced overexpression of ZNF143 decreased cell proliferation. Cell growth is sometimes associated with the efficacy of anti-cancer agents targeting DNA, because the disturbance of DNA replication decreases when cell proliferation is slow. We have established several cell lines resistant to anticancer agents, and the growth rates of these cells are almost all low; however, the molecular mechanism is unknown. Cisplatin-resistant prostate cancer cells grow slowly, with overexpression of ZNF143 and an increased G2/M population. These results are similar to those of cells with forced expression of ZNF143. Either a forced increase or a forced decrease in the expression of cell cycle-related genes might disturb cell division. We believe that ZNF143 is a promising molecular target for overcoming cancer. Deregulation of the cell cycle-dependent fluctuation in ZNF143 expression might also restrain cancer proliferation, even if cancer cells maintain strong expression of ZNF143. Further analysis of the interplay between ZNF143 and other cell cycle regulators is required to understand the essential role of ZNF143 in the cell cycle and drug sensitivity.
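The doubling-time calculation described in the Methods (the proliferation curve converted into a logarithmic function) can be reproduced with a simple log-linear fit. The sketch below is only an illustration of that procedure; the cell counts are made-up numbers chosen so that the two curves come out near the ~13 h and ~23 h doubling times reported for PC3 cells and the ZNF143-overexpressing clone.

```python
import numpy as np

def doubling_time(hours, counts):
    """Doubling time from exponential growth.

    Fits log2(count) against time; the slope is doublings per hour,
    so the doubling time is 1 / slope.
    """
    slope, _ = np.polyfit(np.asarray(hours, dtype=float),
                          np.log2(np.asarray(counts, dtype=float)), 1)
    return 1.0 / slope

t = [0, 24, 48, 72]                  # hours after the first measurement
pc3_like   = [1.0, 3.3, 11.0, 36.0]  # illustrative counts, ~13-14 h doubling time
slow_clone = [1.0, 2.0, 4.1, 8.3]    # illustrative counts, ~23 h doubling time
print(round(doubling_time(t, pc3_like), 1))
print(round(doubling_time(t, slow_clone), 1))
```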
2016-04-23T08:45:58.166Z
2011-10-19T00:00:00.000
{ "year": 2011, "sha1": "4c1cdfeb75aedf95cc04f1bf30396abcdaa14058", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/3/4/3909/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4c1cdfeb75aedf95cc04f1bf30396abcdaa14058", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55414605
pes2o/s2orc
v3-fos-license
Extend TOPSIS-Based Two-Sided Matching Decision in Incomplete Indifferent Order Relations Setting Considering Matching Aspirations This paper develops a method for two-sided matching decision in the environment of incomplete indifferent order relations. The two-sided matching decision problem with incomplete indifferent order relations and matching aspirations is firstly described. In order to solve this problem, the incomplete indifferent order relations are converted into the generalized Borda number matrices. The matching aspiration matrix can be determined based on the model calculation on the reciprocal differences of generalized Borda numbers. On this basis, the weighted satisfaction degree matrices are set up. The extended relative closeness matrices are determined by using an extended TOPSIS technique. Moreover, a two-sided matching model is developed. The two-sided matching alternative can be obtained by solving the model. For the purpose of illustration, an example including sensitivity analysis is presented. INTRODUCTION The two-sided matching decision involves how to match the agents of one side with the agents of the other side based on the preferences of the agents of both sides. The problems of two-sided matching decision exist widely in reality, such as stable marriage assignment (Kümmel et al., 2016; Cseh and Manlove, 2016; Doğan and Yıldız, 2016), college admission (Braun et al., 2014; Chen and Kao, 2014; Liu and Peng, 2015), employee selection (Wang et al., 2011; Mendes et al., 2010; Chen et al., 2016), and personnel assignment (Gallego and Larrain, 2012; Taylor, 2013; Gharote et al., 2015). Therefore, the two-sided matching decision is a hot topic with extensive practical backgrounds. Gale and Shapley (1962) initially researched the problems of college admissions and marriage. In their studies, the concept of stable matching is proposed; then the existence and optimality of stable matching are given; at last, the deferred acceptance algorithm is developed. From then on, various concepts, theories, techniques and algorithms have been presented with respect to the two-sided matching decision with different formats of information. For example, Li and Fan (2014) propose a stable two-sided matching method considering the psychological behavior of agents on both sides to solve the two-sided matching problem with ordinal numbers. Castillo and Dianat (2016) study truncation strategies in a centralized matching clearinghouse based on the deferred acceptance algorithm. Xu et al. (2015) propose matching algorithms for one-to-one two-sided dynamic service markets. Chen et al. (2016) point out that generalized median stable matchings exist in many-to-many two-sided matching markets when contracts are strongly replaceable and satisfy the law of aggregate demand. Liang et al. (2015) propose a novel decision analysis method to solve the multiple-target satisfied and stable two-sided matching decision problem considering the preference ordering, where the targets could be satisfied, weakly satisfied and stable, -satisfied and stable, or satisfied and stable.
The existing studies enrich the theories of two-sided matching decision, develop different algorithms for solving the problems of two-sided matching decision with various formats of information, and expand the practical application background. However, on the one hand, the preferences provided by the agents of the two sides may be in the format of incomplete indifferent order relations in some practical problems, owing to imprecise sources of information containing unquantifiable and incomplete information. In this case, classical two-sided matching decision methods cannot effectively deal with these kinds of problems. On the other hand, the matching aspirations of the agents of the two sides are seldom considered in the existing studies. Therefore, how to investigate the problem of two-sided matching decision with incomplete indifferent order relations considering matching aspirations is a valuable research topic. In view of this, this paper presents a two-sided matching decision method with incomplete indifferent order relations considering matching aspirations based on an extended TOPSIS (technique for order performance by similarity to an ideal solution) method. It is well known that the TOPSIS method was first developed by Hwang and Yoon (1981) and is one of the classical multi-attribute decision methods. The basic idea of the TOPSIS method is that the selected alternative should have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution (Hwang and Yoon, 1981; Yue, 2014). This article intends to apply the idea of TOPSIS to two-sided matching decision with incomplete indifferent order relation information.
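To make the TOPSIS idea just described concrete, the sketch below computes the classical relative closeness to the ideal solution for a small decision matrix. This is a generic illustration rather than the extended TOPSIS developed in this paper; the matrix, the weights, and the assumption that all criteria are benefit-type are invented for the example.

```python
import numpy as np

def topsis_closeness(decision_matrix, weights, benefit=None):
    """Relative closeness of each alternative to the ideal solution (classical TOPSIS)."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    if benefit is None:
        benefit = np.ones(X.shape[1], dtype=bool)  # assume every criterion is benefit-type
    V = w * X / np.linalg.norm(X, axis=0)          # vector-normalize each column, then weight it
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal solution
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))  # negative ideal solution
    d_plus = np.linalg.norm(V - pis, axis=1)       # distance to the positive ideal
    d_minus = np.linalg.norm(V - nis, axis=1)      # distance to the negative ideal
    return d_minus / (d_plus + d_minus)            # in [0, 1]; larger means closer to the ideal

# Made-up decision matrix: 3 alternatives rated on 3 benefit criteria
scores = topsis_closeness([[7, 9, 9], [8, 7, 8], [9, 6, 8]], weights=[0.5, 0.3, 0.2])
print(scores.round(3))
```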
The structure of this paper is organized as follows: Section 2 formulates the considered two-sided matching problem. Section 3 presents an extended TOPSIS-based method for two-sided matching decision. Section 4 gives an example. Section 5 concludes this paper. THE CONSIDERED TWO-SIDED MATCHING PROBLEM This paper considers the two-sided matching problem where the preferences provided by the agents of the two sides are in the format of incomplete indifferent order relations, and the research angle employed in this paper is matching aspiration. The notation of the considered two-sided matching problem is given as follows. Each agent of one side gives an incomplete indifferent order relation over the agents of the other side, where the symbol "≻" or "∼" denotes "superior to" or "be equivalent to", respectively. Let the matching aspiration between agent ∂i and agent ℘ be defined, which usually satisfies the characteristics of non-negativity and normalization. Remark 1. In the above presentation, the matching aspiration is unknown. The determination method will be given in Section 3.2. Remark 2. The concept of two-sided matching can be found in (Yue, 2014). Then we know that a two-sided matching (or a two-sided matching alternative) can be expressed by the union of the set of matching pairs and the set of single pairs. Based on the above analysis, the problem researched here is how to obtain a reasonable two-sided matching alternative based on the incomplete indifferent order relations and the matching aspirations. Construction of the Normalized Borda Matrices In order to handle the incomplete indifferent order relations, the definitions of the generalized Borda numbers are introduced. Contribution of this paper to the literature • Rather than making a contribution to the relevant theories of stable two-sided matching, the focus of this study is to obtain a two-sided matching alternative that can reflect the matching aspirations of agents. • This study tries to use the generalized Borda numbers for handling incomplete indifferent order relations. • The findings of this study can enhance our understanding of an extended TOPSIS technique in two-sided matching decision. Determination of the Matching Aspirations In order to determine the matching aspirations, the following analysis is carried out based on the absolute differences of the generalized Borda numbers. Remark 4. In Eq. (5), the case in which the absolute difference of the generalized Borda numbers equals 0 may sometimes occur. At this time, Eq. (5) is meaningless. In order to handle this case, the denominator can be replaced accordingly. According to Remark 3, Eq. (5) can be further expressed in an equivalent form. Based on the above analysis, the matching aspirations should be selected so that the aggregate aspiration is greatest. Therefore, the corresponding objective function is established, and the following linear programming model (M-1) can be constructed. Theorem 1. The optimal solution (denoted *) of model (M-1) can be expressed in closed form; the proof follows by computing the partial derivatives of the objective function with respect to its variables. Building of the Extended Relative Closeness Matrices Obviously, the extended relative closeness of agent ∂i over agent ℘ lies in [0, 1], and the greater it is, the higher the satisfaction degree of agent ∂i over ℘. Similarly, with respect to the weighted satisfaction degree matrix of the other side, the extended relative closeness of agent ℘ over agent ∂i also lies in [0, 1], and the greater it is, the higher the satisfaction degree of agent ℘ over ∂i. Development of the Two-Sided Matching Model Firstly, a 0-1 matching variable is introduced. According to the meaning of the extended relative closeness, maximization of the extended relative closeness can be regarded as the objective function. Furthermore, considering the constraint condition of one-to-one two-sided matching, the following two-sided matching model (M-2) can be developed. Determination of the Two-Sided Matching Algorithm In sum, an algorithm for solving the two-sided matching problem under the conditions of incomplete indifferent order relations considering matching aspirations is given. The steps of the algorithm are provided as follows. Step 11. Transform model (M-2) into model (M-3) by using the linear weighting method. Step 12. Determine the two-sided matching alternative by solving model (M-3).
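Because model (M-2) maximizes the total extended relative closeness under one-to-one matching constraints, its core is structurally an assignment problem. The sketch below shows how such a one-to-one matching could be computed for a small, made-up closeness matrix; it is a simplification of models (M-2)/(M-3) (for instance, it ignores how unmatched, single agents are scored), and the numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up combined closeness matrix: rows = agents of one side, columns = agents of the other;
# a larger entry means the pairing is jointly more satisfactory.
closeness = np.array([[0.71, 0.42, 0.55],
                      [0.35, 0.80, 0.60],
                      [0.50, 0.45, 0.90]])

# A one-to-one matching that maximizes total closeness is an assignment problem.
rows, cols = linear_sum_assignment(closeness, maximize=True)
print(list(zip(rows.tolist(), cols.tolist())), round(float(closeness[rows, cols].sum()), 2))
```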
ILLUSTRATED EXAMPLE In this section, an example is used to illustrate the application of the proposed extended TOPSIS-based two-sided matching decision method. Suppose an overseas venture-capital company plans to invest in a cell-phone company in Nanchang, China. In order to enable the new cell-phone company to run smoothly, the manager intends to assign experienced staff members to vacant positions in the new factory. Each position in the new cell-phone company is held by one staff member, and each staff member is assigned to only one position. There are five vacant positions, which consist of a purchaser (1), a material handler (2), a production planner (3), a technician (4) and a quality inspector (5). Seven experienced staff members ℘1, ℘2, …, ℘7, who have multiple skills, apply for the five positions. The decision makers from the five position departments evaluate the staff members from four perspectives: personality characteristics, technical skill, previous experience, and human relationship skill. The seven staff members evaluate the positions from three perspectives: salary and welfare, development space, and work environment. The incomplete indifferent order relations are given below. In order to enhance the level of operating efficiency, an intermediary who specializes in human resource allocation is employed to determine the two-sided matching alternative. To solve the above problem, the proposed two-sided matching decision method is used and the procedure is given as follows. Step 1. According to the incomplete indifferent order relations, the generalized Borda number matrices can be built by Eqs. (1a)-(1c) and (2), as shown in Table 1, and by Eqs. (3a)-(3c) and (4), as shown in Table 2. Step 7. Based on the weighted satisfaction degree matrix and Eq. (25), the result shown in Table 11 is obtained. In the following, we discuss the influence of the weights of the two sides on the two-sided matching alternative. Case III. If the weight of side ∂ is 0.9 and the weight of side ℘ is 0.1, then model (M-2) is transformed into model (M-3), where the combined closeness equals 0.9 times the closeness of side ∂ plus 0.1 times the closeness of side ℘. Similarly, by solving the model, the two-sided matching alternative can be obtained. In conclusion, the comparative analysis of the influence of the weights on the two-sided matching alternative is shown in Table 15. From Table 15, we know that the two-sided matching alternative may change when the weights are changed. Therefore, the weights play an important role in determining the two-sided matching alternative. CONCLUSIONS This paper presented a decision method for solving the two-sided matching problem with incomplete indifferent order relations considering matching aspirations. The incomplete indifferent order relations were transformed into the generalized Borda number matrices, and the matching aspirations were calculated through a model-based calculation. Based on this, the weighted satisfaction degree matrices were built. Then, the extended relative closeness matrices were determined by using an extended TOPSIS method. Furthermore, a two-sided matching model was constructed. By solving the proposed model, the two-sided matching alternative was determined. An example with a sensitivity analysis was also given to illustrate the effectiveness of the presented method.
Compared with the existing research, the main contributions of this paper are as follows: (1) the generalized Borda numbers were adopted to handle incomplete indifferent order relations, which is a new idea; (2) the research angle was matching aspirations, and hence the obtained two-sided matching alternative can reflect the matching aspirations of agents; (3) the idea of extended TOPSIS was introduced into two-sided matching decision for the first time, which is a novel idea; (4) the presented method develops the theory and methodology of two-sided matching decision with incomplete indifferent order relations. The main limitation of this paper is that it only discussed the two-sided matching problem with a limited degree of incompleteness in the indifferent order relations, and the related theory of stable matching under the condition of incomplete indifferent order relations was not studied. Hence, the following two aspects could be further studied. First, if a complete two-sided matching alternative cannot be obtained from the information in the incomplete indifferent order relations, then this kind of two-sided matching decision problem should be further investigated. Second, the theory and properties of stable matching with incomplete indifferent order relations should be further probed. Figure 1. The two-sided matching alternative.
2018-12-05T12:56:16.901Z
2017-11-24T00:00:00.000
{ "year": 2017, "sha1": "9f52b910aaa2d6e64079931eb3da29cf5a7ff79a", "oa_license": "CCBY", "oa_url": "http://www.ejmste.com/pdf-78100-16626?filename=Extend%20TOPSIS-Based.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9f52b910aaa2d6e64079931eb3da29cf5a7ff79a", "s2fieldsofstudy": [ "Mathematics", "Business" ], "extfieldsofstudy": [ "Mathematics" ] }
233930116
pes2o/s2orc
v3-fos-license
Evaluating Safety Issues for Taxi Transport Management Taxi drivers face many problems every day including safety issues. The tendency to quickly transport passengers to their destinations for more income has resulted in dangerous driving behaviors leading to traffic violations. So, taxi drivers need appropriate support and training programs to improve safety and reduce the risk of crashes. Implementing different support and safety training programs requires an effective management system. There is a dearth of research on the safety issues of taxis from the perspective of taxi organization managers. This study aims to evaluate the safety issues of taxi transport management through a case study of the Tehran Taxi Organization. A questionnaire survey was conducted with 22 regional managers and 20 transportation specialists of the Tehran Taxi Organization. Issues related to taxi drivers, roads and road users, vehicles, and management systems were evaluated in the questionnaire. Participants determined the relevance level and priority ranking of each question. The level of agreement was then tested using the Kendall concordance test. According to the results, the use of GPS was selected as the best in-vehicle monitoring system that can be used to evaluate drivers in the fleet. Participants believed that passengers' loading and unloading had the most risk for taxi users. The start-inhibit technology to detect open doors was unanimously evaluated as an efficient technology for taxi safety. With respect to educating taxi users, starting education in schools had the most relevance and priority. Recommendations for increasing the safety of taxis include the use of GPS in taxis to monitor and evaluate drivers, receiving crash reports from police and submitting monthly safety assessment reports, flexibility in drivers' working hours, providing training on driver fatigue management, and evaluating drivers' health. Introduction Motor vehicle crashes and the resulting damages are one of the leading causes of fatality in the world. According to the World Health Organization, about 1.35 million people worldwide are killed in road crashes each year. A large number of the world's fatalities on the roads (74%) occur in middle-income countries [1]. It is also reported that the estimated road traffic death rate per 100,000 population for Iran is 20.5 (48.7% belong to 4-wheeled vehicles), which is much higher than in high-income countries (7.94) and also higher than in middle-income countries (18.04) [1]. The Tehran Taxi Organization reported that the city of Tehran, with approximately 84,000 active taxis, has one of the largest taxi fleets in the world. According to [2], the share of public transportation in urban trips is about 48.9%, of which 24.3% belongs to taxis. Although the taxi industry has a central function in any public transport system, it is often given less prominence by city planners and policymakers compared to other modes of public transport [3]. Measurement of public opinion with reliable statistical methods can provide useful insights to the decision-makers [4]. Safety issues related to the taxi driver, road, and vehicle are major challenges of the taxi industry. In the context of taxi drivers' safety, many studies have focused on driving behavior, working conditions, and also risky drivers' characteristics and their subsequent consequences [5][6][7].
Despite many studies conducted in the past to identify various risk factors contributing to taxi crashes, there is a lack of study of the safety problems of taxis from the perspective of managers. The management system is responsible for certifying, training, and supervising drivers and has a vital role in improving taxi safety. Thus, the managers' perceptions about taxi safety issues related to the driver, road safety, and the vehicle can determine the future safety plans of the organization. As the goal of safety researchers has always been to reduce accidents and losses, evaluating the safety issues of taxis can lead to achieving higher safety standards and fewer crashes. Objectives of the Study. There is a dearth of research on how managers and transport specialists of a taxi organization, who are the decision-makers in the adoption of management strategies and new technologies for improving taxi drivers' safety, perceive taxi drivers' safety issues. This study aims to address this knowledge gap by evaluating the safety issues of the Tehran taxi industry using the knowledge and perceptions of taxi transport managers and also investigating the potential effectiveness of various technologies via a questionnaire survey. Past studies on bus and truck safety management have shown that by investigating the current practices used by the transport management or agencies to reduce injuries, appropriate recommendations can be developed to enhance the safety of the drivers, vehicles, and road users [8][9][10]. However, there is a lack of study of the safety problems of taxis from the perspective of managers. This study will highlight major taxi-related safety problems on which managers and experts are unanimous and also provide practical recommendations to achieve higher safety standards. As the participants in this study are managers and experts of the Tehran Taxi Organization, their opinions can determine their priorities for future decisions. The results of this research can be a valuable resource for countries that have a transportation system similar to that of Tehran. Structure of the Paper. The paper is organized as follows. Section 2 explains and reviews previous studies on taxi safety issues, including workplace road safety, taxi drivers' risky characteristics and driving behavior, and the importance of a management system for improving safety. Section 3 describes the methodology adopted for this study, which includes participants, study design, and data analysis. It is followed by Sections 4 and 5, which present the results and the relevant discussions, respectively. Finally, in Section 6, the conclusions of this study and the recommendations are presented. Literature Review In the next two sections, a review of relevant studies on taxi drivers' risky behavior, workplace safety issues, and the importance of a management system to improve taxi drivers' safety is presented. Taxi Driver Unsafe Behavior and Workplace Safety Issues. Taxi drivers' efforts to transport more passengers by getting them to their destinations quicker for more income have resulted in more dangerous driving behaviors [11]. A study by Adl et al. [12] reported that, in the city of Tehran, 52.5% of taxi drivers have unsafe driving behavior and 46% of these high-risk behaviors are performed repeatedly (more than 80% of the time). These risky behaviors can result in driving violations [13,14].
These behaviors include not using turn signals, not observing a safe distance from the car in front, abruptly changing lanes, and crossing red lights [15][16][17]. In addition, the occurrence of these risky behaviors is positively and directly related to the probability of traffic crashes [7,18]. Taxi drivers, as a key element in the taxi industry, face many problems every day, including workplace safety issues. For example, fatigue caused by long driving hours [19], high workload, and more shifts directly increases the likelihood of an accident and indirectly causes fatigue and abnormal driving behavior [20]. Also, difficult working conditions for drivers can cause physical and mental problems such as cardiovascular diseases, muscle aches, stress, and sleep disorders [6]. Other problems that threaten drivers' health are smoking and the use of alcohol, which are commonly used by drivers to prevent drowsiness [21,22]. All of these problems have led to higher work-related mortality rates for taxi drivers (14.9%) versus other occupations (3.3%) [23]. A study by Kasemy et al. [24] suggested setting up a clinic for intermittent checkups and health education for taxi drivers. Many studies have been conducted to determine the factors related to crashes and assess the safety of taxis [5,25,26]. In general, these factors can be related to driver characteristics, environmental factors, and organizational management [27]. For example, a study by [26] found that, contrary to intuition, older, more educated taxi drivers reported more unsafe driving behaviors compared to younger, less educated ones. Also, the aging of drivers increases the likelihood of crashes [25]. Another study found that driving at night increases the chance of fatality and injury from traffic crashes [5,8]. Moreover, a large number of crashes are related to driver errors, i.e., a person's driving behavior and cognitive ability are decisive in these crashes [29]. The Importance of the Management System and Managers' Perspective. The evidence from past studies indicates that there is a difference between the behavior of taxi drivers and other drivers, and therefore, there is a need to consider these differences in taxi drivers' training programs [25]. It is not conceivable to administer different training programs for taxi drivers without an effective management system, given their behavioral differences compared to other drivers. The management system is responsible for certifying, training, and supervising drivers. However, usually in practice, verification of competencies is conducted without a basis or rule, monitoring of drivers is simplistic, and training is short-run and not rigorous. It is argued that little awareness of safety issues, not viewing taxi safety as a priority, and gaps in specific safety expertise, knowledge, and resources could be the reasons [30]. It is important to mention that taxi markets are local in nature, as no two cities are the same [3], although it is expected that the initiatives which have proven successful in developed countries can be successfully applied in developing countries that are subject to different social and economic contexts [31]. However, many sociodemographic characteristics (such as employment) were found to be important in transport planning [32]. There is a dearth of research on the safety issues of taxis from the perspective of taxi organization managers.
Previously, few studies have been conducted on safety issues and problems from the perspective of managers of bus and truck organizations. These studies have been conducted with different purposes, such as providing management solutions to identify problems of bus and commercial truck companies [10] or conducting a questionnaire survey for bus managers that includes questions on drivers' choices and training, drivers' motivational programs, user training, and management programs related to safety issues [9]. In addition, in studies conducted with managers of bus transportation companies, the use of new technologies such as intelligent transportation systems (ITS) or automatic vehicle location (AVL) has been found very effective in increasing safety and supervision [33]. Also, the use of new technologies has increased the safety of taxis as well [34]. Understanding the risks and benefits of such systems presents an opportunity to recalibrate more accurate community perceptions of driver safety [35], which can be achieved by considering demand-responsive solutions, and to change the policy focus in order to improve public transport service [36]. In addition to the unsafe driving behaviors of taxi drivers, inaccessibility of reports of taxi drivers' crashes and the lack of monitoring of driver choices are other taxi safety problems of the city. However, the Tehran Taxi Organization has made substantial efforts to increase drivers' awareness of taxi safety issues. These efforts include basic driver training, the use of incentive and punishment systems, and the use of new surveillance technologies. However, no specific study has been performed on the relevance and effectiveness of these methods. The literature suggests there is a need to monitor the impact of road safety management tools and control the appropriateness of safety management efforts [37]. Methodology A flowchart showing the steps followed in this study is shown in Figure 1. After identifying taxi driver safety issues based on a review of the relevant literature and consultation with drivers, a questionnaire was designed (refer to the appendix). The questionnaire included questions related to the taxi driver, the road and road users, and the use of technologies in taxis (refer to Section 3.2 for more details on the rationale behind the questions). The data collection included a survey of 42 participants, comprising 22 regional managers and 20 transportation specialists from the Tehran Taxi Organization (refer to Section 3.1 for more details on participants). To analyze the data, the relevance and priority ranking were determined for each item, followed by the Kendall concordance test to identify the priorities of participants and their evaluation. Finally, the relevant results and discussions were presented. Participants. The Tehran Taxi Organization is responsible for evaluating and monitoring the performance of taxi drivers. Considering that the metropolis of Tehran has 22 urban areas, this organization has a manager in each area to perform the mentioned tasks. All 22 regional managers of the Tehran Taxi Organization, as well as 20 transportation specialists of this organization (average work experience of 16 years), participated in this survey. Among the participants, 41 were male and one was female. The sample size was limited by the number of managers and transport specialists available at the Tehran Taxi Organization. Also, previous studies in the context of bus safety recruited 26 bus managers to evaluate their opinions regarding safety issues [8,33].
Therefore, it is assumed that the current sample size is sufficient for the present study. Study Design. Using the experience of previous studies which applied the questionnaire survey method to assess bus and truck safety management [9,10,33], the primary questionnaire was framed. Questions were modified to fit taxi safety issues. Also, due to the differences in safety policies and priorities between the case study presented in the paper and the previous literature, several questions were revised to fit the context of the study country. Moreover, the opinions of representatives of the Taxi Organization and taxi drivers were considered in the development of the survey questionnaire. Finally, a detailed survey questionnaire was prepared for managers and experts of the Tehran Taxi Organization to assess issues and problems related to the safety of taxis. The questionnaire consisted of three main sections as follows. Questions Related to the Taxi Driver. As taxi drivers play an important role in the taxi industry, evaluating various safety issues related to taxi drivers can help the administrator to better identify the safety priorities. Based on the previous literature, the most critical driver-related issues were selected and assessed in Questions A1 to A7. Drivers' characteristics, health issues, incentive programs for drivers, and training programs were evaluated in this section. Questions Related to Road and Road Users. Due to the high share of taxis in public transport and their constant presence in the traffic flow, the interaction between taxis and other road users such as passenger cars and pedestrians has always been an important issue for transport planners. This section was provided to evaluate these issues in terms of frequency and consequences for other road users. Questions Related to the Use of Technologies in Taxis. The use of new technologies to increase vehicle safety and monitor taxi drivers has been widely investigated before. To identify the best monitoring and driver assistance systems from the point of view of taxi transport managers, several systems were provided in the questionnaire with respect to their feasibility of use and the current taxi organization's policies in the study country. Research details were provided to the participants through a representative of the Tehran Taxi Organization. Then, the questionnaires were given to the 22 regional managers through the "office automation" system of the Taxi Organization and were completed over 4 days, from December 28, 2019, to January 1, 2020. Twenty transportation specialists working in the Tehran Taxi Organization also completed the questionnaires through face-to-face interviews. For each question in the questionnaire that deals with a specific topic of taxi safety, several solutions were proposed. Borrowing from the literature on bus safety management studies [8,33], the relevance and priority ranking framework was utilized to assess managers' perceptions regarding taxi safety issues. The participants were asked to answer the questions using the two criteria of "relevance" and "priority" ranking. First, they were asked to select one of the five available Likert scale items (very high, high, medium, low, and very low) to determine the effectiveness or relevance of each item in terms of safety. They were then asked to rank three items for questions in Section A and five items for Sections B and C in terms of the most important safety priorities. Table 1 is used to weight the indices [33].
A score of 3 indicates medium effectiveness, and a mean score higher than 3 is considered effective. Using the given weights, the weighted mean of the two criteria of relevance and priority ranking was determined for each item, which made it possible to compare the different items in each question. Data Analysis. To better identify the priorities of participants and their evaluation, we applied the Kendall concordance test. The level of agreement and consensus of managers was specified in terms of the ranking priorities of the items in each question. In this test, Kendall's concordance coefficient is used to determine the amount of concordance in the selections of n items by m individuals [38]. The null hypothesis in this test is that there is no concordance between the priorities and they are completely independent. The null hypothesis is rejected when at least one person is in concordance with another person, or all are, in choosing the priority. The Kendall concordance coefficient (W) is a nonparametric statistic with values ranging from 0 to 1. A value of 0 means no concordance between choices, and a value of 1 means that the choices are unified [39]. The value of W can determine the actual estimate of concordance between the participants and show how strong this concordance is [40]. The statistical value of W is defined as W = 12S / (m^2(n^3 − n)). The parameter S is the sum of the squared deviations from the mean and is calculated as S = Σ_i (R_i − R̄)^2. If we assume that item i has a priority index r_ij that is scored by selector j, and there are a total of n items and m selectors, the parameters R_i and R̄ are defined as R_i = Σ_j r_ij and R̄ = (1/n) Σ_i R_i [8]. According to the interpretation of W, questions with a value of W ≥ 0.5 have a good concordance between most managers and specialists, which reflects the concordance of people's opinions and the use of similar criteria in ranking options [38]. Another parameter by which the significance of the test can be examined is the chi-square (X^2). This parameter helps us to examine the significance of the Kendall test by calculating the P value. At a 95% confidence level, P values ≤ 5% indicate the significance of the test. The following equation is used to calculate X^2: X^2 = m(n − 1)W, where the parameter n is the number of items available for prioritization, m is the number of participants, and W is the Kendall concordance coefficient.
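A minimal numerical check of the Kendall concordance statistic described above can be written as follows; the rank matrix is hypothetical, and the formula assumes untied ranks.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m selectors x n items) rank matrix."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    R = ranks.sum(axis=0)               # rank sum of each item
    S = ((R - R.mean()) ** 2).sum()     # squared deviations from the mean rank sum
    W = 12.0 * S / (m ** 2 * (n ** 3 - n))
    chi2 = m * (n - 1) * W              # approximately chi-square with n - 1 degrees of freedom
    return W, chi2

# Hypothetical example: 4 selectors ranking 3 items (1 = highest priority)
print(kendalls_w([[1, 2, 3], [1, 3, 2], [1, 2, 3], [2, 1, 3]]))
```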
Relevance and Priority Ranking of the Proposed Solutions. To evaluate the solutions proposed in each question and determine their level of relevance and priority ranking, the weighted mean was determined according to the weights allocated by each participant. Figures 2-8 show the results for questions in Section A (Questions A1 to A7). The questions of this section examined safety issues and problems related to taxi drivers from the perspective of managers and transportation specialists. These issues include taxi drivers' risky characteristics, training programs and assessment, driver incentives and punishment, and driver physical and mental health assessment programs. The results of Question A1 revealed that driver fatigue and drowsiness were found to be the most important driver risk factors in terms of relevance and priority ranking. They are followed by driver inexperience as a factor associated with risk. The results also revealed that while insufficient driver training was found to be important (relevance score of 3.76), it had the lowest priority among the driver risk factors. In the context of primary training, related to Question A2, practical training with a vehicle was chosen to be more effective than theory classes or simulators. It was found to have the highest priority among initial training programs. Moreover, managers perceived sample materials such as books as not effective in comparison with other driver training programs; they also had the lowest priority for taxi transport planners. In regard to driver-related health problems, in Question A3, managers and specialists considered all items highly relevant (mean score of 4.48). The results of the priority ranking scale revealed that problems related to alcohol and drug use had the highest priority for taxi managers in terms of safety. It is interesting to note that participants perceived general health issues and an unhealthy lifestyle as the lowest priority among the health issues. It is to be noted that since the managers and specialists were not taxi drivers, they responded based on their experiences of interactions with taxi drivers in the past and their overall perceptions of taxi driver safety issues. With regard to incentives and punishments, according to the responses to Question A4, it is highly important to give special privileges and rewards to drivers as incentives for safe driving behavior. Managers also believed that allocating rewards for safe driving had more priority than penalizing drivers for unsafe driving, even though these penalties were found to be effective according to the relevance scale. The results of Question A5 showed that taxi transport planners unanimously regarded driver monitoring by GPS as the best way to evaluate drivers in the fleet. It is followed by the periodic observation of drivers as the second most effective way to evaluate drivers in the taxi fleet. The answers to Question A6 revealed that medical examinations and periodic health check-ups were found to be the most effective way to assess drivers' health. Consulting programs and fatigue management programs were also found to be effective for health assessment. Medical examinations had the highest priority for managers in the context of drivers' health assessment. In Question A7, regarding the management of drivers' problems, financial rewards and remedial trainings can be very effective in improving the safety behavior of drivers. The results of the priority ranking scale showed that managers and experts perceived that remedial trainings had more priority than monetary rewards in this context. Figures 9-12 show the results for the road and road users section (Questions B1 to B4). In these questions, the responses of managers on road safety issues and conflicts with other road users are assessed with respect to taxis. Efforts have been made to investigate the most dangerous maneuvers and actions of taxis while using roads. At the same time, in this section, the managers' opinion on the impact of different methods of educating users has been determined. According to the managers of the Taxi Organization in Question B1, the most important maneuver that can be dangerous and lead to crashes for taxis is crossing intersections. It is followed by hard braking and lane changes, which involve high risk and consequences for other road users. The results of the priority ranking scale were consistent with the relevance scale and showed that crossing intersections, hard braking, and lane changes, respectively, had the highest priority from the point of view of managers.
It is to be noted that managers and specialists were asked to identify the high-risk maneuvers in terms of severity of crashes and consequences for other road users using their experience in the context of taxi safety (which may include their familiarity with relevant statistics, reports, etc.). Question B2, which addressed the causes of crashes, revealed that failure to observe a safe distance from the vehicle in front (relevance score of 4.36), driver error (relevance score of 4.26), and inside visibility (relevance score of 4.26) are the main causes of crashes in the participants' opinions. However, managers perceived that driver error had more priority than other causes of crashes in terms of severity and consequences for other road users. According to the results of Question B3, passengers' loading and unloading were found to have very high priority and relevance in the context of passenger safety factors. The results of Question B4 regarding the role of training programs for users showed that starting education in schools has a better impact than theoretical or practical training programs. The results of the questions of the vehicle section (Questions C1 and C2) are shown in Figures 13 and 14. In this section, managers' responses to new systems related to taxi safety have been assessed. According to the respondents, in Question C1, the use of a GPS-based vehicle control and monitoring system had the most relevance and priority in terms of driver monitoring by new technologies. It is followed by the on-board system for the analysis of incipient failures. The managers perceived that the digital tachograph had the lowest priority among in-vehicle monitoring systems. In terms of choosing the driver assistance system (Question C2), the managers considered the start-inhibit technology to detect open doors as the most effective feature in the context of taxi safety. Also, brake assist was found to have a high priority among the proposed items. Kendall's Test of Concordance. The relevance results showed that most items scored more than three (i.e., medium effectiveness) in terms of relevance in improving safety. To better identify the priorities of participants and the level of agreement and consensus among managers in terms of ranking priorities, Kendall's concordance test was applied. The results of the Kendall test are provided in Table 3. Kendall's concordance coefficient indicates that, in Questions A5 (drivers' assessment in the fleet), B3 (the safety of passengers and pedestrians when using a taxi), B4 (training programs for taxi users), and C2 (the most effective driver assistance system), there is a good concordance (W ≥ 0.5) between participants. This coefficient for Question C1 (the best in-vehicle monitoring system) is 0.49, indicating that most managers agree on the high priority of the proposed solution. It also shows that, according to the managers, this solution improves safety with respect to the corresponding problem. Only in Questions A6 (driver health assessment) and A7 (drivers' problem management) were the items selected by the participants found to be nonsignificant. Taxi Driver, Road and Road Users, and In-Vehicle Systems. The results from Question A1 revealed that fatigue and drowsiness were found to be the most important risk factors for the driver. This is consistent with the results of [41], which considered fatigue as one of the most important factors that can cause an accident.
In general, fatigue reduces the speed of reaction and increases the number of errors in decision making. It is important to note that professional drivers thought that fatigue was more serious for other drivers than for themselves, and they also thought that they were effective in counteracting the effect of fatigue on their driving performance. This optimism bias is the most probable reason for prolonged driving times, which can contribute to fatigue [19]. Additionally, driver age is one of the demographic factors that made a significant contribution to taxi drivers' fatigue [42]. Regarding the role of initial training for taxi drivers (Question A2), due to the differences in the attitude and driving behavior of taxi drivers compared to other drivers, they should be given different trainings [25]. Such targeted "risk training" in driver testing has the potential to encourage safe driving behavior [43]. In addition, driving safety workshops and seminars could be scheduled to implement behavior and attitude change programs [30]. Also, reading books and educational booklets has the least impact on drivers' initial training from the point of view of managers. This can be examined from two perspectives. Firstly, because of the high workload of taxi drivers, they may not have sufficient time to study [20]. Secondly, many drivers do not have an academic education or have only primary education, which can affect the amount of study they can undertake [44][45][46]. The results of Question A3 showed that all proposed items were found to be highly important according to the relevance scale. Given that drivers' health condition can directly affect driving behavior and thus the likelihood of an accident [47,48], a system needs to be established to monitor drivers' health. This system can improve drivers' performance by identifying factors that threaten the health of drivers and thus reduce the likelihood of crashes [49]. A great number of drivers' health problems are related to poor working conditions, i.e., the length of the workday and the number of days worked in a week leave little time for recreational activities and lead to a sedentary lifestyle, which can cause health issues [6]. Furthermore, according to the answers to Question A6, medical examinations can be the most effective way to assess their health. Also in this question, fatigue management programs were considered important by the participants (mean score of 4.02). This is consistent with the results of Question A1 on risk factors. As fatigue in taxi drivers differs from that in drivers who drive long distances [50], training programs should be implemented to teach drivers how to reduce fatigue and stress. These programs can improve their working conditions [6]. With regard to incentives and punishments, according to Question A4, managers perceived that it is highly important to give special privileges and rewards to drivers. Given that, despite long hours of driving, most drivers' incomes are low and in some cases even lower than a basic income [51], providing these privileges and rewards may assist drivers to meet their financial needs. However, government support is needed to implement these incentive programs. Moreover, nonfinancial rewards can be given to the drivers in the form of vehicle consumables such as tires, discount coupons, and similar items. Consistent with the results of this question, bus managers considered bonuses and awards highly effective in the context of improving bus safety [33].
Further investigation is needed to better identify the benefits and challenges of incentive and punishment programs in the context of taxi safety. In Question A5, driver monitoring by GPS was the best way to evaluate drivers in the fleet. The advantages of this system are accurate information about drivers' current location and distance travelled, easy communication with the operator in case of any problems, and reporting of unauthorized speed or crashes. In addition, taxi GPS trajectory data, which contain massive spatial and temporal information on urban human activity and mobility, can provide valuable sources to investigate residents' travel demand and support future planning [52]. Using this system can also reduce the likelihood of physical attacks on the driver [53]. In Question A7, regarding the management of drivers' problems, financial rewards can be very effective, but this item does not have a high priority index. This could be due to the limited financial resources of the organization. Remedial training was also found to be effective, with high priority, in terms of managing drivers' problems. These trainings include consultation and safety workshops which are intended to address skill deficiencies. The results of Question B1 showed that managers perceived intersection crossing as the most important maneuver of the taxi in terms of risk and severity of accident consequences. According to studies conducted in various countries, including the United States, Norway, and Bangladesh, 34 to 41% of all car crashes occur at or near intersections [54][55][56]. This indicates that intersections are the most dangerous places in terms of the probability of a crash for cars on the street. Additionally, hard braking was found to be the second most important dangerous maneuver that can cause accidents. Establishing restricted zones for taxis to pick up and drop off passengers for a better point-to-point service can result in fewer sudden stops by taxis along the street that are unexpected for other drivers [57]. The results of Question B2 revealed that driver error had more priority than other causes of accidents from the point of view of managers. [58] reported that more than 70% of crashes are due to driver errors. Since the effect of driver error is decisive in many crashes [29], one of the solutions that can reduce crashes due to non-observance of a safe distance and driver error is to use driver assistance systems for driving at a safe speed and appropriate distance. This kind of system can effectively reduce reaction time, reduce interactions with other road users, and maintain a safe distance from the car in front [59]. To avoid human errors in driving and improve safety, autonomous or driverless cars are being investigated in depth in the literature [60], with researchers predicting that such vehicles will soon be on the market [61]. Also, [62] reported that potential consumers may be willing to pay more for using autonomous transportation modes if they become available in the future. The results of Question B3 showed that passengers' loading and unloading have a very high priority and relevance in terms of passengers' safety. In the opinion of experts in this field, ensuring the safety of taxi passengers is more significant than that of pedestrians. This observation is consistent with the findings from a study on bus transport management [33]. In the context of educating users, participants believed that starting education in schools has a better impact than theoretical or practical training programs.
From an early age, these trainings can familiarize people with the safety issues of taxis and reduce the risk. The use of media and public awareness programs can also be effective. Due to the rapid expansion of cyberspace and the potential to connect with many people, there is an opportunity to use cyberspace for the education of taxi drivers. The managers unanimously perceived GPS as the most effective in-vehicle monitoring system. The results of this question are consistent with the results of Question A5, where it was reported that the evaluation of drivers in the fleet by GPS received the highest score for relevance and priority ranking. Leveraging the various benefits of equipping a taxi fleet with GPS [53], most managers agree on the efficiency of this system. Moreover, due to the dynamic nature of the driving work environment, the organization should constantly monitor the taxi fleet to evaluate current strategies and identify potential risks [30]. On the other hand, [63] discussed the limitations regarding the use of GPS data in the context of taxis and mentioned that taxi trajectories represent a very small portion of urban mobility in most cities, and thus, there is a need to consider non-taxi users. The results of Question C2 showed that the start-inhibit system to detect open doors was the most effective driver assistance system. According to the results of Question B3, which considered passengers' loading and unloading highly crucial in terms of risk, this system can assure passengers' safety during these two maneuvers. In one study, managers of bus companies also found this system to be very effective in increasing bus safety [33]. Assessing Consensus among Managers. Given the W coefficient in Question A5 and Question C1, managers unanimously chose the use of GPS as the best in-vehicle monitoring system and the best way to evaluate drivers in the fleet. One of the problems of the Taxi Organization is the lack of accurate reports of drivers' crashes and the difficulty in receiving them from the police. Given the direct impact of drivers' driving behavior on the rate of crashes [29], these reports can be a good criterion for evaluating a driver. GPS can provide a good alternative for controlling and evaluating the drivers in the fleet, with features such as easy communication with the operator, providing drivers' current location, and automatic crash notification. GPS, in addition to the reports received from passengers, can provide the basis for creating a driver ranking system, similar to the rating systems of ride-hailing services such as Uber [64]. According to the results of the Kendall test for Question B3, passengers' loading and unloading have the highest priority in terms of safety. As an effective way to reduce the risk of this movement, managers in Question C2 unanimously chose the use of the start-inhibit technology to detect open doors. In addition to using this new technology, passenger safety training can be effective in reducing risk when getting in and out of a taxi. According to the responses in Question B4, these trainings should begin at an early age and at schools to have the greatest impact. Conclusions The taxi is one of the on-demand passenger transport modes that serve the transport demand in the metropolitan area of Tehran. However, various factors such as drivers' characteristics, environmental factors, or inadequate management programs of the taxi organization have brought about many safety-related problems.
Although the Tehran Taxi Organization has made substantial efforts to increase drivers' awareness of taxi safety issues, no specific study has been performed on the relevance and effectiveness of these programs. To tackle these issues, the opinions of managers and specialists of the Taxi Organization regarding taxi safety issues were assessed through a questionnaire survey. Given that these people are the main decision-makers for the organization's safety plans, their opinions and suggestions can determine their priorities for future decisions. Furthermore, Tehran has one of the largest taxi fleets in the world, and therefore, the results of this research can be a valuable resource for countries that have a transportation system similar to that of Tehran. The participants responded to questions related to the taxi driver, the road network and road users, the use of new technologies to increase vehicle safety, and management issues and programs. According to the findings of this study, the managers considered most of the items under those questions to be effective, which indicates the prevalence of safety issues in taxis. Participants stated that using technologies such as GPS will improve the safety of taxis. The use of GPS was unanimously chosen as the best in-vehicle monitoring system. In addition, participants believed that this system can be the best way to evaluate drivers in the fleet. The start-inhibit technology to detect open doors was unanimously evaluated as efficient by managers, considering the high risk of passengers' loading and unloading. Intersections are the most dangerous places in terms of the probability of taxi crashes, and failure to observe a safe distance from the car in front and driver error are the main causes of crashes in the participants' opinions. With respect to educating taxi users, starting education in schools has the most relevance and priority. In terms of driver risk factors, managers found fatigue and drowsiness to be very important, although these received less priority in driver health assessment programs. Although the managers considered the programs related to driver health assessment and the problems in this area very important, no consensus was seen regarding the priority of these items. Based on the results of this study, the following recommendations for taxi safety managers and policymakers to improve the safety of taxi drivers and their working conditions are suggested. It is to be noted that these recommendations are independent of the limitations of the organization, but can be adjusted based on existing conditions: (1) The use of technologies such as GPS to improve the safety of taxis can be very effective. Monitoring and evaluating drivers with intelligent control systems such as GPS can provide a basis for identifying drivers' problems and weaknesses. Based on this information, suitable specific training programs can be targeted to the driver to improve their driving performance. GPS can also be used to create a driver ranking system, similar to those used by ride-hailing services. Having said that, there may be privacy issues for drivers due to the use of GPS. However, drivers may be prepared to accept surveillance through GPS and a loss of privacy if it would mean that the taxi industry would be regulated better, with improved safety and financial rewards. Further, acceptance is more likely if taxi drivers are allowed to participate in the decision-making process and to have a say on who controls the system, who decides what information to collect, and how to use it.
(2) It is suggested to cooperate with the police authority to receive crash reports. Also, managers should submit a monthly safety report to evaluate the actions that are implemented. In addition, a person or a group with a specialty in vehicular safety can be assigned to evaluate crashes and assess safety. (3) In order to prevent drivers' fatigue, it is recommended to change drivers' working conditions by modifying drivers' working hours. It is also necessary to provide fatigue management training programs for drivers, which include training to recognize and prevent fatigue. Also, installation of drowsiness and fatigue detection systems in taxis may improve the safety of the drivers. (4) In addition to safety issues, anger management, stress and fatigue management, and addressing the possible problems of drivers' health, lifestyle, and eating habits are also important. For easier learning, non-face-to-face training through online media and cyberspace is recommended. (5) It is necessary to conduct periodic assessments of drivers' health. These programs can include periodic check-ups, addiction tests, drivers' psychological assessment, and counselling programs to improve their lifestyle and avoid bad habits. (6) Allocating financial and nonfinancial rewards to safe drivers can be effective in encouraging safe driving practices and improving drivers' driving performance. Nonfinancial rewards can be given to the drivers in the form of vehicle consumables such as tires, discount coupons, and similar items. (7) It is suggested to start trainings related to the safety of taxi users in schools. Moreover, the use of cyberspace and different online media can be effective for teaching these items. More research is needed on taxi driver training and policy decisions in the taxi industry. It would be good to develop programs to evaluate taxi drivers' work concerns to foster positive changes. Failure by taxi organization managers to assess suitable safety strategies could lead to a significant waste of resources and continued loss of life and property. Moreover, in our present study, we only analyzed the perceptions of managers and specialists. There is a need to consider taxi drivers' beliefs and evaluate them regarding the discussed issues so that a better taxi driver risk management framework could be developed to improve taxi drivers' safety. Conducting similar studies in other countries that have a similar cultural context could help to identify and prioritize different challenges to improve taxi safety. Data Availability The data used to support the findings of this study are available from the corresponding author on request through email kayvan.aghabayk@ut.ac.ir. Conflicts of Interest The authors declare that they have no conflicts of interest. Acknowledgments The authors thank the reviewers whose comments helped substantially improve the paper. The authors would also like to thank Dr. Alireza Ghanadan and Engineer Amir Rouhi for coordinating the interviews. Supplementary Materials The Survey Questionnaire of transport management. (Supplementary Materials)
Structure and Thermal Stability of ε/κ-Ga2O3 Films Deposited by Liquid-Injection MOCVD We report on crystal structure and thermal stability of epitaxial ε/κ-Ga2O3 thin films grown by liquid-injection metal–organic chemical vapor deposition (LI-MOCVD). Si-doped Ga2O3 films with a thickness of 120 nm and root mean square surface roughness of ~1 nm were grown using gallium-tetramethylheptanedionate (Ga(thd)3) and tetraethyl orthosilicate (TEOS) as Ga and Si precursor, respectively, on c-plane sapphire substrates at 600 °C. In particular, the possibility to discriminate between ε and κ-phase Ga2O3 using X-ray diffraction (XRD) φ-scan analysis or electron diffraction analysis using conventional TEM was investigated. It is shown that the hexagonal ε-phase can be unambiguously identified by XRD or TEM only in the case that the orthorhombic κ-phase is completely suppressed. Additionally, thermal stability of prepared ε/κ-Ga2O3 films was studied by in situ and ex situ XRD analysis and atomic force microscopy. The films were found to preserve their crystal structure at temperatures as high as 1100 °C for 5 min or annealing at 900 °C for 10 min in vacuum ambient (<1 mBar). Prolonged annealing at these temperatures led to partial transformation to β-phase Ga2O3 and possible amorphization of the films. Introduction Gallium oxide (Ga 2 O 3 ) is an ultrawide bandgap (UWB) semiconductor material that received great research interest in the last decade due to its outstanding material properties. Its UWB (~4.5-5.3 eV) and high theoretical critical electric field (~8 MV/cm) are suitable for fabrication of high-voltage and high-power electronic devices exceeding the capabilities of current power electronic device materials (Si, GaN, and SiC) [1][2][3][4][5]. Applicability of Ga 2 O 3 for high-power application can be well documented by Baliga figure of merit, which reaches a theoretical value of 3571 for Ga 2 O 3 , superseding its main competitors, such as GaN (667) and SiC (134) [3][4][5]. Up to now, breakdown field of 3.8 and 5.2 MV/cm was experimentally demonstrated for monoclinic (β) Ga 2 O 3 -based lateral metal oxide semiconductor fieldeffect transistor (MOSFET) and vertical heterostructure, respectively [6,7]. Concerning high-power switching applications, enhancement-mode β-Ga 2 O 3 MOSFET with a power figure of merit (breakdown voltage/specific ON-state resistance) of 192.5 MW/cm 2 was recently reported [8]. UWB of Ga 2 O 3 makes it also very attractive for optoelectronic devices e.g., solar-blind photodetectors, or as a host material for phosphors suitable for electroluminescent displays when activated by transition metals or rare earth elements [8]. Ga 2 O 3 crystalizes in several phases differing in bandgap and other material properties. The only thermodynamically stable phase is the monoclinic β-Ga 2 O 3 , which can be also produced as bulk crystals using melt-grown techniques [9,10]. Metastable corundum α-Ga 2 O 3 phase offers larger bandgap, wider capabilities in forming heterostructures [11], and several µm-thick layers can be grown by a simple and scalable Mist-CVD method [12]. A metastable hexagonal structure of ε-Ga 2 O 3 may allow for high-quality epitaxial layers grown on various hexagonal substrates, such as the often-used sapphire [13], but also GaN or SiC for enhanced heat spreading [14][15][16]. Recent studies discussed piezoelectric properties of ε-Ga 2 O 3 , which may give rise to future polarization-engineered heterostructures, similar to the case of III-N materials [17,18]. 
The concept of III-N and ε-Ga2O3 integration may thus offer great potential for the manufacture of power transistors with lower on-state resistance. In this case, however, the thermal stability of ε-Ga2O3 during III-N barrier growth will be one of the key limiting factors and needs to be addressed.

ε-Ga2O3 belongs to the P63mc space group, in which a 4H close-packed oxygen layer sequence contains disordered Ga atoms occupying octahedral and tetrahedral sites in 2:3 stoichiometry [19,20]. However, a detailed microstructural study of films identified as ε-Ga2O3 by X-ray diffraction (XRD) analysis pointed out that the real structure of the films was composed of nanoscale domains (5-10 nm in size) with an orthorhombic structure belonging to the Pna21 space group, also known as the κ phase [20]. In contrast to the ε phase, the Ga atoms in the κ-phase nanodomains are ordered, occupying octahedral and tetrahedral sites where edge-sharing octahedra and corner-sharing tetrahedra form zig-zag ribbons along the [100] direction [20]. The twinned nanodomain structure results in diffraction with pseudo-hexagonal symmetry, making the discrimination between the two phases using XRD extremely challenging. This is because the probing resolution of the XRD may be lower than the ordering range of the κ-phase nanodomains, revealing the averaged, disordered structure identified as ε-Ga2O3. As a result, standard symmetrical 2θ/ω scans are insufficient for distinguishing between ε- and nanodomain κ-phase Ga2O3. On the other hand, other commonly used XRD analyses, such as φ scans, can provide more conclusive results. Yet, systematic studies on the applicability of φ scans for unambiguous discrimination between ε- and κ-phase Ga2O3 are limited. Conventional transmission electron microscopy (TEM) faces a similar limitation when apertures larger than the nano-sized domains are used for selected area electron diffraction (SAED), and its resolution is lower than needed to distinguish the lattice periodicity using phase-contrast analysis. Consequently, while high-resolution TEM can be used for unambiguous identification of the κ phase [20], conventional TEM suffers from an inconclusive reciprocal lattice analysis of the observed material. This is why we will refer to ε/κ-Ga2O3 rather than phase-pure Ga2O3 polymorphs in the following.

Thermal stability represents an important concern for epitaxial films with a metastable structure, as device processing typically involves high-temperature steps, e.g., for Ohmic contact annealing. The thermal stability of ε/κ-Ga2O3 grown by MOCVD was examined by annealing at elevated temperatures in N2 and O2 atmospheres [14,28]. Xia et al. [14] reported ε/κ-phase stability up to 800 °C during annealing for 30 min in N2, while Fornari et al. [28] observed thermally stable films annealed at 700 °C for 3 h in N2 or O2 atmosphere using ex situ XRD. Detailed analysis using in situ differential scanning calorimetry revealed an onset of the initial phase transformation already at 650 °C. These results demonstrate the sufficiently robust thermal stability of ε/κ-Ga2O3 epitaxial films required for device processing. However, the thermal stability of ε/κ-Ga2O3 films under different conditions, such as vacuum or hydrogen, has not yet been studied.

In this work, we report on the crystal structure and thermal stability of ε/κ-Ga2O3 epitaxial films grown by LI-MOCVD. In particular, we investigated the applicability of XRD and conventional TEM to discriminate between ε- and κ-phase Ga2O3 films.
In addition, the thermal stability of the prepared ε/κ-Ga2O3 films was examined using in situ and ex situ XRD.

Materials and Methods

The Ga2O3 thin films studied here were grown by LI-MOCVD. This method represents a modification of MOCVD in which metal-organic chemicals dissolved in an appropriate solvent are injected into the vaporization part of the reactor via electromagnetic microvalves. Thermally decomposed vapors of the precursors and the reactant gas are transported into the deposition part of the LI-MOCVD reactor using a carrier gas, where deposition of the required material occurs on the heated substrate. The liquid-injection system offers several advantages, such as versatility of the deposited materials, excellent layer-thickness control via precise precursor dosing, and low vapor pressures of the delivered complexes [29,30]. More details on the LI-MOCVD growth of α- and β-Ga2O3 epitaxial films can be found elsewhere [27].

Si-doped Ga2O3 films with a thickness of 120 nm (measured by ellipsometry) were deposited on c-plane sapphire substrates at a deposition temperature of 600 °C. Gallium tetramethylheptanedionate (Ga(thd)3) and tetraethyl orthosilicate (TEOS), dissolved in toluene, were used as the Ga and Si precursors, respectively, with Ar/O2 carrier/reactant gas flow rates of 120 and 600 sccm. The temperature of the vaporization part of the reactor was set to 170 °C.

The crystal structure of the prepared films was studied by XRD using a Bruker D8 DISCOVER diffractometer equipped with an X-ray source with a rotating Cu anode operating at 12 kW. All measurements were performed in parallel-beam geometry with a parabolic Goebel mirror in the primary beam producing a beam divergence of ~0.03°. Symmetrical 2θ/ω scans were measured with a beam size of 1 × 6 mm². To suppress the strong diffraction from the sapphire substrate, the samples were tilted by an angle of 0.5° away from the exact diffraction position of the substrate. The azimuthal ordering of the layer structure and the orientation of the Ga2O3 lattices with respect to the sapphire substrate were analyzed by φ scans of selected diffractions. For these measurements, the beam size was reduced to 1 × 2 mm², and a parallel-plate collimator with an angular acceptance of 0.35° was inserted into the diffracted beam in order to decrease the effect of defocusing. A JEOL JEM 1200 EX transmission electron microscope was used for TEM analysis. A plan-view specimen was prepared by mechanical grinding and polishing of a sample from its substrate side, followed by Ar ion milling on a liquid-nitrogen-cooled holder. Optical characterization in the range of 240-900 nm was performed with a USB4000 spectrometer from Ocean Optics. The thermal stability of the Ga2O3 films was monitored in situ by XRD while applying a high-temperature annealing cycle in an Anton Paar DHS1100 domed hot-stage annealing chamber under vacuum (<1 mbar). Consecutive symmetrical 2θ/ω scans were performed at each annealing step within the temperature range ramping up from ~31 °C to 1100 °C and back down to ~66 °C using 50 °C increments. The time between measurements was kept close to 1 min to allow temperature stabilization. Surface morphology was investigated with an NT-MDT NTEGRA Prima atomic force microscope (AFM) in tapping mode. The resistivity of the films was examined by the room-temperature van der Pauw method. Despite the Si doping, the films were found to be highly resistive, similar to previous reports [31].
Crystal Structure of ε/κ-Ga2O3 Thin Films

For thin films with a strong preferred orientation, only diffractions from one family of lattice planes can be observed in symmetrical 2θ/ω diffraction patterns. This is the case of the monoclinic β-Ga2O3 layer grown on c-sapphire, where the wide-angle diffraction pattern exemplified in Figure 1 [27] contains only the diffractions −201, −402, and −603 at the 2θ angles 18.907°, 38.388°, and 59.091°, respectively (indexing according to PDF 00-041-1103). Similar diffraction patterns are produced either by c-oriented hexagonal ε-Ga2O3 or by c-oriented orthorhombic κ-Ga2O3. The 2θ angles of the diffractions 0002, 0004, and 0006 are 19.164°, 38.892°, and 59.918°, respectively, for ε-Ga2O3 (ICSD 236278) and 19.105°, 38.769°, and 59.717°, respectively, for κ-Ga2O3 (ICSD 14747). Identification of the phases can be conducted by careful determination of the positions of the diffraction maxima by standard X-ray evaluation software. In this way, the monoclinic β phase can be easily distinguished from the ε and κ phases. However, this straightforward method fails for the resolution of the ε and κ phases: as seen from the 2θ values listed above, the corresponding diffraction maxima occur at almost the same angles 2θ. In this case, the measurement of a φ scan of an appropriate diffraction hkl inclined by a particular angle χ from the sample normal can help to resolve the two phases. In general, for unambiguous identification of a particular phase, it is sufficient to find at least one diffraction that does not coincide with the diffractions of the other phases, i.e., the diffraction angle 2θ and/or the angle of inclination χ of the phase to be identified is well separated from the corresponding angles of the other phases. Unfortunately, this is not the case for the ε and κ phases of Ga2O3.
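The near-coincidence can be checked directly from the quoted 2θ values. The short sketch below converts them to interplanar spacings with Bragg's law; the Cu Kα1 wavelength is an assumption (the diffractometer described above uses a rotating Cu anode), and the peak positions are simply the ones listed in the text.

```python
import math

CU_KA1 = 0.15406  # nm, assumed Cu K-alpha1 wavelength

# 2theta angles (deg) quoted above for the c-axis reflection series
peaks = {
    "beta  -201/-402/-603":   [18.907, 38.388, 59.091],
    "eps   0002/0004/0006":   [19.164, 38.892, 59.918],
    "kappa 0002/0004/0006":   [19.105, 38.769, 59.717],
}

def d_spacing(two_theta_deg, wavelength=CU_KA1):
    """Bragg's law: lambda = 2 d sin(theta)."""
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

for phase, angles in peaks.items():
    print(phase, ["%.4f nm" % d_spacing(t) for t in angles])

# The epsilon and kappa 0006 maxima differ by only ~0.2 deg in 2theta,
# while the beta -603 maximum lies ~0.6-0.8 deg away, which is why the
# beta phase is easy to identify but the eps/kappa pair is not.
print("eps-kappa 0006 separation: %.3f deg"
      % (peaks["eps   0002/0004/0006"][2] - peaks["kappa 0002/0004/0006"][2]))
```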
Figure 1. Comparison of symmetric 2θ/ω scans of β-phase and ε/κ-phase Ga2O3 films grown by LI-MOCVD. Details on the growth of β-Ga2O3 films can be found in Ref. [27].

Both structures are tightly interconnected and are conformable on an atomic level [20]. The hexagonal Ga2O3 has higher symmetry and its structure can be described by a smaller unit cell with shorter in-plane lattice parameters. On the other hand, the lower symmetry and larger unit cell of the orthorhombic Ga2O3 result in a larger number of accessible diffractions. As will be shown below, however, for all measurable diffractions of the ε phase, at least one diffraction of the κ phase with almost identical 2θ and χ angles can be found.

In the following, the subscripts h (hexagonal) and o (orthorhombic) will be used to label the lattice parameters and the diffraction indices of ε-Ga2O3 and κ-Ga2O3, respectively. Comparing the in-plane lattice parameters a_h = 0.29036 nm, a_o = 0.50463 nm, and b_o = 0.87020 nm of the ε and κ phases of Ga2O3, one can reveal the relations a_o ≈ √3·a_h and b_o ≈ 3·a_h. Both lattices therefore almost perfectly coincide if the orientations of their base vectors are chosen as shown in Figure 2a. Red and blue arrows represent the basic translation vectors of the hexagonal and orthorhombic lattices, respectively; the translation vector magnitudes depicted in Figure 2a can be expressed as a_h = |a1| = |a2| = |a3|, a_o = |a|, and b_o = |b|. The projections of the unit cells onto the (0001) plane of the sapphire substrate are drawn as an orange rhombus and a light blue rectangle. For completeness, the orientations of the hexagonal in-plane axes of the sapphire substrate are also shown schematically as green arrows.

The relations between the hexagonal and orthorhombic base vectors can then be written in vector form (Equation (1)), where a, b and a1, a2 are the in-plane base vectors of the orthorhombic and hexagonal phases, respectively, and c_o and c_h are the corresponding vectors in the c direction. The coefficients of a1, a2 and c_h in Equation (1) can be arranged into the form of a matrix, through which the diffraction indices hkl_o and hkil_h of the orthorhombic and hexagonal lattices are related (Equations (2) and (3)). Note that the third index i of the hexagonal notation is omitted in Equation (3). Using this formula, one can easily find the indices of the diffractions of the orthorhombic phase that are equivalent to those of the hexagonal phase. A careful and systematic inspection of the calculated diffraction pattern of ε-Ga2O3 (see, e.g., ICSD 236278) reveals that all diffractions that are suitable for φ scans (i.e., all diffractions except hki0 and 000l) have their counterparts among the diffractions of κ-Ga2O3.

Three selected diffractions of the hexagonal phase and their orthorhombic counterparts are listed in Table 1, along with their diffraction angle 2θ, inclination angle χ, and the calculated modulus squared structure factors |F|². It is interesting to note that each diffraction of the hexagonal phase has two different counterparts in the orthorhombic phase. It can be seen that, e.g., the diffractions 10-11_h and 01-11_h are equivalent in the hexagonal phase, but the diffractions 201_o and 131_o of the orthorhombic phase are not. Although their angular parameters 2θ and χ are almost identical, the intensities differ significantly. This stems from the different symmetry of the two structures. From a practical point of view, the most important result is that there are always three diffractions, e.g., 10-15_h, 205_o, and 135_o, which can contribute to the same φ scan. In addition, in the layers grown on c-sapphire substrates, three orientation variants of the κ-Ga2O3 lattice rotated by 120° and 240° are usually developed as a consequence of the symmetry of the (0001) surface. Therefore, both the 205_o and 135_o diffractions of the orthorhombic phase contribute equally to all six maxima observed in the φ scan. The difference in the intensities of the 205_o and 135_o diffractions is then cancelled and all observed maxima have approximately the same intensity. This can be seen in Figure 2b, where a typical φ scan with six pronounced maxima is shown (red curve). The curve was recorded with the angular parameters of the 10-15_h diffraction, i.e., 2θ = 62.280° and χ = 36.36°, but it is highly probable that the 205_o and 135_o diffractions of the orthorhombic phase contribute to the observed maxima as well. An important conclusion from the above considerations is that the presence or absence of ε-Ga2O3 cannot be proven by measuring the φ scans of diffractions of the hexagonal phase. All φ scans can be equally well interpreted within the framework of the orthorhombic κ-Ga2O3 phase.

Table 1. Three selected diffractions of hexagonal ε-Ga2O3 (left panel) and their counterparts in orthorhombic κ-Ga2O3 (right panel). 2θ is the diffraction angle, χ is the inclination angle with respect to the sample normal, and |F|² is the calculated modulus squared structure factor.

The identification of the κ phase in Ga2O3 thin films is more useful. One can find several diffractions of the orthorhombic κ-Ga2O3 phase that are suitable for φ scans and that do not have a counterpart among the diffractions of the hexagonal phase. It is easy to see that the 122_o diffraction fulfils these criteria (see Figure 2a).
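The index bookkeeping behind Equations (1)-(3) can be illustrated with a short script. The sketch below assumes one particular, symmetry-equivalent choice of the orthorhombic in-plane basis, a = 2a1 + a2 and b = 3a2 with c_o = c_h, which reproduces the pairings quoted above (10-11 -> 201, 01-11 -> 131, 10-15 -> 205/135) and yields fractional hexagonal indices for 122_o; the coefficients in the paper's own Equation (1) may be written with a different but equivalent variant of this basis.

```python
import numpy as np

# Assumed basis relation (one symmetry-equivalent choice, see above):
#   a_o = 2*a1 + a2,  b_o = 3*a2,  c_o = c_h
# Miller indices transform with the same matrix as the basis vectors.
P = np.array([[2, 1, 0],
              [0, 3, 0],
              [0, 0, 1]], dtype=float)

def hex_to_ortho(hkl_hex):
    """Map hexagonal (h k l) (index i omitted) to orthorhombic (h k l)."""
    return P @ np.asarray(hkl_hex, dtype=float)

def ortho_to_hex(hkl_ortho):
    """Inverse mapping; fractional output means no hexagonal counterpart."""
    return np.linalg.solve(P, np.asarray(hkl_ortho, dtype=float))

for hkl in [(1, 0, 1), (0, 1, 1), (1, 0, 5), (0, 1, 5)]:
    print(hkl, "->", hex_to_ortho(hkl))          # e.g. (1, 0, 1) -> [2. 0. 1.]

print((1, 2, 2), "->", ortho_to_hex((1, 2, 2)))  # fractional: 122_o has no hex counterpart
```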
Inverting the transformation in Equation (2) confirms that the 122_o diffraction has no counterpart among the diffractions of the hexagonal phase, since it leads to fractional hexagonal indices. Within a single orientation variant, the 122_o maxima appear in the φ scan with alternating angular distances of 98.5° and 81.5°; considering three orientation variants, the expected total number of maxima is twelve. This is confirmed in Figure 2b, where the φ scan of the 122_o diffraction measured with 2θ = 33.345° and χ = 54.63° is shown by the blue curve. Three selected maxima of the φ scans that correspond to the diffraction vectors depicted in Figure 2a are marked by the corresponding blue and red arrows in the φ scan (Figure 2b).

From the above analysis, we can conclude that the φ scan of the 122_o diffraction can serve as an indicator of the presence of the orthorhombic κ-Ga2O3 phase in the layer. It is worth noting that the appearance of 122_o maxima in the φ scan also implies the existence of maxima in the 10-15_h φ scan due to the diffractions 205_o and 135_o, regardless of the presence of the ε phase. These diffractions overlap any possible contribution of the hexagonal 10-15_h diffractions, preventing the unambiguous identification of the hexagonal Ga2O3 phase. The only possibility to detect the ε phase is a complete suppression of the κ phase, i.e., the Ga2O3 layer has to be grown single-phased. In this case, maxima are detected only in the 10-15_h φ scan, while no maxima can be detected in the 122_o φ scan. Finally, from the recorded φ scans depicted in Figure 2b and from the orientation of the base vectors depicted in Figure 2a, the orientation relationships between ε-Ga2O3, κ-Ga2O3, and the c-sapphire substrate can be established.
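The quoted alternation of 98.5° and 81.5° between neighbouring 122_o maxima follows directly from the orthorhombic in-plane lattice parameters. A small sketch of that geometric check is given below; it assumes an ideal (001)-oriented κ lattice, and only the a_o and b_o values given above enter.

```python
import math

a_o, b_o = 0.50463, 0.87020  # nm, in-plane lattice parameters of kappa-Ga2O3

# In-plane azimuths of the reciprocal vectors of the {122}-type reflections
# of a single (001)-oriented domain (a* and b* are orthogonal for an
# orthorhombic lattice, with lengths 1/a_o and 1/b_o).
azimuths = sorted(
    math.degrees(math.atan2(k / b_o, h / a_o)) % 360.0
    for h in (1, -1) for k in (2, -2)
)

# Angular distances between neighbouring maxima of one domain
gaps = [(b - a) % 360.0 for a, b in zip(azimuths, azimuths[1:] + azimuths[:1])]
print(["%.1f" % g for g in gaps])   # alternating ~81.5 and ~98.5 degrees
```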
In the case of plan-view TEM, electron diffraction showed a rather complex pattern (Figure 3a). The SAED pattern shown in Figure 3a was taken from a thin part of the specimen, so no contribution from the substrate (including any possible double-diffraction substrate-layer spots) is involved. For the analysis of the epitaxial relation, a pattern from a thick part of the specimen was used (Figure 3b-d). To describe the pattern using the pure hexagonal ε phase, one would have to assume that the layer is composed of domains with seven different orientations: one with the epitaxial relation ε-Ga2O3 (0001)[10-10] || Al2O3 (0001)[11-20] and six with the epitaxial orientations ε-Ga2O3 (11-20)[0002] || Al2O3 (0001)⟨11-20⟩. In this way, one can describe only the principal high-intensity diffraction spots of the SAED pattern, while the others could possibly be explained by the presence of a special lattice ordering in the domains. However, such an explanation is a blind alley: the six orientations of (11-20) ε-Ga2O3 are not reflected in the XRD analysis of the layer, where only 0002 diffractions were observed in the 2θ/ω scans. Thus, the diffraction pattern cannot be explained by the pure ε-Ga2O3 phase.

Instead, the experimental SAED pattern can be fully explained by the presence of the pure orthorhombic κ phase. Figure 3c shows the indexation of one of the possible orientations of (001)-oriented κ-Ga2O3 and its relation to the Al2O3 substrate (its diffraction spots are indexed by blue numbers with the index A). A combination of three domain orientations of (001) κ-Ga2O3 mutually rotated by 60° can explain all observed diffraction spots in the pattern (Figure 3d). Because the κ phase is not centrosymmetric, one can expect six possible domain orientations. Yet, similarly to the XRD analysis, it is not possible to exclude the presence of the ε-Ga2O3 phase in the layer, because all potentially possible diffraction spots belonging to the (0001) ε-Ga2O3 phase would be superposed on the intense diffraction spots from the three (001) κ-Ga2O3 domain orientations. A possible way to improve the crystal quality of our films may be LI-MOCVD growth on a vicinal sapphire substrate with an intentional miscut angle from the (0001) surface. A similar approach was also applied for Ga2O3 epitaxy using standard MOCVD [32].
Figure 4 shows the UV-VIS transmittance spectrum of the ε/κ-Ga2O3 thin film on the sapphire substrate together with the transmittance spectrum of the bare sapphire substrate as a reference. A high optical transmittance of the ε/κ-Ga2O3 thin film is observed in the region of 300-900 nm. The optical absorption edge of the ε/κ-Ga2O3 thin film in the UV part of the spectrum indicates a high value of the optical band gap. The Tauc relation between the absorption coefficient (α) and the photon energy (hν) is defined as [33]

αhν = A(hν − E_g)^n    (4)

which was further used to construct the Tauc plot and to analyze the optical band gap (E_g) of the prepared layer (inset of Figure 4). In Equation (4), A is a constant and n is the power factor, with values of 0.5 or 2 associated with a direct or indirect optical transition, respectively [33]. Linear behavior of the (αhν)^(1/n) vs. hν Tauc curve with n equal to 0.5 revealed a direct optical transition in the ε/κ-Ga2O3 film. The extracted value of the direct optical band gap is E_g = 4.92 eV, which is in good agreement with the published values of 4.9-5.02 eV for κ-Ga2O3 [34,35] and 4.89-5.0 eV for ε-Ga2O3 films [36,37].
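The band-gap extraction via Equation (4) amounts to plotting (αhν)² against hν and extrapolating the linear region to zero. A minimal sketch of that procedure is given below; the synthetic transmittance data, the fitting window, and the reflection-free approximation α ≈ −ln(T)/d are illustrative assumptions, not the exact evaluation used for Figure 4.

```python
import numpy as np

H_C = 1239.84  # eV*nm, photon energy E = h*c/lambda

def tauc_direct_gap(wavelength_nm, transmittance, thickness_nm=120.0,
                    fit_window_eV=(5.0, 5.4)):
    """Estimate a direct optical gap from a Tauc plot, (alpha*h*nu)^2 vs h*nu.

    Assumes alpha ~ -ln(T)/d (reflection and interference neglected) and a
    user-chosen linear fitting window just above the absorption edge.
    """
    e = H_C / np.asarray(wavelength_nm)                                  # photon energy (eV)
    alpha = -np.log(np.clip(transmittance, 1e-6, None)) / (thickness_nm * 1e-7)  # cm^-1
    y = (alpha * e) ** 2                                                 # direct transition, n = 1/2
    lo, hi = fit_window_eV
    m = (e >= lo) & (e <= hi)
    slope, intercept = np.polyfit(e[m], y[m], 1)
    return -intercept / slope                                            # x-intercept -> E_g

# Illustrative synthetic spectrum with a ~4.9 eV absorption edge
wl = np.linspace(240, 320, 200)
e = H_C / wl
alpha_true = np.where(e > 4.9, 2e5 * np.sqrt(np.clip(e - 4.9, 0, None)), 0.0)  # cm^-1
T = np.exp(-alpha_true * 120e-7)
print("E_g ~ %.2f eV" % tauc_direct_gap(wl, T))
```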
Thermal Stability of ε/κ-Ga2O3 Thin Films

The thermal stability of the prepared ε/κ-Ga2O3 films was first investigated by a high-temperature heating/cooling cycle using in situ XRD 2θ/ω measurements. Figure 5 shows the evolution of the ε/κ-Ga2O3 0006 diffraction (59.87° at 30 °C) of the film heated from 31 °C up to 1100 °C, followed by a 25-minute-long dwell time and cooling down to 66 °C in vacuum. It can be inferred that the ε/κ phase is stable up to 1100 °C for the given temperature ramp rate, while it starts to degrade after ~5 min of exposure to 1100 °C. Note that the shift of the 0006 diffraction toward lower angles during heating corresponds to the thermal expansion of the lattice. The onset of the degradation is highlighted in Figure 6a, showing the detailed time evolution of the XRD pattern at an annealing temperature of 1100 °C. For annealing times >5 min, the intensity of the ε/κ-Ga2O3 0006 diffraction starts to gradually decrease and diminishes after 20 min of annealing. Instead, the −603 diffraction of β-Ga2O3 (~59°), with a much weaker intensity compared to the ε/κ phase, evolves after 10 min of annealing and its intensity remains stable for the rest of the dwell time as well as the cooling cycle. It is also interesting to compare the XRD patterns measured before and after the heating cycle, shown in Figure 6b. These data confirm a clear degradation of the ε/κ phase and its partial recrystallisation to β-Ga2O3. In addition, a strong decrease in the XRD peak intensity between the as-deposited ε/κ phase and the degraded β phase (by a factor of ~20) suggests a strong deterioration of the crystalline quality, also indicating notable amorphization of the degraded films.

Based on the relatively fast onset of degradation of the film annealed at 1100 °C, we performed prolonged annealing experiments at lower temperatures. As-deposited films were annealed at 700, 800, and 900 °C in vacuum successively for 10, 20, and 40 min and were analyzed using ex situ XRD and AFM after each annealing step. The XRD results (summarized in Table 2) show that the ε/κ-Ga2O3 films remain stable during annealing at 700 and 800 °C for the entire annealing time examined. Samples annealed at 900 °C remained stable after the first annealing cycle (10 min) but degraded after the second annealing cycle (30 min cumulative time). The 2θ/ω scans measured before and after the second annealing cycle at 900 °C, shown in Figure 7, suggest that the film was transformed to β-Ga2O3 with two dominant crystal orientations, namely −201 and −301.
Similar to the high-temperature experiments, a strong drop in the diffraction peak intensity between the as-deposited and degraded films was observed (also by a factor of ~20), indicating possible amorphization of the degraded films.

AFM was used to investigate the surface morphology of the as-grown, annealed, as well as degraded thin films. Figure 8 shows the typical surface morphology of the as-grown ε/κ-Ga2O3 layer (a), films annealed at 800 °C for 10 (b), 30 (c), and 70 min (d), and films annealed at 900 °C for 10 (e) and 30 min (f), i.e., the degraded film. Additionally shown are selected AFM line scans performed along the solid white lines to better visualize the surface features formed during the annealing cycle. The as-grown samples showed a very smooth surface with a root mean square (RMS) roughness of ~0.8 nm. Small line-shaped features (widths of 50-200 nm) with a height of ~1 nm can be observed on the surface. For the sample annealed at 800 °C, negligible changes in surface morphology were observed after 10 min of annealing time (Figure 8b). The prolongation of the annealing time resulted in a notable increase in the RMS roughness to 1.2-1.3 nm, which can be attributed to the emergence of well-recognizable surface features with a step height of about 2 nm for annealing times >10 min. Similar features were also formed for the film annealed at 900 °C for 10 min, i.e., the non-degraded sample with the ε/κ phase; here the height increased to about 3 nm. The increase in the surface roughness with annealing time and temperature before film degradation can result from a localized loss of oxygen, e.g., from interstitial sites, or from degasification of surface contaminants.

Interestingly, after degradation of the ε/κ phase during annealing at 900 °C for 30 min (Figure 8f), the film became smoother (RMS roughness of ~1 nm) compared to the previous annealing steps. On the other hand, a clear change in surface morphology was observed, where surface striation can be inferred from the AFM image (Figure 8f). Based on the XRD results (Figure 7), a possible explanation of this effect is predominant amorphization of the film and partial recrystallization of the epitaxial ε/κ phase into the polycrystalline β-Ga2O3 phase.
Discussion

The thermal stability of ε/κ-Ga2O3 films grown by MOCVD was studied by Xia et al. [14] and Fornari et al. [28] in N2 and in N2 or O2 atmospheres, respectively. Xia et al. [14] reported ε/κ-phase stability up to 800 °C for furnace annealing in N2 for 30 min. A mixture of the ε/κ phase and the β phase was observed after annealing at 850 °C for 30 min, and the films eventually transformed completely to the pure β phase when subjected to annealing at 900 °C. Fornari et al. [28] found similar behavior, where the XRD results suggest thermally stable films after 3-hour-long annealing at 700 °C in N2 as well as O2 atmosphere, while complete conversion to the pure β phase with deteriorated crystal quality took place after annealing at 900 °C. Interestingly, 3-hour-long annealing at 800 °C led to amorphization of the film, which was attributed to an intermediate disordered step of the crystal structure conversion. Finally, a detailed study using in situ differential scanning calorimetry revealed that the initial phase transformation took place already at 650 °C.

Our results are fully in line with the previous studies, extending the thermal stability studies of ε/κ-Ga2O3 also to vacuum ambient. In particular, they clearly demonstrate that the thermal budget (i.e., the high temperature applied during a certain time) rather than the temperature itself is important in assessing the thermal stability of an epitaxial film with a metastable crystal structure. While our film retained its structure only for 5 min at 1100 °C and 10 min at 900 °C, it can be expected that prolonged annealing at 800 °C would also lead to phase conversion. Further, vacuum annealing represents a somewhat specific condition for the thermal stability of metal-oxide films, as degasification of the film can occur. This can explain the much lower crystal quality of the β-phase Ga2O3 after the phase transformation and the strong amorphization of the film as compared to those reported previously [14,28]. It is also worth mentioning that the thickness of the epitaxial film may affect its thermal stability.
This was observed for α-Ga2O3 MOCVD films, where thinner films show the onset of phase transformation/thermal degradation at higher temperatures compared to thicker films [38]. This was attributed to the thermal stress caused by the difference in the lattice thermal expansion coefficients of α-Ga2O3 and the sapphire substrate, where the thicker films reach the critical stress level at lower annealing temperatures than the thinner films. A similar effect can be expected to take place also for ε/κ-Ga2O3.

Conclusions

In summary, Si-doped ε/κ-Ga2O3 layers were grown on c-plane sapphire using LI-MOCVD. As deduced from XRD and TEM, highly ordered films with several orientation variants of the ε/κ-Ga2O3 lattices were grown, and the orientation relationships between the two phases and the c-sapphire substrate were established. It was demonstrated that the presence or absence of ε-Ga2O3 cannot be proven by measuring the φ scans of diffractions of the hexagonal phase. All φ scans can be equally well interpreted within the framework of the orthorhombic κ-Ga2O3 phase. On the other hand, a single-phase hexagonal layer can be identified by XRD if the hexagonal maxima are detected only in a 10-15_h φ scan while no maxima are detected in the orthorhombic 122_o φ scan. Similarly, electron diffraction in conventional plan-view TEM can clearly identify the presence of the κ-Ga2O3 phase by a more complex SAED pattern in comparison to one from the ε-Ga2O3 phase. However, this method cannot exclude the presence of ε-Ga2O3 if six possible domain orientations (or minimally three of them) of the κ-Ga2O3 phase are present in the sample. The prepared films show enhanced thermal stability; layer degradation via partial phase conversion to β-Ga2O3 and possible amorphization of the film was observed after ~5 min of annealing at 1100 °C or 10 min of annealing at 900 °C. These results are very promising and open new possibilities, e.g., towards the growth of various III-N barrier layers on the Ga2O3 channel layer for the processing of heterostructure FETs.

Conflicts of Interest: The authors declare no conflict of interest.
MEASURING HUMAN RESOURCE ATTITUDE USING ORGANISATIONAL THEORY OF RELATIONSHIP: THE WAY FORWARD

This paper argues that classical socialisation theories generally discuss the organisational structures rather than the newcomer's psychology of relationships in any organisation and contributes to the socialisation stage model. In doing so, this research proposes an Organisational Theory of Relationship (OTR) for understanding the relationships of human resources in any organisation in four stages, namely fascination, contention, adaptation and adoration. The four stages have been examined in an empirical setting based on the data collected from 270 participants. Using structural equation modelling, the measurement model validity was ascertained and several hypotheses were tested. The findings reveal that all employees in any organisation, intentionally or unintentionally, undergo some or all of these four stages.

INTRODUCTION

Attitudes and behaviours play pivotal roles in establishing organisational culture, as shown by the available literature. The literature can be divided into two groups, namely social psychologists (Kinder & Sear, 1985; Krosnick & Alwin, 1989) and developmental psychologists (Sigelman & Shaffer, 1991). These authors particularly discuss the measures of attitude development, but many other researchers examine them explicitly in the framework of organisational behaviour or organisational socialisation (Arnold et al., 1992). This paper aims to establish a theory of relationship to socialise human resources in organisations. Boudreau and Ramstad (2003) argue that intellectual capital has become a sustainable competitive advantage for an organisation. According to Steel (2002), Lee and Mitchell (1994), Othman and Shkuri (2015), and Parsons (2018), the employee turnover process and the temporal interaction between work attitudes and the organisation have not been fully captured. Meanwhile, Steel (2002) worked on the theory of attitude in relation to employee turnover. Such awareness can provide a better understanding of the interplay between employee retention and organisational settings. Employee retention mostly relies on organisational socialisation (OS) theories. Organisational socialisation (OS) is defined as a process of learning the ropes (Schein, 1968, p. 2), which enables employees to acquire the competence and knowledge necessary to improve their profession or organisation. This process is acquired through vicarious observation and the active participation of new entrants in an organisation on the way to becoming active members (Parsons, 2018). Furthermore, OS is a pivotal process of communicating the organisational culture and its acquisition (Harrison & Caroll, 1991; Schein, 1990). In addition, OS is also recognised as an essential organisational function (Fogarthy & Dirsmith, 2001) that refers to involvement in the organisational culture (Inzerille & Rosen, 1983; Meek, 1988). OS is a way to produce full-fledged and productive human resources (Louis, 1980; Van Maanen & Schein, 1979). Furthermore, OS can increase the organisational proficiency of employees in understanding the organisational culture, norms, roles, expectations and responsibilities (Ashforth et al., 2007). Khalil et al. (2021) connect the relationship of age with socialisation, whereas Cai et al. (2020) investigate the impact of social media on newcomers' socialisation.
The authors agree that socialisation has a significant impact on adjusting new employees to the job, group and organisation (Bauer et al., 1998; Fisher, 1986; Moreland & Levine, 2001; Saks & Ashforth, 1997).

Stages of Socialisation

Many authors have developed processes or stages of socialisation. This can be traced back to Feldman (1976), who first explained the three stages of socialisation: anticipatory socialisation, accommodation, and role management. The first stage, as per Feldman (1976), refers to the anticipation of a newcomer, in which realistic expectations are required for bringing new talent into the organisation. The second stage settles the newcomer into the organisational work setting. This stage even applies to organisational conflict resolution. This model explicitly outlines the processes of socialising a newcomer, but the discussion of counterparts' relationships in this model is scarce. Similarly, Buchanan (1974) depicts a three-stage early career model for the training of employees from the first to the fifth year of employment, but its emphasis is on training and development rather than relationship management as a tool of socialisation.

Another remarkable theory was the three-stage entry model by Porter et al. (1975), in which they categorise the stages of socialising an employee into the pre-arrival, encounter, and change/acquisition stages. First, the pre-arrival stage encompasses the basic idea of socialisation that starts before the arrival of a newcomer in an organisation. The encounter stage starts from the first working day of an employee. This stage clarifies the difference between expectations and realities within the organisation. Dean et al. (1985) describe this stage as a reality shock in which a newcomer's expectations come into conflict with reality. Although the duration of this stage is not empirically defined, Louis (1980) roughly estimates it as the first six to nine months. The last stage depicts mutual acceptance (Schein, 1978) or settling in (Feldman, 1976), when an employee gains mastery of the work and fulfils the demands of the job.

Schein (1978) articulates another psychological three-stage model in which the entry stage refers to the arrival of a newcomer. At this stage, the newcomer's information is not based on personally observable facts; therefore, the newcomer tries to get accurate information. After getting appropriate information, the newcomer moves up to the second stage of socialisation, in which he or she accepts the organisational reality and deals with personal conflicts and resistance to change. After successful completion of this stage, the newcomer moves to the third stage of mutual acceptance, which leads to full mutual acceptance between the organisation and the employee.

Another notable socialisation theory is Wanous' (1980) integrative model of socialisation. It elaborates that a newcomer experiences four stages while becoming an active member of any organisation. The first stage, as described by Wanous, is called confronting, which is similar to the stage of encounter (Porter et al., 1975) and entry (Schein, 1978). At this stage, the new entrant confronts the organisational reality and confirms or disconfirms their own expectations against it. At this first stage, unlike in Porter et al.'s (1975) and Schein's (1978) models, conflict arises while the newcomer compares the personal and organisational values and climates. At the same stage, the new employee discovers which behaviours are rewarded or punished.
The second stage in this model is achieving role clarity, in which a newcomer copes with resistance to change, defines interpersonal relations, and copes with organisational structure and ambiguities. The third stage moves the newcomer to locate himself in the organisational context by learning the behaviours of others according to organisational expectations. At this stage, conflicts are normally resolved, work problems lead to organisational commitments, and newcomers establish interpersonal relationships by embracing new organisational beliefs. Effective socialisation is the last stage, in which organisational trustworthiness and commitment are improved, high organisational satisfaction is attained, and intrinsic motivation is enhanced (Wanous, 1980).

Ashforth et al. (2007) rightly mention that although socialisation stage models are of remarkable importance, they have not received much attention, and there is a need for valuable research on stage modelling. Early research supports the stage models of socialisation, but there is a gap in developing a universal stage model. Fisher (1986) explicitly notes the need for a comprehensive universal stage model that can be equally applicable to all organisations, jobs and employees irrespective of diverse organisational cultures. Furthermore, Bauer et al. (1998) note that stage models are mainly focused on characterising each stage instead of discussing how the learning and adjustment changes occur. Moreover, we find that most of the stage models of socialisation mainly discuss the processes instead of discussing the level of relationship.

We found a U-curve theory of adjustment by Lysgaard (1955) that was developed for the entirely different purpose of cross-cultural adaptation of expatriates or sojourners. Although this model was designed specifically for sojourners, it provides an insight into social relationships. Lysgaard's theory explains that an individual comes to an entirely new host culture; this first stage is called the honeymoon stage, in which the individual is fascinated by the new culture, environment, and relations. After a while, the individual comes to a stage of culture shock, in which the new entrant's internal frustration occurs. The third stage is named the adjustment stage, in which the person starts adopting the new culture and transforms their behaviour based on the requirements of the host country. The fourth and last stage is called the mastery stage, in which the person increases their ability to function more effectively in the host country. The U-curve theory and the culture shock theory by Oberg (1960) were developed in the intercultural environments of expatriates and sojourners working abroad, but Coffman and Harris (1984) and Pedersen (1995) generalise the adjustment stages to other social roles of cultural exchange, including divorce, office jobs, retirement, nursing, bereavement, economic change, college life, medical school, paraplegia, psychiatric residency, careers and haemodialysis. Black and Mendenhall (1991) relate this adjustment theory to the relationship of marriage. This argument leads to the socialisation of newcomers in organisations in relation to the U-curve theory focusing on the cross-national environment. Afterward, Lesser and Peter (1957) propose another three-stage model of adjustment, including the spectator, involvement, and coming-to-terms stages.
Although these models are for sojourners who work in a new country or region, these adjustment theories provide good insight into relationship development in an entirely different environment. In addition to the stage models, Rojas (2017) explicitly mentions that there are few socialisation-based functions and mechanisms involved in the link between the organisation and the newcomer's socialisation. In his empirical experiments, he supports socialisation functions and mechanisms over strict stages or states of socialisation. Bullis (1993) mentions that, to date, most research on organisational socialisation has focused on either socialisation or individualisation. She predicts that future studies should concentrate on the interplay of socialisation stages because it will provide researchers with a variety of integrated analyses and results. This paper is an attempt to focus on the interplay between socialisation and individualisation of organisational newcomers, in which each stage has been investigated using statistical methods. This scarcity leads us to develop a new organisational theory of relationship (OTR) that can fill the gap of a new stage model as mentioned by previous authors, in particular Fisher (1986). This model explicitly discusses the levels of relationships, not only the structures. Based on this model, organisations may set strategies and tactics for each level of the OTR.

In 2020, the COVID-19 pandemic changed the way we live and interact with others and, specifically, affected organisational socialisation. Chadha (2020) rightly notes that organisations have had to rapidly adjust their ways of working during the COVID-19 pandemic. Furthermore, during this pandemic, organisational cultures have been under much strain due to the change in social relations and the shift of work from the workplace to home and vice versa. Authors are also offering tips and solutions to deal with the post-COVID-19 situation in relation to work from home. McDonough et al. (2020) mention three tips for keeping employees from burning out, including setting physical and social boundaries, maintaining temporal boundaries, and focusing on the most important work. This emphasis shows that modern authors are looking for new ways of maintaining good organisational culture and socialisation. However, new studies show that the honeymoon period of working from home is declining in post-COVID situations and employees are realising the uncertainty of this arrangement (English, 2020; Boddy, 2020). In this context (both pre- and post-COVID-19), the proposed OTR model can be a good remedy for organisational socialisation.

Organisational Theory of Relationship (OTR)

The organisational theory of relationship (OTR) captures the interplay between organisational socialisation and the individualism of an organisation's human resources. Furthermore, it helps to understand organisational socialisation in relation to interpersonal relationships. It also provides a foundation for building a better socialisation structure in an organisation.

Fascination Stage

This is the first stage of socialisation, when an employee enters the organisation. Wanous (1980) calls this stage confronting, Porter et al. (1975) refer to it as encounter, Schein (1978) labels it entry, and Lysgaard (1955) and Oberg (1960) address it from a cross-cultural perspective as the honeymoon stage. Boswell et al. (2005) specifically propose that there is a honeymoon phase when a newcomer enters an organisation.
They call this a honeymoon effect, whereby the early experiences of an employee in an organisation are mainly positive. At this stage, the new employee experiences a new culture, system, structure, norms, people, hierarchy of command and environment. Van Maanen and Schein (1979) note that organisations usually show their most favourable side to candidates during the recruitment process. At first, newcomers are motivated to obtain information and knowledge of the organisation to reduce their uncertainties about the job and professional accommodation (Bauer et al., 2007; Kim et al., 2005). To make their place in the organisation, newcomers have a high level of interest in contributing their professional expertise (Kristof-Brown et al., 2005). Walker et al. (2013) argue that newcomers use the tactic of social interaction, which enables an employee to gain the trust of the employer and thereby influence work attitudes. From the perspective of culture-shock theorists, at this stage everything is new, exciting and fascinating (Irwin, 2007). At first glance of the organisation, newcomers become enchanted. They have a feeling of surprise and wishful thinking about the work setting and job. New entrants believe that the job and organisation meet their values and expect positive experiences from the other side (Louis, 1980). This surprise could be positive or negative. Just like a newly married couple, a newcomer sees the glamour of a new culture, work settings, norms, people, chain of relations and command and symbols, whereas the 'natives' or existing employees are courteous, polite and welcoming. Frese (1982) mentions that newcomers become fascinated by the new job, want to get more information about the job and organisation and adjust themselves to this new environment and culture. This is also a stage of enthusiasm and fascination (Ward et al., 1998, p. 278), where newcomers are ambitious to prove that their selection was right. This stage can also be described as the tourist's experience. In a cross-cultural environment, tourists mostly fall into this stage, where they are fascinated, excited and enthusiastic; this stage ends when they complete their short stay, but longer-stay people may move to other stages. From an organisational perspective, employees who move to other organisations for a short period for training or visits may experience the same fascination. A few other authors describe the fascination stage as a stage of new exciting sights and sounds with excitement (Black & Mendenhall, 1991, p. 226), a high level of satisfaction and trust (Klineberg & Hull, 1979), or a euphoric stage (Oxenfeldt & Kelly, 1968). Levinthal and Fichman (1988) relate this stage of fascination or honeymoon to commitment and believe that this excitement leads to employee commitment. Harris (2017) claims that newly hired employees fall into a honeymoon phase in which they have limitless possibilities, receive plenty of attention and learn a lot from their new organisation and co-workers. Boswell et al. (2005) mention that the honeymoon effect may be weaker if an employee moves to a more mediocre organisation or if there is a poorer fit with the new job or organisation. Moreover, people who have plentiful job opportunities in the industry might exhibit a stronger honeymoon effect. Their initial euphoria is reinforced by the perception that the new job is an excellent alternative to the others available.
They also mention that the timing of a job change may increase or decrease the honeymoon effect. Furthermore, they studied the honeymoon effect during job changes within the same organisation, in relation to internal transfers or promotions. This stage may be affected by some job- or organisation-specific factors, including a change of position, job or place. The findings of Boswell et al. (2005) were explicitly for high-level managers, but there is a need to investigate them among other levels of employees. Kaplan (1995) discusses fascination explicitly as stimuli that attract one's interest. He argues that fascination drives involuntary attention (James, 1892) and that there is no need for directed attention if the fascination process is appropriate and adequate. James (1892) argues that involuntary attention (which Kaplan calls fascination) is the best attention because it is effortless, whereas directed attention, as Berto et al. (2010) mention, creates attentional fatigue. Therefore, fascination inherently grips the attention and people adopt it effortlessly. Kaplan (1995) further divides fascination into two parts: soft and hard fascination. Soft fascination is characterised by natural settings such as clouds, sunsets, snow, leaves in the breeze and ocean waves (Kaplan & Kaplan, 1989). On the other hand, hard fascination mainly entails loud noises, fast motion and/or other strong stimuli that attract attention (Kaplan & Berman, 2010). Fascination should therefore have a significant relationship with socialisation, as indicated in the above literature. Hartig et al. (1996) and Laumann et al. (2001) developed instruments for measuring fascination, but these are mainly related to general socialisation. Hence, we developed a hypothesis to test the relationship between fascination and socialisation. Hypothesis 1: There is a positive relationship between fascination and socialisation. Contention Stage Authors claim that entering a new organisation and adjusting to a new culture is a stressful exercise (Nelson, 1987). Stressors may be positive, including potential rewards, challenges and opportunities, or harmful, including tension-producing transition, feelings of loneliness and social isolation (Nelson, 1987; Katz, 1978; Van Maanen & Schein, 1979). Schein (1971) describes such a stressor as performance anxiety, whilst Allen et al. (1999) and Kammeyer-Mueller et al. (2012) view it as burnout and ambiguity, respectively, that affect the task mastery of newcomers. Organisational stress emerges from organisational demands and the mismanagement of coping techniques (Bhagat & Beehr, 1984; Quick & Quick, 1984). Jackson et al. (1987) find that there is uncertainty involved with new tasks, jobs, roles and organisational relationships that produces stress in newcomers. Similarly, Saks et al. (2007) and Bauer et al. (1998) relate the organisational stress of a newcomer to role conflict and role ambiguity. Nifadkar and Bauer (2016) confirm that there is limited literature examining the effect of relationship conflict on a newcomer. They contribute by specifically discussing an adjustment model for handling relationship conflict in the context of newcomer adjustment. After the encounter with an organisation, newly appointed employees become involved in acquiring new information that can clarify their roles and in adjusting their behaviours to the expectations of the organisation.
During this involvement, the newcomers interact with the organisational members and work settings (Yang, 2008). Furthermore, they may act proactively, and this behaviour may change the status quo (Crant, 2000), which may surprise others and increase the possibility of a conflict. Louis (1980) explicitly illustrates this stage by discussing the psychological context of surprise. This surprise is associated with a conflict between the employees' expectations and organisational realities. Pondy (1969) believes that even though conflict in relations is unpleasant, it is inevitable in organisational relations. Pondy even accepts dysfunctional conflict as a reality and argues that it should be accepted as a necessary condition. Furthermore, Pondy (1992) mentions that conflict provides higher-quality decision making through diverse opinions. This argument implies that there is naturally a chance of conflict in all relations; therefore, it should be taken as a natural phenomenon. These conflicts arise due to asymmetric interdependence of the groups (Kumar et al., 1995), personal incompatibility (Peterson & Behfar, 2003), stress and threats (Thomas, 1992), and disputant behaviours related to jealousy, hatred, anger and frustration (Ross & Ross, 1989). The stage of fascination no longer exists when newcomers face new types of relations, the hierarchy of command, work settings and culture. They move to a condition of unrest, unpleasant feelings, stress, or conflict as the glamour of the job and organisation gives way to realities. In the stage of fascination, all mountains look green and the newcomer experiences the honeymoon-hangover effect (Boswell et al., 2005). Nevertheless, these feelings are temporary and the newcomer's idealistic attitudes collide with realities. This can be illustrated by referring to a newly married couple in which each person has romantic and glamourised feelings for the other and fantasises about love and affection. After a while, however, a few differences emerge in the new couple, such as in views on society, fashion and lifestyle, and these differences directly affect the idealistic feelings each has for the other. Similarly, there are differences between the idealism of the newcomer and the realities of an organisation. Some newcomers leave the job and the organisation due to these differences (Griffeth et al., 2000; March & Simon, 1958; Steers & Mowday, 1981), some spend a long time, even their whole organisational tenure, enduring these conflicting feelings, and some move to the next stage of adaptation. Adaptation Stage In the discussion of the adjustment of sojourners, Oberg (1955) describes that after the honeymoon period a person moves to a stage of adjustment in which the newcomer sheds the anxiety, although there are moments of strain. This strain is removed after a complete grasp of social intercourse. At this stage, a person not only accepts the new culture with its food, habits and customs but starts to enjoy them. In an organisational context, adjustment is typically a process of the individual acquiring knowledge about the job and organisation and adjusting to the work settings (Fisher, 1986; Van Maanen & Schein, 1979). Previous studies show that during this stage of adaptation, newcomers' behaviours of resistance are controlled by others, specifically the administrators of the new organisation (Callister & Wall, 2001; Ferlie et al., 2005; Zabusky & Barley, 1997).
Many authors regard this stage as the adjustment, adaptation, or rehabilitation of the previous contention or conflict stage, in which newcomers give their full professional expertise. Furthermore, these authors urge organisations to adopt the best possible options for making this process smooth so that socialisation objectives can be accomplished (Katz, 1978; Nelson, 1987; Oberg, 1960; Schein, 1971; Van Maanen & Schein, 1979). Nevertheless, there are a few strains (Oberg, 1960), one of which is the fear of rejection by the organisation; thus, employees in such conditions keep silent on conflicting matters (Deutsch & Gerard, 1955). This stage can be better understood with the example of a newly-wed couple who have passed the honeymoon period of the fascination stage, resolved the conflicts and differences of the contention stage and are now adjusting to each other. There are differences between them, but they have ways of avoiding conflicting matters, understanding the limitations, and compromising when either finds a weakness in the other. Job adaptation theories explain this stage more specifically. Adaptive behaviour or performance is characterised as a person's ability to adapt to dynamic workplace situations (Hesketh & Neal, 1999). This behaviour requires employees to adjust to work settings (Pulakos et al., 2000). Therefore, authors highlight the importance of adaptive behaviour in different ways (Allworth & Hesketh, 1996; Charbonnier-Voirin & Roussel, 2012; Hollenbeck et al., 1996; London & Mone, 1999). Pulakos et al. (2000) revolutionise adaptive behavioural theories with a new model of eight dimensions of adaptive performance, including learning new tasks, technologies and procedures, handling work stress, and demonstrating interpersonal, cultural and physically oriented adaptability. Different authors later developed many scales for measuring employees' adaptation. Nevertheless, there was a need to understand the other side of adaptive theories. Hulin (1991) argues that job adaptive behaviour emerges in response to unpleasant and unsatisfactory working conditions. Moreover, Boswell et al. (2014) believe that there is a correlation between adaptive work behaviour and job insecurity. They also argue that adaptive behaviours, for the employee and the organisation, may have negative consequences and that an individual's responses may lead to defensive behaviour. The literature leads us to focus on the psychological adjustment process of newcomers in organisations. It was hypothesised that there is a correlation between the stage of adaptation and socialisation. Hypothesis 3: There is a positive relationship between adaptation and socialisation. Rattner and Danzer (2006) argue that adoration, admiration, or reverence are essential tools for the growth and development of a human personality. Many authors use admiration and adoration as mixtures of more essential elements of emotion. Darwin (1890/2007), the father of evolutionary theory, connects admiration with astonishment and claims that this feeling of astonishment is associated with the feeling of pleasure and a sense of acceptance.
The authors describe adoration as the elevation of morale (Algoe & Haidt, 2009; Haidt, 2003), a form of respect (Li & Fischer, 2007), a feeling that someone is superior and exemplary (Solomon, 1976/1993), a state of being inspired (Thrash & Elliot, 2004), an intense and passionate attachment strongly associated with romantic love (Schindler et al., 2013), emotional relation and happiness (Mees, 1985), a predominantly outer focus (Smith, 2000), transcendent emotion (Peterson & Seligman, 2004) and a state of actualisation (Ortony et al., 1988). A few authors connect admiration with physiological changes (Immordino-Yang et al., 2009; Immordino-Yang & Sylvan, 2010). There is scarce literature discussing adoration and admiration in an organisational context; only a very few studies are available on leadership commitment (Burns, 1978; Carlton-Ford, 1992; Conger et al., 2000). Adoration Stage Love theories also support the stage of adoration, and authors cover the association between organisational love and employee engagement. Aron and Aron (1996) relate adoration and admiration to love. Baer (2007) describes love as care for others and commitment to their welfare and claims this as real love. Koestenbaum (2002) describes the surrender of one's freedom to the other party as love. Tasselli (2018) explicitly argues that love can shape the behaviour of employees ontologically, which directly impacts their organisational roles. He further emphasises that employees change their behaviours after understanding the organisation, whilst love can provide them with a continuous and authentic realisation of themselves and their association with others and the organisation. Similarly, researchers have noted that adjustment and socialisation are correlated; we developed a construct of adaptation including all aspects of adjustment and adaptive behaviour to hypothesise this correlation for testing the model. Hypothesis 4: There is a positive relationship between adoration and socialisation. The socialisation authors believe that there is a correlation between adoration and socialisation, and we hypothesised to find this relation in an organisational framework, specifically for newcomers (Bauer et al., 1998; Ashforth et al., 2007; Fisher, 1986). Figure 1 represents the conceptual framework of the study. METHODOLOGY Population and Sampling This research was conducted on employees working in Pakistani organisations from different sectors. For the sampling population, we chose students of executive MBA programmes from four higher education institutions (three universities and one institute). These students were selected as a convenience sample: they study during the evenings or weekends and work in different sectors of industry, which made them suitable for examining the stages experienced by employees. They were able to provide the best possible concerns of their population. Using a Google form as the research instrument, the survey was distributed through Facebook, LinkedIn and Twitter accounts (mainly of the on-job students). The survey was conducted during the spring and fall semesters of 2019.
Among the 300 completed questionnaires, thirty (30) were omitted for several reasons: the respondents were only students (not working anywhere), entrepreneurs or unemployed, they submitted incomplete responses, or they had held a job for less than two years (rejection ratio of 10%). For our research, the respondents had a few years of experience and had faced many stages of socialisation within their organisations. In responding to the statements of the independent variable called Fascination, they were instructed to keep their first six to nine months in mind, since Louis (1980) implies that this period lasts for six to nine months. Hence, the rejection ratio of invalid responses was 5.5 percent. Measures and Measurements The measures of the construct for socialisation were adapted from Haueter et al. (2003). The construct contains ten items, labelled S1, S2, S3, S4, S5, S6, S7, S9 and S10. The measures for fascination were derived from Hartig et al. (1996) and Laumann et al. (2001). Hartig et al. (1996) developed a perceived restoration scale (PRS) that provides a comprehensive set of descriptive measures for the construct of fascination. Among the sixteen items of Hartig et al. (1996), we adapted nine descriptive items for our questionnaire. Secondly, we found another set of measures by Laumann et al. (2001); among their twenty-two descriptive items, we found five most suitable for our questionnaire. The measures of both sets of authors were essentially designed for general sociology and we needed to focus on the organisational framework; therefore, we selected only those most suited to the organisational environment. The construct had a total of fourteen items, all measured on a Likert scale of 1 to 5. A set of statements was compiled for measuring the attitudes of employees within their first six months (as suggested by Louis, the honeymoon period lasts for six months). The construct for the stage of contention was adapted from Nifadkar and Bauer (2016), while sixteen items were adapted from Charbonnier-Voirin and Roussel (2012) for the construct of adaptation and eight items were adapted from Schindler et al. (2013) for the construct of adoration. We employed PLS structural equation modelling (PLS-SEM) to analyse the hypothesised relationships.
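As an illustration of the measurement-model checks reported in the next section, the short sketch below computes composite reliability (CR), average variance extracted (AVE) and a Fornell-Larcker comparison from standardised factor loadings. The loadings and the inter-construct correlation shown are hypothetical placeholders, not the study's data; the formulas are the standard ones, CR = (Σλ)²/[(Σλ)² + Σ(1−λ²)] and AVE = Σλ²/n.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    lam = np.asarray(loadings)
    errors = 1.0 - lam ** 2          # error variance of each standardised indicator
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardised loadings
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

# Hypothetical standardised loadings for one construct (e.g. Fascination)
fascination_loadings = [0.72, 0.78, 0.81, 0.69, 0.75]

cr = composite_reliability(fascination_loadings)
ave = average_variance_extracted(fascination_loadings)
print(f"CR = {cr:.3f} (threshold 0.70), AVE = {ave:.3f} (threshold 0.50)")

# Fornell-Larcker check: sqrt(AVE) should exceed the construct's correlations with other constructs
corr_fascination_socialisation = 0.55   # hypothetical inter-construct correlation
print("Discriminant validity:", np.sqrt(ave) > corr_fascination_socialisation)
```

In a full analysis the same computation would be repeated for each construct before the structural paths are interpreted.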
PLS-SEM is suitable for theory confirmation; however, it requires normality of the variables at the individual and group level (Hair et al., 2016). RESULTS The reliability and validity of the measurement models were determined through confirmatory factor analysis (CFA). Hair et al. (2016) mention that factor loading, average variance extracted (AVE) and composite reliability (CR) are essential tests for evaluating the convergent validity and internal reliability of a measurement model. Hence, the threshold values for factor loading, AVE and CR were 0.70, 0.50 and 0.70, respectively. Nevertheless, the factor loading value can be relaxed to 0.50, keeping the AVE in view (Hair et al., 2016). Furthermore, the Fornell-Larcker (FL) criterion (Fornell & Larcker, 1981) was employed for examining the discriminant validity of the constructs. The criterion compares the square-rooted AVE values with the inter-construct correlations; the square-rooted AVE should be higher than the inter-construct correlations to establish discriminant validity. Our results for convergent validity and internal reliability showed that all constructs had values greater than the above-mentioned thresholds, demonstrating convergent validity. Likewise, the Fornell-Larcker criterion showed that all constructs had discriminant validity. For ascertaining overall model fit, we used the SRMR and CFI. The value of SRMR was 0.061 (threshold <0.08) and the CFI was 0.912 (threshold >0.90), which showed a strong model fit, whereas the R value of the model was 0.63, showing a strong goodness of fit (see Table 1, Table 2 and Table 3). DISCUSSION AND IMPLICATION The objective of the study was to test the applicability of a four-stage model in the context of Pakistan. To do so, we employed structural equation modelling on data collected from 270 respondents from the service and manufacturing sectors. The results confirm the arguments of Boswell et al. (2005), Van Maanen and Schein (1979) and Kaplan (1995) that newcomers in an organisation encounter the stage of fascination, also known as the honeymoon period. They are fascinated to become part of an organisation they had envisaged for themselves. The new culture, welcoming managers and co-workers, set of relations, work settings and so on initially inspire and fascinate them. The stage of fascination can be stronger if a person's dissatisfaction with the previous organisation was high, if he or she faced a longer period of unemployment, or if he or she had dreamed about being part of the new organisation. This stage can be unrealistic or euphoric. Human resource departments use induction, orientation and introductory programmes to provide realistic knowledge, enabling newcomers to exchange their unrealistic approach for a realistic one. At this stage, organisational knowledge, co-workers' cooperation and a clear organisational culture can be useful remedies for adjusting a newcomer. The period of fascination soon ends when differences between the newcomer's expectations and the organisation emerge. Our results affirm the claims of Lahana et al. (2019), Gross and Guerrero (2000), Bauer et al. (1998) and Van Maanen and Schein (1979) on the stage of contention. At this point, a newcomer starts to disagree with or mentally oppose the differences; thus the state of mind changes to an internal conflict or crisis. Simultaneously, a newcomer has two choices: leaving the organisation or arguing about the differences.
We observe that many people in organisations may become stuck in the stage of conflict because of opposition, criticism, blaming or whistle-blowing attitudes. Most of the time these people do not leave the organisation, due to the unavailability of new opportunities, and keep working with the stress and the conflicting relations. The results also confirm earlier findings (2012) that the stage of conflict or contention leads to the stage of adaptation, in which a person finds ways to give way on conflict-related matters. At this stage, newcomers succeed in removing the anxiety, stress, conflict and unrest and adjust to the new conditions, along with some personal grievances. On the other hand, some of them start thinking differently: they begin extracting positives from negatives and move to the stage of adoration. This is a stage of absolute love, where a person is psychologically attached and ready to work in out-of-the-box situations. Finally, this research also confirms the impact of adoration and admiration on the socialisation of newcomers, as mentioned by Schindler et al. (2013) and Rattner and Danzer (2006). The results reveal that all four stages (Fascination, Contention, Adaptation and Adoration) exert a significant positive impact on socialisation. Employees entering an organisation become adjusted in any of these four stages and spend the majority of their organisational life in the particular stage(s). The results show the importance of recognising the socialisation stage of newcomers, their expectations of others and their psychological bonding with the organisational culture. Thus, a practical implication of this model is that organisations should focus on the social components in dealing with newcomers. The organisation and the newcomer may benefit if such interactions transpire among employees. In addition, socialisation tactics can work better if they are documented well and mechanisms are adequately defined. This study sheds light on the relationship between the psychological contracts of employees and socialisation tactics and recommends further examination of this concern in future research. CONCLUSION This paper investigates whether the four (4) stages of fascination, contention, adaptation and adoration play any role in developing and maintaining employees' organisational socialisation. The role of these stages, particularly in shaping the behaviours of individual employees, has often been neglected in favour of more cognitive explanations. The literature review of this paper also identified a theoretical gap that is addressed in this research. This research revealed that the four stages outlined earlier play important roles in socialising newcomers to accept new experiences and meanings which either reinforce or change their values, goals and identities. In sum, organisations and their human resource departments are working hard to hire and retain the best talent. The model put forth in this paper can help classify employees into the four stages and set organisational strategies to make them more productive. The model reflects the psychology of newcomers as it demonstrates the significance of understanding the newcomers' psychological stages of socialisation, their aspirations towards others and their relations with the organisation.
2021-05-10T00:04:30.941Z
2021-01-27T00:00:00.000
{ "year": 2021, "sha1": "128d3333e2591402ace05105e242a6fefcf46127", "oa_license": "CCBY", "oa_url": "http://e-journal.uum.edu.my/index.php/ijms/article/download/ijms.28.1.2021.9409/2902", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4001214f220e94a147b4056db09a7809ef3f4209", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Psychology" ] }
259751366
pes2o/s2orc
v3-fos-license
POULTRY PRODUCTION NEXUS-FEW: A STUDY ABOUT THE EFFECT OF PUBLIC AND PRIVATE INVESTMENTS ON THE EFFICIENT USE OF WATER Purpose: This paper aims to analyse how investments in innovation, research and development in poultry production affect the efficient use of water. Theoretical framework: To understand water stress, the literature review addresses content related to the analysis of the balance between water and food production (Nexus-Few), as well as content that supports the analysis of the effects of investments in research and development in poultry production and on improving the efficient use of water. Method/design/approach: The data were collected from the FAOSTAT database, covering investments in research and development (government and private), poultry production, water stress and efficient water use (US$/m3). The period of analysis is from 2005 to 2019, and priority was given to the countries with the highest poultry production for which data were available (Brazil, India, Mexico, Myanmar and Russia). Results and conclusion: The results reveal that there is interdependence between water and poultry production. On the one hand, factors that can lead to water stress (climate, competition for water, food production) can also affect poultry production, as the greater the water stress, the greater the risk for poultry production that depends on these resources. On the other hand, public and private investments in research and development (R&D) can be crucial to reduce water stress, improve water use efficiency and improve poultry production performance. Research implications: The study shows poultry farmers that water stress (demand greater than supply) interferes with poultry production, and that private or public investment in R&D is important for the efficient use of water. The paper reveals to governments and companies what actions are necessary to encourage the sector to seek the Nexus-Few (balance between water and food production). INTRODUCTION The rational use of water resources has been the subject of the United Nations (UN) Global Agenda 2030, in the face of the challenge of water availability for the survival of the inhabitants of the Earth (Vanham et al., 2013). At the same time, water stress (the relationship between water demand and availability) is identified as one of the main bottlenecks for sustainable economic growth (Moro et al., 2018; Grejo & Lunkes, 2022). The sixth UN Sustainable Development Goal (SDG-6) sets as a global target to substantially increase water use efficiency in all sectors and to ensure sustainable withdrawals and supply of fresh water for all (UN, 2015). This goal requires countries and production systems to commit to the efficient use of water; to do so, processes and products must be changed so that this resource is consumed efficiently. The relevance of achieving the SDG-6 targets is justified by the urgency of the environmental problems experienced today. Empirical evidence reveals that the consequences of unbalanced development have led the world to suffer from physical factors such as extreme weather and natural risks, affecting the supply of water and energy as well as food production (Challinor et al., 2010; Bandara & Cai, 2014; Famiglietti, 2014; Schmitt et al., 2020). Specifically, the poultry production chain is water-intensive, from breeding to processing (SASB, 2021).
In addition, companies in the sector generate wastewater, or effluents, and may face higher operating costs or loss of revenue due to water scarcity, increased per capita consumption, poor water management, climate change, and changes in regulations (Govoni et al., 2021). What is known is that inefficient water consumption and the discharge of effluents into rivers and streams have diminished the supply of surface water (rivers) and groundwater (water table) in various parts of the world (Rahmani et al., 2023). In this sense, the relationship between food production, climate effects and the management of social and economic systems affects the availability of water and hinders economic development (Govoni et al., 2021; Brito, 2018; Lathuillière et al., 2018). On the other hand, investments in innovation, research and development (R&D) have contributed to improving sustainable water resource management, as they make processes and products more efficient (Rosa et al., 2020; Rahmani et al., 2023). Thus, investments in R&D are needed to reduce pressure on water resources, either by reducing demand or by increasing water supply (Michetti et al., 2019). There is an emerging literature on the Nexus-Few (the nexus between food, energy, and water), understood as a systems-based approach that explicitly recognizes the food, energy, and water subsystems as interconnected and interdependent (Bazilian et al., 2011; Wolfe et al., 2016; Foran, 2015; Hoekstra, 2017; Cai et al., 2018; Rosa et al., 2021). The difficulty pointed out in the literature lies precisely in the fact that food production depends on water throughout the value chain, while at the same time it generates tensions for natural resources, either through the amount of clean water needed for production or through the load of effluents that this activity releases into the natural environment (Lawford et al., 2013; Liu et al., 2016; Cai et al., 2018; Rosa et al., 2021). This generates water stress and puts into question both the capacity of the natural environment to provide clean water and regenerate, and the capacity of the productive system to provide food in the quantity desired for human consumption (Hoekstra, 2017; Cai et al., 2018; Rosa et al., 2021). Understanding water stress (Govoni et al., 2021; Brito, 2018; Lathuillière et al., 2018) and the ability to intervene to reduce this tension (Rahmani et al., 2023) are therefore crucial for sustainable development. From this context emerges the research question: How do investments in innovation, research and development (R&D) in the production of broiler birds affect the efficient use of water? To identify the extent to which investments have boosted the management and reduction of water stress, this research aims to analyze how investments in innovation, research and development (R&D) in poultry breeding affect water efficiency. This study makes a practical contribution by pointing out to managers the impact of water stress on the performance of poultry production, as well as by seeking to identify how public and private investments in innovation, research and development (R&D) boost the efficient use of water in productive systems. It is hoped, therefore, to better understand the limits and possibilities between the natural and productive systems, and to support managers in decisions on investments that lead to a reduction in water stress and contribute to sustainable development.
The result of this study allows the government to draw up public policies to promote innovation through investment in R&D in the poultry sector, considering that these investments can improve water efficiency; that is, they can be determinant for improving the relationship between the capacity of the natural environment to make water available and the demand from the productive sector for water. Furthermore, private investments in R&D can contribute to the efficient use of water. Thus, investing in new technologies that promote the rational use of water (e.g. drinking fountains, reuse of water for cleaning, use of remote sensing and information technology) allows for the improvement of water management practices and processes. Production of Broiler Birds The production of poultry plays an important role in the global economy, being the main source of protein for poor and emerging countries. Its derivatives are consumed in various cultures, promoting food and nutrition security for the world population (Mottet & Tempio, 2017). According to data from FAOSTAT (2016), there are a total of 21 billion birds on the planet, which represents approximately 3 birds per capita. Currently, the largest producer is the United States of America with 20 million tons per year, followed by China, the European Union and Brazil, with 18, 13 and 13 million tons per year respectively. The sector has been growing at an average rate of 5% per year for the last 50 years, making it the culture with the highest growth in recent decades. The per capita consumption of broiler birds reflects this figure, rising from 2.88 kg in 1961 to 14.13 kg in 2010. Figure 1 shows the growth of production by region over this 50-year period (Mottet & Tempio, 2017). The main cause of the increase in productivity was the technological change of recent decades, which transformed production from loosely bred animals to confinement in aviaries, where the control of feed, temperature and diseases caused the increase of the poultry population (Narrod & Tiongco, 2012). Much of the world's production of poultry comes from specialized producers (92%), which make intensive use of technology, infrastructure and equipment. Only 8% of production is carried out in the domestic mode, where subsistence culture and local trade predominate (FAOSTAT, 2016). Most of the funding in this sector comes from private investments. However, there is growing public interest in the impacts on the environment and collective health. Poultry production requires large amounts of water, land and feed and contributes to climate change by emitting greenhouse gases, either directly (through poultry and dung production) or indirectly (through feed production, deforestation, soil recovery) (Mekonnen & Hoekstra, 2014; Mottet & Tempio, 2017). The production of feed for the poultry industry also causes the expansion of arable land, causing a loss of biodiversity. The use of natural resources, in conjunction with pollution caused by pesticide use, soil nutrient losses and climate change, poses challenges for the poultry sector, requiring the creation of public policies and regulations. Nexus-FEW Economic and population growth in modern economies is confronted with environmental constraints.
Demand for water, food and energy is estimated to increase by 40%, 35% and 50% respectively by the year 2030, causing serious supply problems and fostering the use of more efficient technologies for food production (National Intelligence Council, 2012; Shariff et al., 2022). The Nexus-FEW (NF) has been debated by the academic community since the 1970 Oil Crisis. This concept represents the interaction between food, energy and water as finite and interdependent resources, as shown in Figure 1 (Zhang et al., 2019). Water resources are needed for the production of almost all types of energy. Energy resources are essential for the transportation and treatment of water, and both (energy and water) are indispensable for food production (Sukhwani et al., 2019). In 1986, with the organization of the second international United Nations conference on Ecosystems and the Food-Energy Nexus, the debate on the topic intensified, introducing economic, political and social aspects into the context of the NF (Yu et al., 2021). However, studies of this era considered the dual aspect of the nexus, analyzing the relationship between water and energy or water and food (Zhang et al., 2019). Only in the last decade has the discussion on the interrelationship between water, food and energy taken on international proportions, with the conference entitled "Security Nexus FEW - Solutions for a Green Economy" encouraging more than three hundred initiatives around the world during the period 2011 to 2015 (Bonn, 2011). More recently, research on the NF focuses on food, energy and water security through emerging technologies and policy tools (Zhang et al., 2019). Water Stress and Performance in Poultry Production The interdependence of food production and water has aroused the interest of the scientific community and government regulators as a critical theme for global sustainable development. Some of the literature has treated this topic as the Nexus-Few (Hoekstra, 2017; Zhang et al., 2018). At the World Economic Forum (2008), the systemic view of the Nexus gained popularity, where global challenges related to economic development were recognized from the perspective of the water-energy-food Nexus (Nexus-Few). This system is understood as the interactions between different subsystems (Sanders & Webber, 2012) that interact and compete at the same time, as they are coupled in their supply, processing, distribution and use of the natural resource for food production (Lawford et al., 2013; Liu et al., 2016; Cai et al., 2018; Rosa et al., 2021). Among the different concerns presented in previous studies on the NF are those that discuss how water stress affects and is affected by food production (Hoekstra, 2017; Cai et al., 2018; Rosa et al., 2021). These studies recognize that one of the concerns arising from the understanding of this interdependence between water and food lies in understanding how water stress occurs and can be minimized (Rahmani et al., 2023). Water stress is defined as the intensity of water withdrawal, identified by the ratio of the total freshwater withdrawn by the main economic sectors to the total renewable freshwater resources available (SDG-6, UN, 2022); when demand is greater than supply, the risk related to the availability of drinking water increases. The imbalance between water withdrawal and availability turns water stress into one of the most serious environmental problems of the present time (Van Beck et al., 2011).
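As a minimal illustration of this indicator, the sketch below computes the water stress ratio exactly as defined above (total freshwater withdrawals divided by the available renewable freshwater resources); the numerical values are hypothetical placeholders, not FAOSTAT figures.

```python
def water_stress_percent(withdrawals_km3: float, renewable_resources_km3: float) -> float:
    """Water stress: freshwater withdrawn by the main economic sectors
    as a share of the total renewable freshwater resources available."""
    return 100.0 * withdrawals_km3 / renewable_resources_km3

# Hypothetical example: 74 km3 withdrawn against 8,647 km3 of renewable resources
stress = water_stress_percent(withdrawals_km3=74.0, renewable_resources_km3=8647.0)
print(f"Water stress: {stress:.2f}%")   # higher values indicate greater pressure on available resources
```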
Agricultural displacement and climate change in recent decades have led to increased water stress, specifically through reduced rainfall patterns and amounts, water quality, river flows, and water retention (Govoni et al., 2021). The production of poultry can also contribute to water stress, due to water consumption and emissions of liquid effluents (Brito, 2018; Lathuillière et al., 2018); that is, poultry production affects and is affected by water supply (Mekonnen & Hoekstra, 2014). Given this interdependence, it is fundamental to recognize the water challenges faced by the sector, since water stress (lack of natural recharge of aquifers and surface waters) can negatively affect the performance of poultry production (Rahmani et al., 2023). This is because a reduction in the supply of water for poultry production leads to increased bird mortality, reduced bird size and increased costs for water treatment, among other consequences (Govoni et al., 2021; Xu et al., 2021). Given the understanding that water stress affects poultry production, the first hypothesis of the research emerges: H1: Water stress negatively influences performance in poultry production. Public and Private Investments and Efficient Water Use Recognizing that the availability of water resources and food production interact and compete, there is a concern for how to manage this relationship and ensure better resource efficiency (Bazilian et al., 2011; Wolfe et al., 2016; Foran, 2015; Hoekstra, 2017; Cai et al., 2018; Rosa et al., 2021). What is known is that the practices and technologies that drive water resource efficiency, and the resulting productivity gains, are commonly due to investments in research and development (R&D), supported by appropriate strategies and implementation practices that meet the interests and priorities of the food sector (Yunusa et al., 2018). Public investments make it possible to carry out actions that are considered important for the prosperity of society, with the objective of detecting, evaluating and mitigating environmental and biological risks, contesting technical barriers, and subsidizing the formulation of public policies (Baa & Chattoraj, 2022; Bassi et al., 2013), such as the expansion of water reserves, increased checks on irregular discharges of effluent or the unauthorized use of water, and the expansion of fiscal and financial incentives. Private investments in innovation or R&D (e.g. improvement of processes to reduce the generation of effluents, development of new products to reduce water consumption) make it possible to predict risks, reduce waste, reduce water demand or increase water supply (Bassi et al., 2013; Hoekstra, 2017; Zhang et al., 2018; Michetti et al., 2019). Given the understanding that public and private investments contribute to the efficient use of water, we have the following research hypotheses: H2a: Public investments have a positive effect on water efficiency. H2b: Private investments have a positive effect on water efficiency. METHOD To test the three hypotheses of this study, multiple linear regression models A, B and C were developed. The data were obtained from the database of the Food and Agriculture Organization of the United Nations for the period 2005 to 2019. The sample considered the countries with the highest production of broiler birds for which data were available in the FAOSTAT database (Brazil, India, Mexico, Myanmar and Russia).
In all, 29 observations related to investments in governmental research and development were obtained and used for models A and B, and 12 observations containing data on investments in research and development carried out by the private sector were used to test the hypothesis of model C. On the FAOSTAT platform, agricultural and food production indicators were compiled from 5 different data sources. The variables used, data sources and descriptive statistics are available in Table 1. Looking at the minimum, maximum, average and standard deviations, we can see a large variability in the data, indicating different behavior between the countries that make up the sample. Data analysis was performed by multiple linear regression with support from the Jupyter Notebook and the Python programming language. The broiler bird production variable was the dependent variable of model A and was therefore used to test hypothesis H1. The Water Use Efficiency Index (IEUA) was used as the dependent variable of models B and C, and was used to test hypotheses H2a and H2b, respectively. The independent variables included water stress (models A, B and C), poultry production (models B and C), investments in government research and development (models A and B) and private R&D investments (model C). Finally, the area of the country (Area) was used as the control variable. In the treatment, the data underwent logarithmic transformation, aiming to reduce the influence of outliers. Figure 1 presents the equations of the regression models. Results of Model A The first model sought to test whether water stress negatively influences performance in the production of poultry. In the analysis of the results, model A was found to comply with the assumptions of multiple linear regression (no heteroscedasticity, no autocorrelation and normally distributed residuals). A high coefficient of determination was also observed (R² = 0.927). Table 2 presents the results of the statistical analysis. Corroborating H1, it was observed that the water stress variable negatively affects the production of poultry in the countries that make up the sample. It was also possible to verify the positive impact of government investment on the production of broiler birds. The following figure presents the equation with the angular coefficients provided by the multiple linear regression: ln(PoultryProduction) = 29.36 − 0.11 ln(WaterStress) + 0.24 ln(GovR&D) + 0.02 ln(Area) + ε (Figure 4. Equation of regression model A. Source: Prepared by the authors, 2022). The coefficients of Model A allow us to conclude that a 1% increase in the level of water stress brings about a 0.11% decrease in the production of broiler birds. An increase of 1% in public R&D investments, on the other hand, results in an increase of 0.24% in poultry production levels. The results show that with a high level of water stress, poultry production can be directly affected, but with government intervention the problem can be minimized. That is, when in a poultry production region demand is greater than the supply of water, the level of risk in the availability of drinking water for all rises (SDG-6, UN, 2022), and this can lead to a reduction in the supply of water for poultry production, with consequences such as increasing bird mortality, reducing bird size or increasing the costs of water treatment (Govoni et al., 2021; Xu et al., 2021).
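The estimation strategy described above (an OLS regression on log-transformed variables, so that coefficients read as elasticities) can be sketched in a few lines of Python; the column names and the small illustrative values below are assumptions for demonstration, not the FAOSTAT series used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel extract: one row per country-year (placeholder values)
df = pd.DataFrame({
    "poultry_production": [13.5e6, 13.9e6, 3.2e6, 3.4e6, 1.6e6, 1.7e6],
    "water_stress":       [1.5,    1.6,    30.1,  31.0,  4.5,   4.7],
    "gov_rd":             [1.2e9,  1.3e9,  0.9e9, 1.0e9, 0.3e9, 0.3e9],
    "area":               [8.5e6,  8.5e6,  3.3e6, 3.3e6, 0.7e6, 0.7e6],
})

# Log-transform everything so that coefficients read as elasticities (Model A)
y = np.log(df["poultry_production"])
X = sm.add_constant(np.log(df[["water_stress", "gov_rd", "area"]]))

model_a = sm.OLS(y, X).fit()
print(model_a.params)      # intercept and elasticities
print(model_a.rsquared)    # coefficient of determination
```

Heteroscedasticity and autocorrelation checks on the fitted residuals would then be applied before interpreting the coefficients, as reported for Model A.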
Water and food production are thus two interrelated factors, as suggested in the Nexus-Few literature (Sanders & Webber, 2012; Lawford et al., 2013; Liu et al., 2016; Hoekstra, 2017; Cai et al., 2018; Rosa et al., 2021). The government's understanding and management of this factor can be decisive, especially when this understanding leads to specific investments to reduce water stress (Yunusa et al., 2018). Results of Model B In the analysis of the Model B results, a high coefficient of determination (0.957) was observed, with no heteroscedasticity, no autocorrelation and normally distributed residuals. Table 3 presents the results of the statistical analysis, considering data on investment in government research and development. From the FAO data it was observed that the variables water stress, poultry production and government innovation (spending on research and development in agriculture, livestock and forest recovery) positively affect the efficient use of water. The equation with the angular coefficients provided by the multiple linear regression is rewritten below: ln(IEUA) = −78.42 + 3.38 ln(WaterStress) + 2.42 ln(PoultryProduction) − 0.09 ln(WaterStress) × ln(PoultryProduction) + 0.08 ln(GovR&D) − 0.65 ln(Area) + ε (Figure 5. Equation of regression model B. Source: Prepared by the authors, 2022). The coefficients of the multiple linear regression allow us to conclude that a 1% increase in the level of water stress leads to a 3.38% increase in the water use efficiency index (IEUA), a 1% increase in poultry production results in a 2.43% increase in the IEUA and, finally, a 1% increase in R&D resources leads to a 0.08% increase in the IEUA index, at the 95%, 99% and 95% confidence levels, respectively (the p-values indicate significance at the 10% level for water stress, at the 1%, 5% and 10% levels for poultry production, and at the 5% and 10% levels for R&D resources). However, when there is an interaction between the poultry production and water stress variables, a negative coefficient (−0.0976, significant at the 10% level) is noted, probably because of the difficult management of water resources under these two conditions. This result may indicate that an increase in the two variables may also modify the angular coefficient of the regression line, decreasing the positive effect on the level of efficiency of water resources. It can be inferred that the presence of public investments may improve water efficiency; that is to say, government investments or incentives in R&D may be determinant for improving the relationship between the capacity of the natural environment to make water available and the demand from the productive sector for water. Table 4 presents the results of the linear regression model considering data on investment in research and development by the private sector. The effect of investments in R&D by the private sector on the efficiency of water use is even more significant, with a 1% increase in R&D resources leading to a 3.53% increase in the IEUA index (significant at the 5% and 10% levels). Considering the angular coefficients derived from the multiple linear regression with private-sector R&D investments, equation 3 is rewritten accordingly.
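To make the sign of that interaction term concrete, the short sketch below evaluates the implied marginal elasticity of the efficiency index with respect to water stress at different production levels, using the Model B coefficients quoted above; the specification (log-log with a product of logs) is the reconstruction used here, and the production values are illustrative, not observed data.

```python
import numpy as np

# Model B coefficients as reported in the text (log-log specification)
BETA_STRESS = 3.38          # direct elasticity of IEUA with respect to water stress
BETA_INTERACTION = -0.09    # coefficient of ln(water stress) * ln(poultry production)

def stress_elasticity(poultry_production_tonnes: float) -> float:
    """Marginal elasticity of IEUA w.r.t. water stress:
    d ln(IEUA) / d ln(stress) = 3.38 - 0.09 * ln(production)."""
    return BETA_STRESS + BETA_INTERACTION * np.log(poultry_production_tonnes)

# Illustrative production levels (tonnes per year)
for production in (1e5, 1e6, 1e7):
    print(f"production={production:>10.0f}  elasticity={stress_elasticity(production):.2f}")
```

Under this reading, the larger the production term, the smaller the net positive effect of water stress on the efficiency index, which is the attenuation described above.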
Therefore, with the FAO data, one can diagnose the positive impact of investments in research and development (R&D) on the efficient use of water (US$/m3) in countries with a large production of poultry (Brazil, India, Mexico, Myanmar and Russia). Results of Model C Private investments had a larger effect (β = 3.53) compared to public investments (β = 0.08). This result may indicate a characteristic of the agriculture, cattle raising and forest recovery sector, with the participation of large private companies that make intensive use of technology. Thus, private investments can lead to the efficient use of water through the adoption of new technologies that promote its rational use (e.g. drinking fountains, water reuse for cleaning, remote sensing and information technology use), which allow improving the practices and processes used in water management (Bassi et al., 2013; Hoekstra, 2017; Zhang et al., 2018; Michetti et al., 2019). FINAL CONSIDERATIONS This study aimed to analyze how investments in innovation, research and development in the production of broiler birds affect the efficient use of water. To this end, data from Brazil, India, Mexico, Myanmar and Russia were analyzed for the period from 2005 to 2019. The data were collected from the FAOSTAT platform on investments in research and development (more precisely, those carried out by the government sector and those disbursed by the private sector), poultry production and water stress. The results reveal that there is an interdependence between water and poultry production, and that water stress (demand greater than supply) can affect poultry production. They also show that public and private investments in research and development (R&D) can be determinant for reducing water stress, improving efficiency in the use of water, and improving performance in poultry production. This study may be important for poultry farmers, who need to better understand and care for the natural resource "water", as it is finite, and when water stress occurs the production of poultry may be affected. Furthermore, it can serve as a warning to companies in the sector and to the government, which constantly need to innovate in poultry breeding equipment, processes and production methods, so that water stress is reduced while the production of poultry is guaranteed. This calls for a constant look at the balance between water and food production. This study presents limitations that provide opportunities for future research. FAOSTAT data contain rather general information and are based on estimates for each country. Furthermore, among the various elements considered in the Nexus-Few literature, this research is limited to those linked to water and food production, but issues related to the use of energy for food production and marketing can also affect resource efficiency and food productivity. In addition, data on the total value of investments can be better explored, broadening the analysis to also consider the quality and objectives of these investments.
For future studies, it is suggested to explore which innovations are carried out and which environmental practices are adopted by integrated industries, poultry farmers and abattoirs, since understanding the influence of all the links of the productive chain on the efficient use of water makes it possible to advance towards sustainability.
2023-07-12T06:21:36.479Z
2023-05-09T00:00:00.000
{ "year": 2023, "sha1": "ed69cbac57cf1c25821362308869ce3f160a6cdc", "oa_license": "CCBY", "oa_url": "https://rgsa.emnuvens.com.br/rgsa/article/download/3352/896", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "44dcacc40fdd9d99e6be890dcd87c180b0d58e59", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences", "Economics" ], "extfieldsofstudy": [] }
221838775
pes2o/s2orc
v3-fos-license
Reliability of Load-Velocity Profiling in Front Crawl Swimming The purposes of this study were to establish the test-retest reliability of calculating load-velocity profiles in front crawl swimming using five and three different external loads, and to determine whether outcome results were comparable between calculation methods for monitoring performance over time. Fifteen swimmers at either national or international competition level (seven females and eight males) participated in this study. The subjects performed 25 m of semi-tethered swimming with maximal effort with five progressive loads (females 1, 2, 3, 4, and 5 kg and males 1, 3, 5, 7, and 9 kg) as well as 50 m maximal front crawl on 2 different days. The mean velocity during three stroke cycles in mid-pool was calculated and plotted as a function of the external load. The relationship between load and velocity was expressed by a linear regression line established for each swimmer. The intercepts between the axes of the plot and the established regression line were defined as the theoretical maximum velocity (V0) and load (L0). In addition, L0 was also expressed as a percentage of body mass (rL0). The coefficient of determination (R2) and the slope (Slv) of the linear load-velocity relationship were calculated. The intra-class correlation coefficient (ICC) showed excellent agreement (ICC ≥0.902) for all variables. The coefficient of variation was ≤3.14% and typical error was rated as “good” for all variables. A difference was found between day 1 and 2 in V0 for three- and five-load calculations and for 50 m front crawl time (p < 0.05). No difference was found between the load-velocity profile outcome variables compared between the three- and five-trial protocols on either day 1 or day 2. The Bland-Altman plots showed a small bias across all resistance conditions for five loads, L0: 0.04 kg, rL0: 0.13%, V0: −0.03 m/s, and Slv: 0.003 −m/s/kg, and for three loads, L0: −0.24 kg, rL0: −0.27%, V0: −0.04 m/s, Slv: 0.002 −m/s/kg. In conclusion, the load-velocity profile for front crawl swimming can be calculated with high reliability from both five and three external loads, and comparable results in outcome variables were established. These methods can be used to monitor performance parameters over time, and to investigate and compare swimmers’ velocity and strength capabilities to allow for individualized training prescription to improve performance.
Keywords: accuracy, semi-tethered, strength, performance, testing, ICC, Bland-Altman analysis, multiple trial method

INTRODUCTION

Force-velocity profiles of locomotive patterns, such as sprint running, have been used to understand how these two performance indicators interact (Cross et al., 2018a; Jiménez-Reyes et al., 2019). Such an approach can also be useful in sprint swimming. However, even though the net force in the swimming direction (i.e., the sum of the propulsive and resistive forces) can be obtained by the inverse-dynamics approach (Sanders et al., 2015), measuring the propulsive and resistive forces in the water separately is complex due to the unsteady nature of the water flow around the swimmer's body (Samson et al., 2018). One way of overcoming the complexity is to apply a fully tethered swimming approach, in which a swimmer is attached to an inelastic cord, the other end of which is attached to a fixed force transducer (Amaro et al., 2014, 2017). With this method, a tested swimmer does not move forward; thereby, the measured force can be interpreted as the force the swimmer produced for a propulsive purpose. However, since the swimmer does not produce any forward velocity, fully tethered swimming is not applicable for establishing the force-velocity profile. An alternative method is a semi-tethered swimming approach, in which a swimmer is required to swim with a known external load applied by a pulley system (Dominguez-Castells et al., 2013; Cuenca-Fernández et al., 2020), a floating object (Kolmogorov and Duplishcheva, 1992; Morais et al., 2020), or a resistance device (Gonjo et al., 2020). This method allows researchers to conduct an assessment similar to the on-land force-velocity studies described above. Consequently, employing the multiple trial method using different external resistive loads instead of force (Cross et al., 2018a,b) with a series of semi-tethered swim tests is an alternative to force-velocity testing (Gonjo et al., 2020).
Outcome variables such as the maximum load at zero velocity (L0), the maximum velocity at zero load (V0), and the steepness of the linear slope of the load-velocity relationship (Slv) can be used to understand performance (Cross et al., 2017b), monitor it (Jiménez-Reyes et al., 2019), and prescribe training programs (Cross et al., 2018a). Despite the potential, semi-tethered swimming approaches have mostly been applied to assess the net swimming power, i.e., the product of the net force and the swimming velocity (Shionoya et al., 1999; Dominguez-Castells et al., 2013; Kimura et al., 2013), and there is only one study employing the method to investigate swimming load-velocity profiling (Gonjo et al., 2020). Furthermore, the reliability of the method has not been investigated. It has been reported that fully-tethered swimming is highly reliable for assessing the maximum and mean tether force (Amaro et al., 2014), which implies that swimmers can produce stable swimming force and motion in a tethered condition. However, given the differences between fully- and semi-tethered swimming approaches, such as single versus multiple trials and potential technical changes due to distinct relative flow velocity around the body, it is still unclear whether semi-tethered swimming testing is also a reliable method. Therefore, establishing the reliability of swimming load-velocity profiling with a semi-tethered swimming approach is essential to utilize the method as a test to understand individual and group level performance, monitor performance over time, and prescribe training programs, as done in sprint running (Cross et al., 2017b, 2018a; Jiménez-Reyes et al., 2019). Ensuring the reliability would also expand possibilities for researchers to conduct biomechanical or physiological investigations with the method. The number of trials to be used should also be considered carefully, as it will decrease the time of testing and minimize the effect of fatigue that could lead to an overestimation of V0 (Gonjo et al., 2020), especially when performing a heavy load trial (Driss and Vandewalle, 2013). However, it is unclear if a different number of trials affects the reliability of the measurement. The purpose of this study was therefore to explore test-retest reliability of load-velocity profile outcome variables in front crawl swimming and to compare calculations using five and three different loads.

Subjects

Sixteen swimmers at either national or international competition level [mean ± standard deviation (SD): 17.3 ± 1.5 y, 178.0 ± 8.8 cm, 68.9 ± 7.7 kg, 690 ± 77.7 FINA Points and 50 m front crawl personal best time 26.1 ± 1.9 s], including eight females (mean ± SD: 17.6 ± 1.2 y, 171.2 ± 6.1 cm, 64.8 ± 7.2 kg, 689.6 ± 91.4 FINA Points and 50 m front crawl personal best time 27.8 ± 0.9 s) and eight males (mean ± SD: 17.0 ± 1.8 y, 184.9 ± 4.6 cm, 73.0 ± 6.4 kg, 690.4 ± 67.8 FINA Points and 50 m front crawl personal best time 24.5 ± 1.0 s), volunteered for the study. The highest number of Fédération internationale de natation (FINA) points achieved by each swimmer within the last year, regardless of distance and stroke, was used. The swimmers were recruited from a local swimming performance high school and the junior and youth national team. Inclusion criteria were a minimum of 5 years of participation in competitive swimming, training at least seven times and 15 h per week, competing at the national level in any stroke or distance, and no current medical conditions.
No subjects met any of the exclusion criteria: heart disease (high blood pressure and high cholesterol), diabetes, vertigo, balance disorders, or being sick or injured during the week prior to testing, based on past and current medical conditions. However, one subject was excluded during the data analysis because no 1 kg trial was recorded on day 1 of the experiment, which resulted in a total analyzed subject number of fifteen (seven females and eight males). The study was approved by the local Ethical committee and the National Data Protection Agency for Research in accordance with the Declaration of Helsinki. Prior to participation, all subjects completed a questionnaire including details on training activity, injuries, sicknesses and family history. The subjects were given detailed verbal and written explanation of the purpose, procedures and risks associated with participation. No nutritional recommendations were imposed on the subjects outside of their daily routines, and they were instructed to abstain from hard physical training for the last 24 h prior to testing. Subjects or the legal guardian (for minors) provided written informed consent prior to participation.

Procedures

The experiment was performed in a 25 m indoor swimming pool with water and air temperatures of 27 and 28 °C, respectively. The subjects first performed their individual standardized warm-up procedure on land and in water as they do before a competition, for ∼45 min. After warm-up, the subjects performed a 50 m front crawl with maximal effort, in which the finishing time was recorded by an automatic timing system (Omega, Bienne, Switzerland). Following a 10-20 min rest, subjects were required to perform five 25 m front crawl sprints with maximal effort with different loads. The loads for the female swimmers were 1, 2, 3, 4, and 5 kg, while the loads for the male swimmers were 1, 3, 5, 7, and 9 kg (assessed in ascending order). These external loads were decided based on pilot testing to avoid large decelerations throughout the heaviest load (Gonjo et al., 2020) and to impose a reduction in velocity (%Vdec) compared with the lightest load large enough to be categorized as heavy resistance (>30%Vdec) (Petrakos et al., 2016). Each sprint was initiated from a push-off start followed by surface swimming at the 5 m mark. In order to attempt total recovery between each sprint, the recovery time was ∼6 min (Hancock et al., 2015). During the experiment the swimmers were not blinded from each other or excluded from interacting with each other, as this would not be feasible in future studies or assessments of swimmers. To provide data for the reliability calculations, the same procedures were undertaken 1-5 days later at the same time of the day. A portable robotic resistance device, the 1080 Sprint (1080 Motion AB, Lidingö, Sweden), featuring a servo motor (200 RPM OMRON G5 Series Motor, OMRON Corporation, Kyoto, Japan), was used to measure the swimming velocity and to apply an external load to the swimmers. The device was positioned on the starting block 1 m above the water surface to minimize the disruption of swimming technique (especially kicking) by the fiber cord connecting the device and swimmer (Amaro et al., 2017). Subjects were instructed to wear a S11875BLTa swim belt (NZ Manufacturing, OH, United States) around their pelvis to connect the cord. An illustration of the experimental set-up can be found in Gonjo et al. (2020).
The settings for the 1080 Sprint were: isotonic resistance mode, gear 1, eccentric and concentric velocity of 0.05 and 14 m/s, and the load parameters (kg) presented previously. Data were acquired with a sampling frequency of 333 Hz from the 5.0 to the 20.4 m mark of each 25 m trial. Data were imported into MATLAB R2019b (MathWorks, Natick, MA, United States) as text files for further processing. Three stroke cycles around the middle of the pool that generated the highest R2 value were selected to calculate the parameters of the load-velocity profile. The window was chosen to avoid the impulse from the wall push-off and the speed decrease at the end of each trial (Dominguez-Castells et al., 2013). Since the cord that was used for the velocity measurement was not aligned with the swimming direction, the following equation was used to obtain the horizontal velocity component (Gonjo et al., 2020):

Vadj = V × Lw / √(Lw² − 1.00²)

where V and Vadj are the velocity measured by the machine and the horizontal component of the velocity, respectively, 1.00 is the height (m) above the water surface where the cord is stretched out from the device, and Lw is the length of the cord (m) from the device to the swimmer at each sampling time. The mean Vadj from the three stroke cycles was plotted as a function of the corresponding external load (kg). A linear regression line was established for each subject (Dominguez-Castells and Arellano, 2012) based on the load-velocity plot for three and five trials. For all calculations using three loads, female swimmers had 1, 3, and 5 kg, while males had 1, 5, and 9 kg. V0 and L0 were predicted from the regression line by obtaining the intercepts of the line with the vertical and horizontal axes, respectively. The coefficient of determination (R2) and Slv (the steepness of the linear slope of the load-velocity relationship, computed as Slv = −V0/L0) were calculated. L0 was also expressed as a percentage of body mass (rL0).
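To make the profile calculation above concrete, the sketch below derives V0, L0, rL0, and Slv from a set of load-velocity pairs with an ordinary least-squares fit, together with the cord-angle correction. It is an illustrative C++ re-implementation rather than the MATLAB R2019b routine actually used in the study; the example loads, velocities, and body mass are hypothetical.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Horizontal velocity component from the cord speed measured by the device
// (cord anchored 1.00 m above the water surface; Lw = current cord length in m).
double horizontalVelocity(double v_measured, double Lw) {
    return v_measured * Lw / std::sqrt(Lw * Lw - 1.00 * 1.00);
}

struct LoadVelocityProfile {
    double V0;   // theoretical maximum velocity at zero load (m/s)
    double L0;   // theoretical maximum load at zero velocity (kg)
    double rL0;  // L0 relative to body mass (%)
    double Slv;  // slope of the load-velocity line (m/s per kg), equals -V0/L0
    double R2;   // coefficient of determination of the fit
};

// Ordinary least-squares fit of velocity (y) on load (x).
LoadVelocityProfile fitProfile(const std::vector<double>& load,
                               const std::vector<double>& velocity,
                               double bodyMassKg) {
    const std::size_t n = load.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += load[i]; sy += velocity[i];
        sxx += load[i] * load[i];
        sxy += load[i] * velocity[i];
        syy += velocity[i] * velocity[i];
    }
    const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double intercept = (sy - slope * sx) / n;
    double ssRes = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const double e = velocity[i] - (intercept + slope * load[i]);
        ssRes += e * e;
    }
    LoadVelocityProfile p;
    p.V0 = intercept;            // velocity-axis intercept
    p.L0 = -intercept / slope;   // load-axis intercept
    p.rL0 = 100.0 * p.L0 / bodyMassKg;
    p.Slv = slope;
    p.R2 = 1.0 - ssRes / (syy - sy * sy / n);
    return p;
}

int main() {
    // Example cord correction: 1.50 m/s measured at Lw = 6 m.
    std::printf("Vadj example: %.3f m/s\n", horizontalVelocity(1.50, 6.0));

    // Hypothetical three-load protocol for a male swimmer (1, 5 and 9 kg).
    std::vector<double> load = {1.0, 5.0, 9.0};
    std::vector<double> vel  = {1.70, 1.35, 0.95};  // mean Vadj per trial (m/s)
    LoadVelocityProfile p = fitProfile(load, vel, 73.0);
    std::printf("V0=%.2f m/s  L0=%.1f kg  rL0=%.1f%%  Slv=%.3f  R2=%.3f\n",
                p.V0, p.L0, p.rL0, p.Slv, p.R2);
    return 0;
}
```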
Statistical Analyses

The Statistical Package for Social Sciences (SPSS) version 24.0 (IBM Corp, Armonk, NY, United States) and Excel for Microsoft 365 (Microsoft Corp, Redmond, WA, United States) were used for all statistical computations. The Shapiro-Wilk test was used to check the normal distribution of the data, which was met for all variables. Descriptive statistics of the load-velocity profile parameters (L0, V0, rL0, R2, and Slv) are reported as mean and standard deviation. Test-retest reliability of each parameter was assessed using the intra-class correlation coefficient (ICC) with a two-way random single-measure model (Bartko, 1966), absolute error (AE), typical error (TE), coefficient of variation (CV), standard error of measurement (SEM), and minimal detectable change based on a 95% confidence interval (MDC). The ICC was classified as <0.5: poor, 0.5-0.75: moderate, 0.75-0.9: good, and >0.9: excellent agreement (Koo and Li, 2016). A SEM smaller than, similar to, or larger than the MDC was rated as "good", "ok", or "marginal", respectively (Buchheit et al., 2011). It is important to assess the change in the mean in addition to the within-participant variation and retest correlation (Hopkins, 2000). Therefore, a paired sample t-test was used to compare the outcome parameters between the two sessions as well as between the three- and five-load calculations. Because females and males were assessed with different external loads, a Mann-Whitney U-test was used to compare their response in %Vdec between the lightest and heaviest load of the protocols. The level of statistical significance was set at p < 0.05. Furthermore, Bland-Altman analysis was used to display the within-subject variation as well as the systematic change between the sessions: bias (mean difference), standard deviation (SD) and upper and lower limits of agreement (defined as MD ± 1.96 × SD) were calculated (Bland and Altman, 1986).

RESULTS

Test-retest reliability for the load-velocity profiling parameters from the five- and three-load conditions is displayed in Tables 1 and 2, respectively. The ICC showed excellent agreement for all variables (L0, rL0, V0, and Slv) with both five- and three-load calculations. CV was 3.14% for Slv with five loads and <3% for all remaining variables with five- and three-load calculations. SEM was rated as "good" in all variables when compared to MDC. A significant difference was found between day 1 and 2 in V0 for both three- and five-load calculations and for 50 m front crawl time (p < 0.05). No difference was found between the load-velocity profile outcome variables compared between the three- and five-load calculations on either day 1 or 2 (p > 0.05). There was no difference between the protocols for females and males in terms of %Vdec between the lightest and heaviest load on day 1 (females 40.6 ± 7.8% and males 55.5 ± 16.9%, p = 0.09) or day 2 (females 41.7 ± 8.9% and males 53.0 ± 14.3%, p = 0.12).

Abbreviations in Tables 1 and 2: L0, estimated maximum load from the load-velocity slope; rL0, estimated maximum load as a percentage of body mass; V0, estimated maximum velocity from the load-velocity slope; Slv, steepness of the load-velocity regression line; R2, coefficient of determination of the load-velocity regression line; FC, front crawl; SD, standard deviation; AE, absolute error; TE, typical error; CV, coefficient of variation; ICC, intra-class correlation coefficient; CIupper95%/CIlower95%, upper/lower bound of the 95% confidence interval of the mean; SEM, standard error of measurement; MDC, minimal detectable change.

The distribution of the load-velocity profile variables is presented in Figure 1, and the average curve and range for the load-velocity profile of the 15 subjects are presented in Figure 2. Mean L0 was 16.8 and 17.1 kg (five loads) and 16.9 and 16.7 kg (three loads) for males, and 11.1 and 10.8 kg (five loads) and 11.2 and 10.9 kg (three loads) for females on day 1 and 2, respectively. Mean V0 was 1.8 and 1.8 m/s (five loads) and 1.8 and 1.8 m/s (three loads) for males, and 1.6 and 1.6 m/s (five loads) and 1.6 and 1.5 m/s (three loads) for females on day 1 and 2, respectively.

DISCUSSION

The aim of this study was to determine the test-retest reliability of load-velocity profile outcome measurements derived from front crawl semi-tethered swimming with five and three different external loads. Overall, the method has excellent reliability for both the three- and five-trial approach, and no difference was observed for any outcome measurement between the three- and five-load calculations. The reliability of the load-velocity variables obtained in the present study is similar to or better than that observed in multiple-trial resisted sprints (Cross et al., 2017a; Cahill et al., 2019, 2020) and cycling (McCartney et al., 1983; Dore et al., 2003).
The observed ICC values were excellent for both the three-trial (range: 0.902-0.981) and five-trial (range: 0.923-0.980) calculations, which are greater than those reported for multiple-trial resisted sprinting (Cross et al., 2017a; Cahill et al., 2019, 2020). The observed CV ranges for three (1.4-2.6%) and five (1.4-3.1%) loads are comparable to what has been reported in resisted sprinting (Cross et al., 2017a; Cahill et al., 2019, 2020) and in cycling (Dore et al., 2003). Considering the marginal influence on the reliability outcome variables of calculating with five versus three trials, the three-trial method is sufficient to assess load-velocity profiles of semi-tethered freestyle swimming. The heaviest load (5 kg for women and 9 kg for men) gave a reduction in velocity compared with the lightest load (1 kg) of around 49%Vdec, which is categorized as heavy resistance (>30%Vdec) (Petrakos et al., 2016). In addition, there was no difference in %Vdec between females and males on either day 1 (p = 0.09) or day 2 (p = 0.12).

FIGURE 1 | Bland-Altman plots of the difference between test and retest (y-axis) vs. mean of measurements (x-axis) of load-velocity profile parameters L0, rL0, V0, and Slv with zero difference (dashed line), bias (thick line) and with lower and upper limits of agreement (dotted lines). The sample size for each Bland-Altman plot is n = 15 with seven female and eight male subjects marked with red and blue dots, respectively. L0, estimated maximum load from the load-velocity slope (kg); rL0, estimated maximum load as a percentage of body mass (%/kg); V0, estimated maximum velocity from the load-velocity slope (m/s); Slv, slope of load-velocity regression line (−m/s/kg).

One could argue that using heavier loads might be better for some subjects to ensure assigning trials with a load close to L0. However, the use of up to 5 or 9 kg loads in the present study is justified by the average R2 values of the load-velocity profiles being 0.99 for all conditions. They are comparable to findings from resisted sled sprints (0.99) with <60%Vdec (Cross et al., 2017a), and other studies using the multiple-trial method in semi-tethered swimming (0.97-0.99) (Dominguez-Castells and Arellano, 2012; Gonjo et al., 2020). This shows a clear and robust linear relationship between the velocity and load parameters in semi-tethered swimming, indicating that more than three loads, or extremely heavy loads, are unnecessary. A reduction in the number of trials is important as it could minimize the effect of fatigue when predicting L0 and V0 (Driss and Vandewalle, 2013). For example, failing to perform with maximal swimming velocity at a heavy load due to fatigue would lead to a steeper Slv than it should be, and therefore cause an overestimation of V0 (Gonjo et al., 2020). Given the excellent ICC for both five- and three-load calculations (0.948 and 0.962, respectively) and CV (3.1 and 2.6%, respectively), Slv can be used as an index of the individual balance between the velocity and load (strength) capabilities of each swimmer. A steep Slv, expressed by a large negative value, would indicate that the swimmer is "velocity oriented", and vice versa (Morin and Samozino, 2016). An example from the present study is two male swimmers, both with a V0 of 1.83 m/s. Their L0 was 19.80 and 12.49 kg, generating an Slv of −0.09 and −0.15, respectively. This indicates that the first swimmer would be load dominant while the second is velocity dominant.
While there is currently no optimal load-velocity profile established for front crawl swimming as there is for ballistic movements (Samozino et al., 2012), this value could be utilized to identify swimmers who are velocity or load dominated and subsequently used to prescribe training programs to target an imbalance and thereby enhance swimming performance. Despite the good and excellent reliability suggested by the analyzed variables, V0 was significantly lower (around 2%) on the second day compared with the first day for both five- and three-load calculations. A systematic change in a test-retest investigation design can be due either to the participants (e.g., physical and psychological condition) or to the setting of the equipment used. Several results of the current study imply that the bias was probably due to the participants rather than the testing equipment or setting: firstly, 50 m front crawl time was also significantly slower on the second day than on the first day, which means that the swimmers' condition was slightly worse on the second day; secondly, the systematic change was not observed in L0, rL0, and Slv (p > 0.05), suggesting that the error was not due to the equipment setting or preparation, since the three outcome measures are not entirely independent but related to each other (i.e., if there was a systematic error due to the equipment, there should have been at least two variables with a systematic change). Therefore, it should be noted that the reliability in V0 might have been underestimated because of the systematic bias. Nevertheless, even with the systematic bias, the test-retest error in V0 in the current study was similar to or better than in on-land load-velocity profiling (Cahill et al., 2019), which reinforces the reliability of swimming load-velocity profiling. Investigating the load-velocity profile in swimmers with different distance specialities could be practically useful. Even though sprint and distance swimmers exhibit a similar motion at both sprint and distance paces (Rodríguez and Mader, 2011; McCabe et al., 2011; McCabe and Sanders, 2012), long-distance swimmers are characterized by a greater percentage of Type I muscle fibers than sprint swimmers, suggesting lower muscular power capabilities than sprint athletes (Gerard et al., 1986). Therefore, despite the similarity in the motion, it is probable that swimmers would exhibit a different load-velocity profile depending on their speciality due to the neuromuscular difference. It would also be of interest to compare the load-velocity profile between competitive swimmers and triathletes or open water swimmers. Distance competitive swimmers, triathletes, and open water swimmers could all be categorized as endurance athletes; therefore, neuromuscular differences between those athletes might not be as evident as the difference between sprint and distance swimmers. However, in contrast to the similar kinematics among competitive swimmers, competitive swimmers and triathletes show distinct kinematic characteristics (Millet et al., 2002), which might affect the load-velocity profile. In all variables obtained in this study, V0, L0, rL0, and Slv showed smaller SEM than MDC. This demonstrates lower error in the measurement compared to detecting an actual change in performance or in the parameter of interest. However, this study had a heterogeneous population including both women and men at different performance levels, and therefore MDC should be interpreted with caution.
For example, the interpretation of an MDC of 1.46 kg for L0 with three loads would be quite different for subjects scoring an L0 of around 9 kg or over 23 kg. The MDC value in this study should foremost be understood as a variable used to assess the reliability, and further studies are necessary to establish MDC values to predict changes in performance for subjects of different genders, ages and performance levels. Therefore, another approach for detecting a change in performance for subjects with low L0 can be calculated using 1.5-2.0 times the TE (Hopkins, 2000). This approach would then yield a change in L0 of between 0.47 and 0.62 kg (using three loads) to identify that a change in performance has occurred.

Limitations

One limitation of the present study was the mixed-gender sample and the limited number of analyzed subjects. As clearly seen in Figure 2, male and female swimmers showed different V0 and L0, implying that the obtained results based on absolute numerical outcomes (such as AE, TE, SEM, and MDC) might have been biased by the gender difference. However, the current study also investigated the relative test-retest error as the CV, and the test-retest agreement was also checked using the ICC. Despite the difference in the absolute numerical values between genders, these variables should support the reliability of the testing method if both males and females responded to the testing in a comparable manner. Given that no difference was observed between females and males in terms of %Vdec from the lightest to the heaviest load trials, it is probable that swimmers responded to the prescribed loads similarly regardless of gender. Therefore, despite the possibility of a gender effect on the results, it can still be concluded that the current study established the reliability of the swimming load-velocity profiling method. Devices that put a constant load on swimmers might not be available for all practitioners, and there are some time requirements involved in properly securing and setting up the device on the starting block. Simpler alternatives could therefore be explored (e.g., parachutes together with a stopwatch). A critical matter for alternative methods would be to standardize the external loads and obtain accurate time measurements.

PRACTICAL APPLICATIONS

Load-velocity profiling with three different loads can be a practical and time-efficient performance test that allows coaches and practitioners to investigate and compare velocity and strength capabilities of swimmers. The outcome parameters of a load-velocity profile can allow the prescription of individualized training for improving performance. This reliable performance test will also help to establish requirements for performance at different levels. Future research should therefore examine the relationship between the load-velocity profile and swimming performance for different strokes, distances and genders. It would also be of interest to compare the load-velocity profile between competitive swimmers and triathletes or open water swimmers, as well as for other swimming-based sports. Attempts should also be made at establishing optimal Slv values to determine the preferred balance between velocity and strength capacities as well as to guide training interventions.

CONCLUSION

The load-velocity profile for front crawl swimming can be calculated with high reliability using both five and three different loads. This means that load-velocity profiling can be used to assess swimming-specific strength and velocity capabilities related to performance over time.
It enables practitioners to investigate and compare swimmers' velocity and strength capabilities, allowing individualized training prescriptions.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.

ETHICS STATEMENT

Studies involving human subjects were reviewed and approved by the local ethical committee at the Norwegian School of Sport Sciences, reference number 47-060218-200318. The subjects provided their written informed consent to participate in this study.
Necrotizing Encephalitis Caused by Disseminated Aspergillus Infection after Orthotopic Liver Transplantation

Liver transplantation is the only available treatment for some patients with end-stage liver disease. Despite the reduction in mortality rates due to advances related to surgical techniques, intensive medical management and immunosuppressive therapy, invasive fungal infections remain a serious complication in orthotopic liver transplantation. We report the case of an 18-year-old male diagnosed with autoimmune cirrhosis in 2009 who was assessed and listed for liver transplantation for massive variceal hemorrhage. One year after listing, a successful orthotopic liver transplantation was performed. Uneventful early recovery was achieved; however, he developed pulmonary and neurological Aspergillus infection 23 and 40 days after surgery, respectively. Antifungal therapy with voriconazole and amphotericin was started early, with no major response. Neuroimaging revealed multiple right frontal and right parietal lesions with perilesional edema; surgical management of the brain abscesses was performed. A biopsy with periodic acid-Schiff and Gomori stains revealed areas with mycotic microorganisms morphologically consistent with Aspergillus, later confirmed by culture. The patient developed necrotizing encephalitis secondary to aspergillosis and died. Necrotizing encephalitis as a clinical presentation of Aspergillus infection in an orthotopic liver transplant is not common, and even with adequate management, early diagnosis and prompt antifungal treatment, mortality rates remain high.

Introduction

Orthotopic liver transplantation is the only definitive therapeutic option for patients with end-stage liver disease. However, invasive fungal infections are an important cause of post-transplantation mortality in solid organ recipients, and their incidence, particularly for candidiasis and aspergillosis, varies from 1.4 to 42% of cases [1]. In solid organ transplant recipients, disseminated fungal infections by Candida spp. account for 59.0%, by Aspergillus spp. for 24.8%, by Cryptococcus spp. for 7.0% and by other molds for 5.8% [2]. Despite prompt diagnosis and early management, mortality due to fungal infections depends on the type of transplant and can range between 3 and 100% [1,3]. Fungal infections frequently occur in the first month post transplantation [4], and their incidence differs in frequency and specific etiology according to the type of transplanted organ, procedure and transplantation center [5]. The clinical presentation of fungal infections can range from asymptomatic to disseminated, and the clinical presentation of central nervous system infection may be subtle and difficult to diagnose, with life-threatening infections [6].

Case Report

Previous chest X-rays were normal. However, during surgery the patient experienced desaturation and we immediately performed an X-ray, which detected right superior and inferior lobar atelectasis; the symptoms improved with positive pressure therapy. On-site bronchoscopy found purulent tracheal secretions. On the fifth postoperative day he developed clinical sepsis, requiring management in the intensive care unit. Because of increased cholestasis, endoscopic retrograde cholangiopancreatography and biliary stenting were performed; additionally, we performed a cardiothoracic focus search; his work-up, apart from transesophageal echocardiography, was normal.
A month after liver transplantation, because of persistent tracheal secretions, sputum cultures were taken, finding Aspergillus spp. A thoracic CT scan showed multiple nodes in both lungs, with imaging patterns of tree-in-bud opacities, suggesting pulmonary Aspergillus. Despite fungal treatment including posaconazole, caspofungin, amphotericin B and aciclovir, the patient developed neurological symptoms including left hemiplegia, severe headache and mental status changes. Magnetic resonance imaging of the brain showed multiple right frontal and right parietal lesions with perilesional edema ( fig. 1). The patient underwent biopsy of brain lesions under stereotactic guidance. Despite treatment, he developed multiple organ failure and he died 48 days after liver transplantation. The explanted cirrhotic liver revealed abundant copper deposits ( fig. 2a, b). Complementary studies including aldehyde fuchsin and periodic acid-Schiff without diastases revealed areas of hepatic parenchyma with significant deposits of copper and its binding protein, mostly at periseptal location, in some foci a very small amount and in others complete absence, minimum focal intracanalicular cholestasis and no Mallory bodies or major ballooning. These findings together with the clinical suspicion of a metabolic disease were consistent with Wilson's disease; what is striking, however, is the absence of other histologic findings characteristic of this condition in the active state, including Mallory bodies, ballooning and prominent nuclear pseudoinclusions, among others. Trucut liver biopsy obtained 4 days after transplantation showed a hepatic parenchyma with hepatocanalicular cholestasis from zone 3 to zone 2 and focally in zone 1. Apoptotic hepatocytes, Kupffer cells with abundant pigment and sinusoidal congestion were also observed. Trichrome staining showed no fibrosis, Kupffer cells with rich resistant periodic acid-Schiff-diastase (PAS-D)-positive material and some intracytoplasmic iron deposits (+/++++). Histochemical study for cytomegalovirus was negative. The described findings corresponded to a mild acute cellular rejection, Banff score 4/9 (portal swelling 2, ductulitis 1, endotheliitis 1) associated with hepatocanalicular cholestasis. From the cerebral collection (frontal right abscesses), H&E and histochemical staining with PAS-D, Ziehl-Neelsen, Gram and Gomori were performed. Fragments of brain parenchyma showed edema, extensive necrosis, neutrophil infiltration in abundant quantity, apoptotic cells and presence of abundant septate hyphae angled at 45°, morphologically compatible with Aspergillus spp. (fig. 2c, d), corroborated by the PAS-D and Gomori stains. The Ziehl-Neelsen and Gram stains were negative for acid-alcohol-resistant bacilli and bacteria, respectively. A diagnosis of necrotizing encephalitis with organisms morphologically compatible with Aspergillus structures was made. Discussion Fungal infections are among the most common complications in transplantation. Candida and Aspergillus account for 70-90% of all cases [5]. In liver transplantation both organisms explain the vast majority of invasive fungal infections and are considered as a significant cause of morbidity and mortality. When the central nervous system is compromised by an abscess, Aspergillus is the most common etiology and has a tendency to spread to any other organ through hematogenous dissemination [7]; however, these infections usually grow very slowly, making the clinical information decisive for the presumptive diagnosis. 
For both bacterial and fungal agents, fever, headache, vomiting and altered sensorium are the presenting symptoms of intracranial abscess, with mortality rates varying between 10 and 15% [8]. Our patient presented with altered state of consciousness, measles, tachycardia and fever 15 days after transplantation. Unfortunately, the outcome was consistent with some descriptions that established the patient's neurological status at presentation as a significant predictor of outcome, with an increased mortality rate in those who present with altered mental status and rapid neurological deterioration [9,10]. Despite correct surgical techniques, immunosuppressive therapy and advanced medical treatment, fungal infections remain a significant cause of post-transplantation morbidity and mortality [11]. In case of clinical suspicion of fungal infection, cultures will confirm the infectious agent; however, the sensitivity of fungal cultures is relatively low, and thus some authors have suggested that measuring Aspergillus antigens such as galactomannan in clinical samples such as plasma, serum, bronchoalveolar lavage fluid or cerebrospinal fluid would be useful for diagnosis [12]. In our case, suspicion of infection justified the bronchoalveolar lavage that indeed revealed the primary Aspergillus infection. The current recommendation in cerebral Aspergillus infection includes surgical excision and debridement of the abscess, antifungal therapy and reduction of the immunosuppressive regimen after liver transplantation [12]. This last issue was difficult to achieve in our case due to the progressive cholestasis, altered liver function tests and the mild acute cellular rejection proven with the transplant biopsy. Similarly, our patient was not at high risk for fungal infection; antifungal prophylaxis does not reduce overall mortality, and its beneficial effect has been predominantly associated with the reduction of C. albicans infection and mortality [13]. It is therefore essential to have in place an effective approach focused on prevention and based on predicted infection risk, including local antimicrobial resistance patterns and surveillance of specific risk factors [14].

Conclusion

Regardless of adequate surgical techniques and immunosuppressive therapies, this case presents the challenges in diagnosing and treating fungal infections in solid organ transplantation. Even though brain abscesses caused by Aspergillus are not common in orthotopic liver transplantation, we emphasize that immunocompromised patients who develop mental status changes, seizures or focal neurological findings have predominantly worse outcomes among liver and heart transplant recipients, despite prompt medical intervention [2,10]. Certainly, prospective studies are needed to accurately assess the risk of fungal infections and antifungal prophylaxis prior to transplantation to help informed decision-making focused on effective prevention and treatment.
Performance Evaluation Method for Intelligent Computing Components for Space Applications

The computational performance requirements of space payloads are constantly increasing, and the redevelopment of space-grade processors requires a significant amount of time and is costly. This study investigates performance evaluation benchmarks for processors designed for various application scenarios. It also constructs benchmark modules and typical space application benchmarks specifically tailored for the space domain. Furthermore, the study systematically evaluates and analyzes the performance of the NVIDIA Jetson AGX Xavier platform and the Loongson platform to identify processors that are suitable for space missions. The experimental results of the evaluation demonstrate that Jetson AGX Xavier performs exceptionally well and consumes less power during dense computations. The Loongson platform can achieve 80% of Xavier's performance in certain parallel-optimized computations, and can surpass Xavier's performance at the expense of higher power consumption.

Introduction

In recent years, with the increasing demand for space exploration, future space science missions will use larger sensors, higher sampling frequencies, and more accurate instruments compared to those in existing space science missions. Tasks such as autonomous robotic exploration and active debris removal require highly autonomous operations, which in turn require more powerful on-board processing capabilities [1][2][3][4]. Traditional processors used for space missions cannot meet these high-performance requirements, and the process of redeveloping, producing, and certifying space-grade processors is lengthy, complex, and costly. Existing commercial off-the-shelf (COTS) devices have powerful parallel computing capabilities, and the application of artificial intelligence algorithms in space missions is becoming increasingly widespread. Commercial devices have high floating-point computational performance and excellent neural network accelerators, which can improve the efficiency and accuracy of space mission processing. Therefore, the aerospace field is currently searching for and evaluating processors from existing commercial devices that can meet high-performance requirements, to alleviate these challenges. Some government aerospace agencies [5][6][7][8] and private enterprises [9,10] have conducted extensive research on performance evaluation of COTS devices. Some of these works focus on radiation studies of certain products, while others focus on weighing the performance of processors of various architectures and determining which ones are suitable for space missions.
Related Work

Studies in references [11,12] evaluate the performance of COTS devices such as CPUs, DSPs, GPUs, and FPGAs based on data provided by suppliers. Using a set of established device metrics, different processor architectures are quantitatively analyzed and compared in terms of performance, power, efficiency, memory bandwidth, input-output bandwidth, and other aspects. However, vendor-supplied data and established metrics are often limited and lack consideration of deeper application directions such as autonomous computing tasks in space. The Institute of Computing Technology of the Chinese Academy of Sciences, led by Luo Chunjie, has developed the mobile and embedded device benchmark suite AIoT Bench [13], which aims to evaluate the artificial intelligence capabilities of mobile and embedded devices on image classification, speech recognition, converter translation, and micro-processing loads. The Embedded Microprocessor Benchmark Consortium (EEMBC) provides many benchmarks, such as ADASMark and MLMark. The ADASMark benchmark is a performance measurement and optimization tool for building next-generation advanced driver assistance systems (ADAS) [14]. The image processing in this benchmark is close to what is carried out on spacecraft, but it does not fully map to the specific image processing tasks used in space applications. MLMark is a benchmark for edge machine learning (ML) tasks [15]. Its ML workloads are trained on the space-independent ILSVRC2012 and COCO2014 datasets, and they do not consider the typical sensor sizes used in space missions. In the embedded CPU industry, the following three performance testing standards are widely recognized: CoreMark, Dhrystone, and CoreMark-Pro. Currently, CoreMark-Pro has been used in the space domain, and examples of single-core and multi-core CoreMark-Pro results for LEON processors (maintained by Gaisler Research) are provided in [16,17]. Spacebench [18] encompasses a range of typical computations involving integers and floating-point numbers that are commonly employed in the realm of space computing. The open-source project GPU4S (GPU for Space), supported by the European Space Agency, provides a benchmark test that covers representative space algorithms in different space domains, used to evaluate GPU programming models for space payload processing [19]. OBPMARKs (On-Board Processing Benchmarks) was developed based on GPU4S and defines a set of benchmark testing methods covering common typical applications in space missions; performance evaluations are carried out for high-performance processors such as the ARM Mali G-72 (Cambridge, UK), NVIDIA Xavier NX (Santa Clara, CA, USA), and NVIDIA TX2 [20]. The computational performance requirements for space missions are constantly increasing, so a reasonable method for effectively characterizing the performance of new-generation COTS processors is essential. Based on the aforementioned investigation and analysis, the objective of this paper is to construct benchmarks for space applications and perform performance evaluations on currently available high-performance processors. The structure of the remaining part of this paper is as follows: Section 3 introduces the basic principles of performance evaluation and determines the benchmark testing methods and performance metrics; Section 4 presents the evaluation targets and the experimental results of the performance evaluation; finally, Section 5 provides a summary of the entire paper.
Overall Approach

The purpose of this evaluation is to find a high-performance processor suitable for space applications. Therefore, the constructed benchmark tests need to be relevant to the space domain, using representative input data while covering as many space domains as possible. This ensures the correctness of the benchmark tests and that the implementation follows relevant standards, with corresponding reference outputs to check the correctness of the target platform's output. This performance evaluation includes the CPU and GPU of the device, and it is conducted from multiple perspectives, as shown in Figure 1.
• The algorithms and input configurations in the benchmark need to cover existing spaceborne processing applications, as well as future scenarios and their performance requirements in different space domains;
• The benchmark tests should not be limited to a given processor, such as supporting a single programming model or architecture, but should be able to run on multiple platforms;
• The platform-independent parts of the benchmark should be the same for all platforms to enable fair comparisons;
• It is necessary to use common performance metrics for comparison. The performance metrics used in this evaluation include commonly used metrics such as total execution time, throughput, and FLOPS.

Figure 1. Basic principles of evaluation.

Common Benchmark-CoreMark-Pro Benchmark

First, this paper selects the CoreMark-Pro benchmark for the pre-evaluation of processor performance. CoreMark-Pro includes five popular integer workloads and four popular floating-point workloads. The integer workloads include JPEG compression, ZIP compression, an XML parser, the SHA-256 secure hash algorithm, and a memory-intensive version of the original CoreMark. The floating-point workloads include a fast Fourier transform (FFT), linear algebra routines derived from LINPACK, a significantly improved version of the Livermore loop benchmark, and a neural network algorithm for pattern evaluation.

High-Performance Computing Benchmark

The complexity of current space missions is increasing, and high-performance computing is more and more widely applied to space missions, such as remote sensing data processing [21] and satellite visual navigation [22]. This work builds a diverse set of HPC-based benchmark tests; each benchmark is implemented in a parameterized way, and the parameters are specified during the build configuration with CMake's -D flag. The first parameter selects the type of benchmark to be compiled, and supports standard programming languages such as C, CUDA, and OpenMP; the second defines the data type, which, for all the benchmarks tested in this section, supports single- and double-precision floating-point numbers; and the last parameter defines the block size, which applies only to the GPU version of the code (this parameter is not required for OpenMP).

Fast Fourier Transformation Benchmark (FFT)

The fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) more efficiently. It is primarily used in communication and 2D image analysis. For example, in the Automatic Dependent Surveillance-Broadcast (ADS-B) system, the FFT is applied in a sliding window of 128 points to implement automatic monitoring technology for aircraft position determination through satellite navigation [23].
In this paper, the FFT benchmark first calculates a batch of 1D FFTs of predefined sizes. The size parameter can be used to specify the size of these FFTs. A batch of FFTs is used to increase the overall execution time of the benchmark test, to reduce measurement errors, and to better utilize the kernel pipeline. The benchmark kernel is based on the cuFFT library provided by NVIDIA, with slight modifications to allow execution on the corresponding experimental platform, using complex single-precision floating-point values for computation. The configuration of the cuFFT library is carried out using an FFT plan, where a plan defines a single transformation operation to be performed. With the plan, memory and computational resources can be pre-configured based on the size of the input data, allowing the processor to achieve optimal performance during the actual computation.

Finite Impulse Response Benchmark (FIR)

In the field of space signal processing, there is a growing demand for real-time and fast signal processing. Finite impulse response (FIR) filters are a type of filter structure that can be used to implement almost any type of frequency response digitally. FIR filters achieve filtering by using a series of delays, multipliers, and adders to create the filter's output. The relationship between the output of an FIR filter with length N and the input time sequence x[n] is given by a finite convolution form, as shown below:

y[n] = b_0·x[n] + b_1·x[n−1] + ... + b_N·x[n−N] = Σ_{i=0}^{N} b_i·x[n−i]    (1)

where x[n] is the input signal, y[n] is the output signal, and b_i represents the filter coefficients that constitute the impulse response, also known as tap weights. N is the filter order; an Nth-order filter has N + 1 terms on the right side. x[n − i] is commonly referred to as a tap and, based on the structure of tap delay lines, it provides the delayed input to the multiplication operation in many implementation methods or block diagrams.
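As an illustration of Equation (1), the short C++ sketch below applies an FIR filter to a signal on the CPU. It is a plain reference implementation for clarity, not the optimized C/CUDA/OpenMP kernel used in the benchmark itself, and the example coefficients and input are arbitrary.

```cpp
#include <cstdio>
#include <vector>

// Direct-form FIR filter: y[n] = sum_{i=0}^{N} b[i] * x[n - i]
// (samples before the start of the signal are treated as zero).
std::vector<double> firFilter(const std::vector<double>& x,
                              const std::vector<double>& b) {
    std::vector<double> y(x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n) {
        for (std::size_t i = 0; i < b.size() && i <= n; ++i) {
            y[n] += b[i] * x[n - i];   // each b[i]*x[n-i] term is one "tap"
        }
    }
    return y;
}

int main() {
    // 4th-order (5-tap) moving-average filter: arbitrary example coefficients.
    std::vector<double> b(5, 1.0 / 5.0);
    std::vector<double> x = {1, 2, 3, 4, 5, 6, 7, 8};
    std::vector<double> y = firFilter(x, b);
    for (double v : y) std::printf("%.2f ", v);
    std::printf("\n");
    return 0;
}
```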
Local Response Normalization Benchmark

Local response normalization (LRN) is a technique mainly used during deep learning training to improve accuracy, presented in 2012 by AlexNet [24]; its purpose is to perform local normalization of convolution values, and the specific calculation method can be found in Equation (3). LRN mimics the activity of biological neurons by creating a competitive mechanism, introducing competition between feature maps generated by adjacent convolution kernels. This makes significant features more prominent in the feature map in which they occur while suppressing them in adjacent feature maps, reducing the correlation between feature maps generated by different convolution kernels and enhancing the model's generalization ability. The operation of LRN normalizes the pixel value at a given position across neighboring channels. Using a^i_{x,y} to denote the activity of a neuron computed by applying kernel i at position (x, y) and then applying the ReLU (rectified linear unit) nonlinearity, the response-normalized activity b^i_{x,y} is given by

b^i_{x,y} = a^i_{x,y} / ( k + α · Σ_{j = max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β    (3)

where N is the total number of kernels. The constants k, n, α, and β are hyper-parameters, the values of which are determined using a validation set; we used k = 2, n = 5, α = 10^−4, and β = 0.75.

Matrix Multiplication Benchmark (GEMM)

Matrix multiplication is applied in many fields. In space exploration and astronomy, matrix multiplication is used to describe the motion of celestial bodies and analyze their orbits. Through matrix multiplication, the position, velocity, and orbital parameters of celestial bodies can be calculated for planetary orbit analysis, spacecraft navigation, and celestial dynamics research. The study in [25] documents the linear algebra benchmark performance of space processors using a 1024 × 1024 matrix. The GEMM benchmark kernel is designed for matrix multiplication based on the Basic Linear Algebra Subroutines (BLAS) library and has been simplified. It is an optimized implementation of GPU matrix multiplication (GEMM) that is compatible with a wider range of devices. The GEMM benchmark tests the multiple applications of matrix multiplication; it calculates C = α·A·B + β·C, where A, B, C ∈ R^(n×n) and α, β ∈ R. The floating-point operation count of this benchmark is calculated as 2·n³. The GPU implements GEMM by dividing the output matrix into tiles and then assigning small tiles to thread blocks. When calling cuBLAS with specific GEMM dimensions, the internal heuristic methods of cuBLAS can choose the tiling option expected to perform the best. The general tiling outer product method for matrix multiplication is shown in Figure 2.
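To make the C = α·A·B + β·C operation and its 2·n³ floating-point operation count concrete, the sketch below shows a simple blocked (tiled) GEMM on the CPU in C++ and derives a GFLOPS figure from the elapsed time. It is only an illustrative reference, not the cuBLAS-based GPU kernel evaluated in the benchmark; the matrix size and tile size are arbitrary choices, and the tiling here only mirrors on the CPU the idea of assigning tiles to GPU thread blocks.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

// Blocked GEMM: C = alpha*A*B + beta*C for n x n row-major matrices.
void gemm(int n, float alpha, const std::vector<float>& A,
          const std::vector<float>& B, float beta, std::vector<float>& C,
          int tile = 64) {
    for (int i = 0; i < n * n; ++i) C[i] *= beta;
    for (int ii = 0; ii < n; ii += tile)
        for (int kk = 0; kk < n; kk += tile)
            for (int jj = 0; jj < n; jj += tile)
                for (int i = ii; i < std::min(ii + tile, n); ++i)
                    for (int k = kk; k < std::min(kk + tile, n); ++k) {
                        const float aik = alpha * A[i * n + k];
                        for (int j = jj; j < std::min(jj + tile, n); ++j)
                            C[i * n + j] += aik * B[k * n + j];
                    }
}

int main() {
    const int n = 1024;  // same order as the 1024 x 1024 case cited above
    std::vector<float> A(n * n, 1.0f), B(n * n, 0.5f), C(n * n, 0.0f);
    auto t0 = std::chrono::steady_clock::now();
    gemm(n, 1.0f, A, B, 0.0f, C);
    auto t1 = std::chrono::steady_clock::now();
    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double gflops = 2.0 * n * n * n / seconds / 1e9;  // 2*n^3 FLOPs
    std::printf("n=%d  time=%.3f s  ~%.2f GFLOPS  C[0]=%.1f\n",
                n, seconds, gflops, C[0]);
    return 0;
}
```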
Convolution_2D and Correlation_2D Benchmark

Convolution_2D and Correlation_2D are used in combination with 2D images and form the foundation for implementing convolutional neural networks (CNNs), which can be used for visual-based navigation and image processing. In image processing tasks, the convolutional layer performs convolution operations on the input image by sliding a small matrix called a convolution kernel or filter, extracting specific feature information [26].

In this experiment, convolution and 2D correlation operations are implemented on the GPU using the CUDA Deep Neural Network (cuDNN) library provided by NVIDIA. cuDNN is a GPU acceleration library specifically designed for deep convolutional neural networks. By calling functions provided by cuDNN, such as cudnnConvolutionForward() and cudnnPoolingForward(), and passing the corresponding handles, input data, weights, and output data descriptors, the corresponding convolution operations can be executed.

Max Pooling and CIFAR-10 Benchmark

Max pooling is commonly used in various fields such as image processing, neural networks, and signal processing. It is also included in the soft-max and max-pool functions of the cuDNN library. The max pooling operation divides the entire image into non-overlapping blocks of the same size and only retains the maximum value within each block, resulting in an output that maintains the original planar structure after discarding the other nodes.

CIFAR-10 uses a 10-layer neural network for inference and is trained on the CIFAR-10 dataset [27]. Each layer is constructed by reusing neural network building blocks from various benchmarks. In the convolutional neural network model, data augmentation is applied to the training dataset, including random flipping, random cropping, and the normalization of input images, to generate more samples. L2 regularization is applied to the training weights, and a BN (batch normalization) layer is used after each convolutional layer to enhance the model's generalization ability. Specifically, the BN operation transforms the activation value x^{(k)} of each hidden layer neuron as follows:

\hat{x}^{(k)} = \frac{x^{(k)} - \mathrm{E}[x^{(k)}]}{\sqrt{\mathrm{Var}[x^{(k)}]}}

where x^{(k)} of a certain neuron in layer t does not refer to the original input, but rather to the linear activation (pre-nonlinearity output) of that neuron in layer t. During the training process, the distribution of input values in the internal layers of the network changes continuously due to the change in parameters. Batch normalization (BN) normalizes the distribution of input values for any neuron in each layer of the neural network to a standard normal distribution with a mean of 0 and a variance of 1 through this normalization.
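The convolution, correlation, and pooling benchmarks above all use cuDNN's descriptor-based API. The fragment below sketches how a forward convolution might be invoked once the tensor, filter, and convolution descriptors have been configured; descriptor setup and workspace sizing are omitted, and all variable names are placeholders rather than the benchmark's actual code.

```c
#include <cudnn.h>

// Sketch of a cuDNN forward convolution call; all descriptors are assumed to
// have been created and configured beforehand, and the names are placeholders.
void conv_forward(cudnnHandle_t handle,
                  cudnnTensorDescriptor_t xDesc, const float *d_x,
                  cudnnFilterDescriptor_t wDesc, const float *d_w,
                  cudnnConvolutionDescriptor_t convDesc,
                  cudnnConvolutionFwdAlgo_t algo,
                  void *workspace, size_t workspaceBytes,
                  cudnnTensorDescriptor_t yDesc, float *d_y)
{
    const float alpha = 1.0f;
    const float beta  = 0.0f;

    cudnnConvolutionForward(handle, &alpha,
                            xDesc, d_x,        /* input tensor */
                            wDesc, d_w,        /* convolution kernel (filter) */
                            convDesc, algo,
                            workspace, workspaceBytes,
                            &beta,
                            yDesc, d_y);       /* output tensor */
}
```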
Typical Application Benchmarks in Space

We define a set of benchmark testing methods that cover common typical applications in space missions. This article selects an image calibration application and an AES encryption application to construct the benchmark, reusing optimized parallel kernel implementations such as the FFT and FIR filtering from the GPU4S benchmark.

Image Calibration Application Benchmark

Image calibration has many important applications in remote sensing and other space domains. Calibration can reduce noise, artifacts, and geometric distortions in images and improve the color consistency and geometric accuracy of remote sensing images, thereby improving their quality and accuracy and helping classification algorithms better accomplish their recognition and classification tasks. The thermal infrared band in remote sensing images can be used to estimate surface temperature. Image calibration can eliminate radiometric disturbances and nonlinear sensor responses in images, thereby improving the accuracy of surface temperature estimation.

In remote sensing applications with panchromatic sensors, image calibration is needed for images captured using imaging instruments on deep space exploration telescopes that require long exposures. Typically, to overcome the limitations of the sensor, multiple frames of images need to be acquired from the front end, and then they are overlaid and summed to form the final image. Before stacking the images, each frame of the acquired images needs to undergo the related preprocessing operations, namely image calibration. The specific steps are as shown in Figure 3. When executing the benchmark experiments, three basic parameters of the image, "IMAGE_FRAMES, IMAGE_WIDTH and IMAGE_HEIGHT", need to be entered.
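The per-frame preprocessing followed by frame stacking described above could be organized as in the sketch below. The exact calibration steps of the benchmark (Figure 3) are not reproduced here; the per-pixel offset/gain correction is only an assumed, representative example, and all names are hypothetical.

```c
// Hypothetical per-frame correction followed by frame accumulation; the real
// preprocessing chain of the benchmark (Figure 3) may differ from this sketch.
__global__ void calibrate_and_accumulate(const unsigned short *frame,
                                         const float *offset, const float *gain,
                                         float *accum, int num_pixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_pixels) {
        float corrected = ((float)frame[i] - offset[i]) * gain[i]; // assumed correction
        accum[i] += corrected;                                     // stack into final image
    }
}

// Host side (sketch): for each of IMAGE_FRAMES frames of IMAGE_WIDTH x IMAGE_HEIGHT
// pixels, copy the frame to the device and launch the kernel once, so the
// accumulated image is ready after the last frame has been processed.
```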
AES Encryption Application Benchmark

The AES (Advanced Encryption Standard) encryption algorithm has many applications in the aerospace and satellite field and is usually used to protect the security of communication data, stored data, and key exchange. For example, spacecraft and satellites regularly send telemetry data, such as attitude information, temperature, and battery status. Using the AES encryption algorithm can ensure the security of these data during transmission, prevent the data from being eavesdropped on or tampered with, and at the same time protect the communication between the ground station and the spacecraft or satellites and prevent unauthorized access and attacks. In addition, some spacecraft and storage devices on satellites contain sensitive information, such as navigation data, images, and videos; the AES encryption algorithm is used to ensure that these data are protected during storage.

The standard AES algorithm is a symmetric-key encryption algorithm that encrypts each plaintext block by executing the same round function 10 times, ultimately generating the ciphertext; the same key is required for encrypting and decrypting the plaintext. The unit of processing in AES is the byte, and the input 128-bit plaintext (P) and the key (K) are divided into 16 bytes. In general, the plaintext block is described by a square matrix of bytes, called the state matrix. In each round of the algorithm, the content of the state matrix keeps changing, and the final result is output as the ciphertext. The bytes in this matrix are arranged in order from top to bottom and left to right. The flow is shown in Figure 4 below. When executing the benchmark experiment, we need to input two basic parameters, "DATA_LENGTH and KEY_LENGTH", which are the length of the data to be encrypted and the length of the key.
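For orientation, the round structure described above can be sketched as follows for the 128-bit-key (10-round) case. The four byte-level transformations are left as stubs rather than implemented, so this is a structural illustration of standard AES-128, not the benchmark's actual kernel.

```c
#include <stdint.h>

// Structural sketch of AES-128 encryption of one 16-byte block.
// The four round transformations are stubs; this is not a usable cipher.
typedef uint8_t state_t[4][4];   // 4x4 byte state matrix, filled column by column

void sub_bytes(state_t s);                         /* stub */
void shift_rows(state_t s);                        /* stub */
void mix_columns(state_t s);                       /* stub */
void add_round_key(state_t s, const uint8_t *rk);  /* stub */

void aes128_encrypt_block(state_t state, const uint8_t round_keys[11][16])
{
    add_round_key(state, round_keys[0]);           // initial key addition

    for (int round = 1; round <= 9; ++round) {     // nine full rounds
        sub_bytes(state);
        shift_rows(state);
        mix_columns(state);
        add_round_key(state, round_keys[round]);
    }

    sub_bytes(state);                              // final (10th) round omits MixColumns
    shift_rows(state);
    add_round_key(state, round_keys[10]);
}
```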
Experimental Hardware Platforms

Internationally, the commercial high-performance processor market is mainly dominated by three companies: NVIDIA, AMD (Santa Clara, CA, USA), and Intel (Santa Clara, CA, USA). In terms of processors for high-performance computing, AMD's latest Instinct MI250X processor has a double-precision floating-point capability of up to 95.7 TFlops (trillions of floating-point operations per second), and NVIDIA's latest Jetson Xavier series high-performance processors also achieve peak performance in the tens of TFlops range [28], while the 4th Gen Intel Xeon Scalable processors deliver improved performance, efficiency, and cost savings for targeted workloads through built-in accelerators. Among these three companies, NVIDIA has the most complete software and hardware ecosystem, providing the CUDA (Compute Unified Device Architecture) platform for developers to create parallel computing programs. In addition, NVIDIA provides various acceleration libraries for intelligent computing. Other high-performance processors include the Loongson series, the Shenwei series, and the Veiglo series. The Loongson processor is widely used in computers, servers, and embedded systems, and has shown good performance and stability in certain specific application scenarios.

This article selects the NVIDIA Jetson AGX Xavier, the Loongson platform (Beijing, China), and an ASUS laptop processor (Taipei, Taiwan) for system evaluation and analysis. Among them, the ASUS laptop is mainly used as a reference for baseline performance results. The NVIDIA Jetson AGX Xavier is a heterogeneous SoC (system-on-chip) including a CPU, a GPU, and several other accelerators; the Loongson experimental platform consists of the Loongson 3A4000 CPU and the Veiglo BI-V100 GPU. Based on the benchmark testing methods constructed in Section 3, this performance evaluation experiment is carried out. The basic performance parameters of the experimental platforms are shown in Table 1. From the table, we can see the performance parameters of the three experimental platforms, including CPU, GPU architecture, GPU memory, power consumption modes, etc. Comparing the platforms, the GPU architecture of the Jetson AGX Xavier is more advanced, supporting Tensor Cores, which can better meet the needs of large-scale computing. In addition, the Jetson AGX Xavier has an optional 10 W power consumption mode, consuming less power compared with the other two platforms. The Loongson platform has the highest number of CUDA cores and 32 GB of HBM2 GPU memory, but its power consumption is several times that of the other platforms.
CoreMark-Pro Benchmark Experiment Results

The CoreMark-Pro benchmark test measures the computational performance of a system by performing a series of basic arithmetic and logical operations. This benchmark can highlight the strengths and weaknesses of the processor. In this experiment, the configurations on the three platforms all use four CPU cores for the calculations, and the final CoreMark-Pro score is generated by processing the results through Perl scripts. The ratio of the single-core to multi-core experimental results is then calculated. The test results are shown in Table 2, where the scores represent the number of algorithm executions performed by the system in a fixed time. A higher score means that the processor can perform more computing tasks in a given time, reflecting stronger performance. From the table, it can be seen that, for the multi-core experimental results, the ASUS FL8000 platform has the highest CPU score, followed by the Jetson AGX Xavier platform. The multi-core experimental score of the Loongson platform is about half that of the Jetson AGX Xavier platform and one-third that of the ASUS FL8000 platform.

High-Performance Computing Benchmark Experiment Results

Based on the benchmark methods described in Section 3.3, experiments are carried out on each of the three platforms, where GPU computing is implemented in the CUDA framework and CPU computing is implemented in the OpenMP framework, with the following parameters being set: "DATATYPE = FLOAT" and "BLOCKSIZE = 32". The experiments also include the respective optimized implementations. In the CUDA framework, CUDA's data parallelism and model parallelism techniques are used to accelerate the training process, while the relevant acceleration libraries (cuDNN, cuBLAS, etc.) are called for optimization. In the OpenMP framework, critical computational loops are marked as parallel regions using compiler directives to achieve thread-level parallel computation, and data locality optimization is used to improve computation speed.

Test Results on GPUs

The CUDA framework has advantages such as parallel computing capability, high-bandwidth memory, flexible kernel function programming, and optimized GPU acceleration libraries, making it capable of providing more efficient and faster computing capabilities for space artificial intelligence applications. The results in Table 3 are the average statistics of 20 experiments. The table displays the total execution time of the 12 computational benchmark modules on the experimental platforms. The GPUs used in the three experimental platforms are the NVIDIA Volta GV10B, the NVIDIA 940MX, and the Veiglo BI-V100. From the table, it can be seen that for some simple computational modules, the execution time difference among the three platforms is relatively small. However, for complex computational modules like the CIFAR-10 inference chain, the computational power of the Jetson AGX Xavier is fully utilized, and execution is 4 times faster than on the ASUS FL8000 laptop and 10 times faster than on the Loongson platform.
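Since the results in Table 3 and Figure 5 are reported as total execution time, kernel timing on the GPU platforms would typically follow the CUDA-event pattern sketched below; the kernel name and launch configuration are placeholders, not the benchmark's actual code.

```c
#include <cuda_runtime.h>

// Minimal CUDA-event timing pattern for measuring a kernel's execution time.
// benchmark_kernel is a placeholder for any of the benchmark modules.
__global__ void benchmark_kernel(float *data, int n);

float time_kernel_ms(float *d_data, int n)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    benchmark_kernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);              // wait until the kernel has finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}
```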
Figure 5 shows that the Jetson AGX Xavier always has the fastest execution time, demonstrating its powerful computing performance. The Loongson experimental platform performed poorly in the inference of cifar_10. However, after optimization, the execution time of the convolution_2D and correlation_2D operations was significantly shortened, and the difference with the Jetson AGX Xavier was significantly reduced. This indicates that the Loongson platform also has certain advantages in some specific operations.

Test Results on CPUs

OpenMP (open multi-processing) is an open parallel computing framework that can be used for multi-core CPU systems with a shared memory architecture. It achieves the parallelization of tasks by using compiler directives and provides a set of API interfaces for writing parallel programs in a multi-threaded environment. The OpenMP framework has advantages such as simplicity, cross-platform compatibility, automated parallelization, and performance scalability. It can provide convenience and efficiency in CPU development for space applications, accelerate the computation process, and improve application performance.

Optimized implementations under OpenMP were also performed, and the average results of 20 experiments are shown in Table 4. The table shows the total program execution time of the 12 computing benchmark modules on the experimental platforms. The CPUs of the three experimental platforms are the 8-core NVIDIA Carmel, the 4-core Intel i7-8550U, and the Loongson 3A4000. This experiment used four CPU cores for computation on all three platforms. From the experimental results in Table 4, it can be observed that the performance of the NVIDIA Carmel CPU is very close to that of the Intel i7-8550U CPU, and overall, the performance of the Intel i7-8550U CPU is slightly stronger.
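As an illustration of the compiler-directive approach described for the OpenMP framework, a representative parallelized loop might look like the sketch below; the FIR-style loop body merely stands in for one of the benchmark's critical computational loops, and the function name is hypothetical.

```c
#include <omp.h>

// Sketch of thread-level parallelism with an OpenMP compiler directive.
// Each output sample is independent, so the outer loop can be parallelized.
void fir_filter_omp(const float *x, const float *b, float *y, int len, int order)
{
    #pragma omp parallel for schedule(static)
    for (int n = order; n < len; ++n) {
        float acc = 0.0f;
        for (int i = 0; i <= order; ++i)
            acc += b[i] * x[n - i];   // tap-delay-line accumulation, cf. Equation (1)
        y[n] = acc;
    }
}
```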
To present the above results more intuitively, the experimental results of the OpenMP framework were plotted as a bar chart using MATLAB (2018a) software, as shown in Figure 6. From the graph, it can be seen that the execution time of the Loongson platform is relatively long, about five times that of the other two platforms, and there is still a certain gap even after optimization. The main reason for this difference may be that the CPU of the Loongson platform is designed for desktop and mobile computing, focusing on comprehensive performance and general computing capabilities. On the other hand, the Jetson AGX Xavier is designed specifically for embedded devices and edge computing, emphasizing low power consumption and high efficiency.

Typical Application Benchmark Experimental Results

Based on the two typical benchmarks introduced in Section 3.4 of this paper, experiments were conducted on the three platforms, including implementations based on the CUDA framework on the GPU processors and the OpenMP framework on the CPU processors, together with statistics on the throughput and power consumption of each platform during benchmark testing.
Image Calibration Benchmark Test Results

The throughput and power results of the image calibration experiment are shown in Tables 5 and 6. To improve the accuracy of the experiments and avoid exceptional cases, an average of 20 experimental results was taken for each platform under the same experimental conditions, measured in Mpixels/s. Five standard image sizes were used in the experiment: 1024 × 1024, 2048 × 2048, 4096 × 4096, 8192 × 8192, and 10240 × 10240. Fixed-seed-generated quasi-random data were used as input data.

In Table 5, the throughput of the CUDA framework is greater than that of the OpenMP framework for each experimental platform when inputting an image of the same size, and the difference in the execution results is more significant for the Loongson platform. Under the OpenMP framework, the throughput of the Jetson AGX Xavier platform is always the largest, but under the CUDA framework, the throughput of the Loongson platform surpasses that of the Jetson AGX Xavier when the input image size increases to 4096 × 4096, though the power consumption of the Loongson platform is higher as well. In Table 6, when the input image size is 10,240 × 10,240, the power consumption of the Loongson platform is 4.67 times that of the Jetson AGX Xavier (in the CUDA framework). Figure 7 presents a comparison that is more user-friendly and easier to understand.
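For clarity on how a Mpixels/s figure of this kind is typically derived, the small helper below shows one plausible computation (pixels processed divided by the averaged wall-clock time); it is an assumed formula for illustration, not code taken from the benchmark suite.

```c
// Assumed throughput computation: megapixels processed per second, using the
// mean execution time over repeated runs (the benchmark averages 20 runs).
double throughput_mpixels_per_s(int width, int height, int frames, double mean_seconds)
{
    double pixels = (double)width * (double)height * (double)frames;
    return pixels / (mean_seconds * 1.0e6);
}
```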
AES Encryption Algorithm Benchmark Test Results

The AES encryption algorithm experiment encrypted data of different lengths using three key lengths: 128-bit, 192-bit, and 256-bit. To improve the accuracy of the experiments and avoid exceptional cases, an average of 20 experimental results was taken for each platform under the same experimental conditions. The throughput and power results of the AES encryption algorithm experiment on the three platforms are shown in Tables 7 and 8 below.

According to the results shown in Tables 7 and 8, during the benchmark testing of the AES encryption algorithm, it can be observed that for the OpenMP framework, the throughput of the Jetson AGX Xavier platform is 23 times higher than that of the Loongson platform when the encryption data size is 16,777,216 bytes. On the other hand, for the CUDA framework, the Loongson platform exhibits the highest throughput, 3.5 times higher than that of the Jetson AGX Xavier experimental platform when the encryption data size is 67,108,864 bytes; however, its power consumption is also 6.7 times that of the Jetson AGX Xavier. A more visual comparison is illustrated in Figure 8.

Data Transmission Time Analysis

In this paper, the data transmission times of the image calibration experiment and the AES encryption experiment on the Jetson AGX Xavier and Loongson platforms were analyzed. The GPUs of these platforms are the NVIDIA Volta GV10B and the Veiglo BI-V100. Figure 9a shows the data transmission time comparison for the image calibration experiment, and Figure 9b shows the data transmission time comparison for the AES encryption experiment. D2H (device to host) represents the transfer of data from the device side to the host side, and H2D (host to device) represents the transfer of data from the host side to the device side. When the size of the transferred data is small, the data transmission times of the two platforms are similar. However, as the transferred data increase, the data transmission time of the Jetson AGX Xavier platform is much shorter than that of the Loongson platform. NVIDIA provides the GPUDirect RDMA interface for professional-grade GPUs [29], which allows direct access to the bus address of GPU memory. When performing large-scale data transmission and computation, the Jetson AGX Xavier platform has advantages over the Loongson platform.
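The H2D and D2H times discussed here would typically be measured around explicit cudaMemcpy calls, as in the sketch below; the buffer names and size are placeholders, and since cudaMemcpy with pageable host memory is synchronous, a wall-clock timer around each call is sufficient.

```cpp
#include <cuda_runtime.h>
#include <chrono>

// Sketch of timing host-to-device (H2D) and device-to-host (D2H) transfers.
// h_buf, d_buf and nbytes are placeholders for the benchmark's actual buffers.
void measure_transfers(void *h_buf, void *d_buf, size_t nbytes,
                       double *h2d_ms, double *d2h_ms)
{
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    cudaMemcpy(d_buf, h_buf, nbytes, cudaMemcpyHostToDevice);   // H2D
    auto t1 = clock::now();
    cudaMemcpy(h_buf, d_buf, nbytes, cudaMemcpyDeviceToHost);   // D2H
    auto t2 = clock::now();

    *h2d_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    *d2h_ms = std::chrono::duration<double, std::milli>(t2 - t1).count();
}
```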
Conclusions

This study investigates benchmark tests for different application scenarios in ground and space missions. The benchmark tests for high-performance processors in space applications provided by the European Space Agency (ESA), GPU4S and OBPMARK, were selected. The performance of the Jetson AGX Xavier embedded processor, the Loongson platform, and the ASUS FL8000 laptop was systematically tested. We conducted tests on representative computational modules and application scenarios on the CPUs and GPUs of the three experimental platforms, including the fast Fourier transform, matrix multiplication, the CIFAR-10 inference chain, image calibration processing, AES encryption, etc. The performance of the Jetson AGX Xavier and Loongson platforms on different computationally intensive operators was analyzed in detail.

Through experimental research on the implementation capability of these architectures and the advantages of different architectures in space missions, this study provides important references for gradually adopting more advanced architectures to better meet the computational requirements of future space missions.

Figure 2. The layout method of the general matrix multiplication method.

Figure 5. A comparison of the total execution time of the computational benchmark modules based on the CUDA framework on the three experimental platforms, also including the optimization implementation, where (a) is the cifar_10 benchmark, (b) is the cifar_10_multiple benchmark, (c) is the convolution_2D benchmark, and (d) is the correlation_2D benchmark.

Figure 6. A comparison of the total execution time of the computational benchmark modules based on the OpenMP framework on the three experimental platforms, also including the optimization implementation, where (a) is the cifar_10 benchmark, (b) is the cifar_10_multiple benchmark, (c) is the convolution_2D benchmark, and (d) is the correlation_2D benchmark.
Figure 7. A comparison of the throughput computation results in Mpixels/s for the image calibration benchmarks executed on the three experimental platforms, where (a) is the OpenMP framework and (b) is the CUDA framework, computed by averaging the entire processing pipeline over 20 iterations.

Figure 8. A comparison of the throughput computation results in Mbytes/s for the AES encryption algorithm benchmark executed on the three experimental platforms, where (a) is the OpenMP framework and (b) is the CUDA framework, computed by averaging the entire processing pipeline over 20 iterations.

Figure 9. A comparison of the data transfer time (device to host; host to device) for the benchmarks on the experimental platforms, where (a) is the image calibration benchmark and (b) is the AES encryption algorithm benchmark.

Table 1. Basic performance parameters of the experimental platforms.

Table 5. Throughput of each experimental platform under the Image Calibration Benchmark (unit: Mpixels/s).

Table 6. Power consumption of each experimental platform under the Image Calibration Benchmark (unit: W).

Table 7. Throughput of each experimental platform under the AES Encryption Algorithm Benchmark (unit: Mbytes/s).

Table 8. Power consumption of each experimental platform under the AES Encryption Algorithm Benchmark (unit: W). Note: For input data of 67,108,864 bytes and 104,857,600 bytes, the maximum computational capacity of the ASUS FL8000 notebook's CUDA framework was exceeded, so no statistical results were obtained.
2024-01-06T16:30:40.992Z
2023-12-27T00:00:00.000
{ "year": 2023, "sha1": "dc8f0c00fec35b99c45cf8dc7575badfd23934d8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/24/1/145/pdf?version=1703651922", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "04e9820c6a653040d848a729a212f0d06e7bace7", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
260231619
pes2o/s2orc
v3-fos-license
Letter in response to the case report: "Recalcitrant generalized granuloma annulare treated successfully with dupilumab"

To the Editor: We are grateful to share our experience with you concerning a generalized form of granuloma annulare (GA) treated with dupilumab off-label. Following your case report1 and based on the article published by Min et al,2 which showed that a Th2-type inflammatory component is also present in GA, we decided to treat our patient with dupilumab.

We report the case of an 81-year-old woman affected by a generalized form of GA that appeared 3 years prior, initially localized on the upper limbs and then extending to the entire trunk and a portion of the lower limbs (Fig 1). The diagnosis was histologically confirmed by a pathologist in 2019 (Fig 2). Anamnesis was negative for atopic dermatitis and allergic comorbidities.

After several therapy failures, including infliximab, doxycycline, and methotrexate, dupilumab was administered at a loading dose of 600 mg at baseline and subsequently 300 mg every 2 weeks (Figs 3, A and 4, A).

At her 4-week follow-up, the lesions had become less erythematous and infiltrated. At her 16-week follow-up, the signs of inflammation had nearly disappeared, with the lesions resolving and the presence of postinflammatory hyperpigmentation. At her 24-week follow-up, we achieved the resolution of most of the lesions, with some hyperpigmentation (Figs 3, B and 4, B).

GA is a chronic inflammatory, noninfectious granulomatous skin disease with an unknown etiology. Localized GA typically resolves on its own, while the generalized form, which accounts for approximately 15% of cases,3 can be more resistant and challenging to treat.

Morphological similarities to other forms of granuloma suggest that GA is caused by a Th1 inflammatory reaction, leading to the use of drugs that inhibit this Th1 activation, such as tumor necrosis factor-alpha inhibitors.4 Despite best efforts, therapy often fails, suggesting that an alternative pathway may be involved in the development of GA, as in the case presented. The response to Th1 or Th2 inhibition may depend on the stage of granuloma formation.1 In a case with a long-standing GA history unresponsive to tumor necrosis factor-alpha inhibitors, Th2 signaling likely predominated, explaining the positive response to dupilumab.

Moreover, according to the article by Min et al,2 it is evident that in cases where the patient lacks atopic comorbidities, there is still an upregulation of Th2-related markers in nonlesional skin affected by GA, indicating the presence of ongoing systemic inflammation. These findings are consistent with similar observations in other inflammatory skin disorders, emphasizing the necessity for clinical trials utilizing systemic agents to specifically target systemic inflammation and effectively manage generalized GA.2

In conclusion, through this letter, we aim to emphasize the significance of Th2 skewing in GA, characterized by pronounced upregulation of interleukin 4 and elevation of JAK3. These observations point toward a potential role for targeted treatments such as dupilumab, which may soon be recognized as a valuable addition to the armamentarium for managing refractory forms of GA.

IRB approval status: Not applicable.
Patient consent: Consent for the publication of all patient photographs and medical information was provided by the authors at the time of article submission to the journal, stating that all patients gave consent for their photographs and medical information to be published in print and online and with the understanding that this information may be publicly available.

Fig 1. Patient at baseline: granuloma annulare on the trunk and upper limbs.

Fig 2. Histopathology image of granuloma annulare: granulomatous inflammatory pattern situated within the superficial and mid dermis; the dermis in granuloma annulare reveals histiocytes arranged in an interstitial pattern, the presence of multinucleate giant cells, and a mild perivascular lymphocytic infiltrate.

Claudia Paganini, MD,a Marina Talamonti, MD,b Elena Campione, MD,a,b Luca Bianchi, MD,a,b and Marco Galluzzo, MDa,b

From the Department of Systems Medicine, University of Rome "Tor Vergata", Rome, Italya; and Dermatology Unit, Fondazione Policlinico Tor Vergata, Rome, Italy.b

Funding sources: None.

Fig 3. A, A detail of the upper chest at baseline. B, A detail of the upper chest after 24 weeks of dupilumab.
2023-07-28T15:10:35.091Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "8075d5d0fd75e24ab2b2d6f98d2f39fdc668b855", "oa_license": "CCBY", "oa_url": "http://www.jaadcasereports.org/article/S2352512623002679/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0dc41fe973514652c2051e69dff28ec03a327517", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
256701305
pes2o/s2orc
v3-fos-license
The role of transcranial magnetic stimulation in treating depression after traumatic brain injury

Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Traumatic brain injury (TBI) is defined as the brain dysfunction occurring after an individual sustains trauma to the cerebrum [1]. Various incidents may cause TBI, such as motor vehicle accidents (MVA), sports injuries, or violent events [1]. Depending on the location and severity of impact, patients may experience relatively mild to severe, lifelong symptoms. The presence of symptoms following TBI may be diagnosed as post-concussion syndrome (PCS) [2]. Many patients have acute depression symptoms after TBI which persist as medication-refractory post-concussion depression. TBI survivors are at increased lifetime risk of developing pharmaco-resistant major depressive disorder, bipolar disorder, dysthymia, or other psychiatric disorders, along with an increased risk of seizures. Repetitive transcranial magnetic stimulation (rTMS) is a noninvasive, outpatient therapy that is FDA approved to treat major depressive disorder (MDD) [3]. Whereas rTMS therapy has been growing in the field of psychiatry, there is limited research on the use of rTMS for the treatment of neurological issues, such as post-concussion syndrome. Low-frequency right-sided rTMS, an inhibitory protocol, has demonstrated a mild, positive impact on TBI-related depression [4]. To our knowledge, the current study is the first to assess excitatory rTMS of the left dorsolateral prefrontal cortex as a treatment for depression in individuals who experienced TBI and PCS.
We hypothesize that the rTMS protocol will significantly improve depression symptoms in patients. In this retrospective, open-label, uncontrolled study, the utility of rTMS in patients suffering from TBI was examined. The study was approved by the Texas Christian University Institutional Review Board. Adults diagnosed with TBI or PCS and treated using rTMS between January 1, 2015 and August 31, 2022 were included in the study. Prior to rTMS treatment, a Patient Health Questionnaire-9 (PHQ-9) was administered to assess severity of depression [5]. The TMS devices used in this study were the 2017 Neurosoft Cloud TMS and the 2016 Magstim Rapid 2. Patients underwent a series of either 15 to 16 or 30 to 38 rTMS sessions, determined by their authorization status. The FDA-approved DASH excitatory protocol (10 Hz) was used for all patients, with 11-second rest times after each 4-second treatment [6]. One session lasted 18.5 minutes. The total number of pulses per session was 3000. The magnet was positioned at the left dorsolateral prefrontal cortex. The power setting was 120% of the patient's motor threshold. Patients underwent five rTMS sessions per week, followed by a tapering schedule for the last six sessions. After completion of the full treatment series, the PHQ-9 was used to measure post-rTMS depression. The Hamilton Rating Scale for Depression (HAM-D) and Beck's Depression Inventory-II (BDI-II) were administered as further confirmation of the post-test diagnosis. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) software by International Business Machines (IBM).

Fifty-nine patients were included in the study. Average age was 47 years (SD = 12), and 44% (n = 26) were male. All patients were diagnosed with TBI or PCS and had PHQ-9 data. Over half (n = 34) of the patients had brain magnetic resonance imaging (MRI) performed prior to rTMS treatment. All patients with brain MRI results displayed findings consistent with TBI. Of the 59 patients, 27 had 30 to 38 rTMS sessions, whereas 32 had 15 to 16 rTMS sessions. On average, the PHQ-9 indicated moderately severe depression in patients prior to rTMS treatment and mild to moderate depression after rTMS treatment. Likewise, BDI-II and HAM-D scores indicated mild depression after treatment. PHQ-9 scores decreased significantly in the 15 to 16 session cohort from baseline (M = 15.67, SD = 5.65) to the final session (M = 9.05, SD = 6.37, t(38) = 5.82, p < 0.001) (Fig. 1). The effect size of the mean difference was large (d = 0.93). A similar outcome occurred in the 30 to 38 session cohort. Depression scores decreased from baseline (M = 17.41, SD = 4.64) to the final session (M = 10.26, SD = 6.43, t(26) = 5.36, p < 0.001) (Fig. 1). The effect size of this mean difference was also large (d = 1.03). An independent-samples t-test was conducted to determine differences in the change of PHQ-9 between the 15 to 16 session cohort and the 30 to 38 session cohort, as well as between male and female patients. The results indicated a non-significant difference between these groups. To determine whether there were any sex or age-group related differences in depression at any time during the study, independent-samples t-tests were computed on baseline, 15 to 16 session, 30 to 38 session, and final PHQ-9 scores. None of the mean differences in sex or age groups were statistically significant. This study analyzed an excitatory rTMS protocol as a treatment for post-concussion depression following TBI.
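For reference, the reported effect sizes are consistent with Cohen's d computed for paired samples as the t statistic divided by the square root of the number of pairs (for example, 5.82/√39 ≈ 0.93 and 5.36/√27 ≈ 1.03); the expression below states this form, which is assumed here since the exact variant is not named in the text.

```latex
% Paired-samples effect size (assumed variant, consistent with the reported values):
% the mean pre/post change divided by the standard deviation of the differences,
% which equals t / sqrt(n) for a paired t-test with n pairs.
d = \frac{\bar{X}_{\mathrm{pre}} - \bar{X}_{\mathrm{post}}}{s_{\mathrm{diff}}} = \frac{t}{\sqrt{n}}
```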
Primary results supported the hypothesis that TBI patients experienced decreased depression following rTMS treatment. With a Cohen's d of 0.932 in the 15 to 16 session group and 1.031 in the 30 to 38 session group, the effect was large. The results suggested that rTMS was an effective treatment for depression in patients with PCS. rTMS is minimally invasive and safe relative to pharmaceutical and electroconvulsive therapies. It is important to note the limitations that arise from the retrospective, open-label, uncontrolled nature of this study. The ongoing assessment of rTMS as a treatment for post-concussion depression would benefit from a controlled study comparing patients undergoing rTMS treatment to patients undergoing alternative treatments or no treatment. With the only longitudinal measure of depression in this study being the PHQ-9, a future study could include surveys of other aspects of PCS. This study suggests that rTMS is a potential treatment option for depression following TBI. Both the 15 to 16 session and 30 to 38 session cohorts showed significant decreases in depression as measured by the PHQ-9 following rTMS treatment. These findings support the use of rTMS in post-concussion depression treatment and highlight the need for more research on rTMS therapy following TBI.

Declaration of competing interest: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Author Dr. Harpreet Singh owns the medical practice where the study took place. He declares no other financial conflicts of interest. All other authors declare no financial conflicts of interest.
2023-02-10T14:27:05.245Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "7a40a038d78b23f6b52472dbf5c951bd1c0cffd2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.brs.2023.02.005", "oa_status": "GOLD", "pdf_src": "Elsevier", "pdf_hash": "4d5c77b2cf22e9e26e5cd343fca53d3d800cdf68", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
234454626
pes2o/s2orc
v3-fos-license
Influence of Drainage Layer Thickness on the Preconsolidation Rate of Dredged Marine Soils: A Lab Simulation

This study was conducted to investigate the improved consolidation rate of dredged marine soils used as backfill in land reclamation, with the aim of making the material's reuse more favourable on site compared with disposal, as in normal practice. In order to quicken the dissipation of excess pore water under loading, the efficiency of the drainage layers sandwiching the dredged marine soil plays an important role. With a modified large oedometer, the present study examines the efficiency of three granular materials, i.e. sand (S), palm oil clinker (POC) and pavement milling waste (PMW), in two different drainage thicknesses for the effective discharge of pore water during consolidation of the dredged marine soils. With sand adopted as the control, it was observed that the thicker (100%) layers of the granular materials produced consolidation rates about 10% higher than the 50% thickness. Settlement reductions were also found with POC and PMW as the drainage layers, i.e. 0.6% and 0.5% respectively in comparison with sand. Overall, DMS consolidation improved by about 2% when a granular layer was incorporated.

Introduction

For the past decades, dredging projects have been an economical solution to problems related to the siltation of channels, land reclamation, and increasing ship sizes. However, this activity has negative effects on marine life and social impacts on fishery activities, recreation, and navigation [1]. The extraction of sediments using a dredger produces material with a high water content, in the range of 200% to 900%, which indicates that the dredged sediments are in a slurry/flow state [2]. The dredged sediments, known as dredged marine soils (DMS), have poor engineering properties and are commonly disposed of rather than reused. Numerous researchers have identified beneficial reuses of DMS, such as erosion control, shoreline stabilisation and construction purposes, unless the contamination is found to be excessive [3][4]. There are many benefits from reusing DMS rather than disposing of it, especially for marine ecosystems.

Consolidation is the process of soil compression over time by dissipating excess pore pressure. When pressure is applied to a saturated consolidating soil, the compression process results in water or air expulsion from void spaces, a reduction in water content, deformation, and relocation of soil particles. For single drainage, the soil sample rests on an impermeable base, with drainage in the upward direction only. In this case, Berry and Reid [5] stated that the consolidation of the lower half of the soil layer is a mirror image of the upper half. The largest excess pore water pressure occurs at the centre and at the bottom of the soil under double and single drainage conditions, respectively. The objective of this study is to accelerate consolidation by using granular materials as drainage layers, especially during land reclamation.

Granular materials can be very diverse in terms of their applications and types. The disposal of waste products from different industries is a growing challenge these days. The waste materials could be reused as backfill, road pavement or concrete materials. Yet, efforts to advance such applications, for example by reusing the waste as partial reinforcement in composite materials, are still ongoing. Using recycled materials in construction applications will reduce the negative impact on the environment. It also provides an alternative to reduce the usage of natural aggregates, which is one of the key issues in the construction industry [6].
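As background to the drainage conditions described in this introduction, Terzaghi's one-dimensional consolidation theory (a standard result, not specific to this paper) makes the role of the drainage path explicit: the time needed to reach a given degree of consolidation scales with the square of the longest drainage path, which is why double drainage and efficient drainage layers shorten consolidation time.

```latex
% Terzaghi's dimensionless time factor for 1-D consolidation (standard theory):
%   T_v  : time factor for a given degree of consolidation
%   c_v  : coefficient of consolidation of the soil
%   t    : elapsed time
%   H_dr : longest drainage path (layer thickness H for single drainage,
%          H/2 for double drainage)
T_v = \frac{c_v \, t}{H_{dr}^{2}}
\quad\Longrightarrow\quad
t = \frac{T_v \, H_{dr}^{2}}{c_v}
```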
Materials and Method

The dredged marine soils were collected from Kuala Perlis during dredging works by the Malaysian Marine Department. The soil samples were dredged at a depth of 4-6 m below the seabed using a backhoe dredger. All retrieved DMS samples were handled carefully: they were double-lined in heavy-duty sampling bags, stored inside white pails, and tightly sealed. This was to prevent moisture loss during transportation back to the laboratory in Johor. Sand, palm oil clinker (POC) and pavement milling waste (PMW) were used as the drainage layer materials in this study. POC is a biomass waste material from the incineration of palm oil shells at a palm oil mill in Kluang, Johor. PMW was collected during maintenance roadworks at Melaka. All characterisation of the materials was based on BS 1977 [7].

The soil specimen was placed in the oedometer ring (100 mm × 100 mm) with porous stones at the top and bottom of the specimen; pressure was applied in the vertical direction and deformation occurred. The test configurations are shown in Figure 1, including the control specimen (DMS). Granular materials, i.e. sand, PMW or POC, were placed as layers at the top and bottom according to the required thickness, as illustrated in Figures 1b and 1c. The drainage layer thicknesses used in this study were 50% and 100% by dry weight of the DMS, i.e. 16 mm and 32 mm (total of the two layers at the top and bottom), respectively. To prevent the penetration of granular particles into the DMS, a non-woven geotextile separator was used. Note that the specimens were disturbed soil. The oedometer test assumes that the soil sample is fully saturated, and it is laterally confined to avoid moisture loss. The sample was kept fully submerged in water during the test. Each load increment was placed after the settlement under the previous load had reached t100, i.e. after full dissipation of excess pore pressures, when an equilibrium state was reached. Each stress was double the previous one, i.e. 6.25, 12.5, 25, 50, 100, 200, 400 and 800 kPa.

Characterisation of Materials

According to the Unified Soil Classification System (USCS), the DMS is classified as high-plasticity clay (CH), with 61% clay, 38% silt and only 1% sand. The initial water content (wc) of the DMS is 218%, and the ratio wc/LL is 2.96. The main chemical compounds detected in the DMS are sodium oxide (Na2O) at 46.22%, followed by 30.53% silica oxide (SiO2) and 13.38% alumina (Al2O3). Na2O was found to be the highest constituent, as the material came from the seabed. Silica oxide was traced in the mineralogy as quartz, which can be found in sand or aggregates. The XRD analysis of the DMS shows peaks corresponding to the phases of the specimen. As shown in Figure 2a, the plot shows the presence of quartz, halite, illite, montmorillonite, biotite and albite as the main crystalline phases in the DMS.

The size of the granular materials is in the range of 2.00-2.36 mm. Based on the USCS, sand and POC were classified as well-graded sand (SW) and well-graded gravel (GW) respectively, while PMW was considered poorly graded gravel (GP). The XRF analysis found that PMW has 68.27% SiO2, 15.63% Al2O3, 5.9% Na2O and 3.53% K2O. Other components were traced but at levels lower than 2%. Other impurities, such as bitumen, were found on the aggregates. As for POC, the major element compounds detected consist of 73.04% SiO2, 7.895% Al2O3, 7.667% K2O, 3.1% MgO, 2.9% CaO and 2.2% Fe2O3.
The morphology of PMW from FESEM (Figure 2b) shows smooth, sharp-edged and flat-shaped particles. XRD analysis was used to trace the mineralogical properties of PMW. The peaks of the main phases, including quartz, albite, dolomite and mullite, were detected in the region of 22-68° 2θ. Figure 2c shows the XRD and FESEM results for POC. POC particles are categorized as generally angular and irregular, in various sizes and diameters. The irregularities are clearly seen: some particles are flaky, while others have sharp edges and semi-hexagonal pores on their surfaces. The material shows micropores of small, medium and extra-large sizes.

Comparison of Drainage Layer Thickness During Consolidation

The void ratio-effective vertical stress (e-log σ'v) curves in Figures 3 and 4 were measured with the modified large oedometer test, as mentioned in the Materials and Method section. A clear bending point corresponding to yield consolidation is observed at 25 kPa. At the first stage of compression (6.25-12.5 kPa), the curves for all the granular materials appear enveloped or overlapping with each other, especially for the 32 mm (100%) drainage layer thickness. In contrast, for the 16 mm (50%) thickness, only PMW-50 and S-50 overlap, and this happens from 12.5 kPa until 50 kPa. Beyond a certain point, the behaviour for both drainage thicknesses, regardless of the granular material, shows that the materials used in this study work well as drainage layers. PMW-100 shows the least settlement, followed by S-100 and POC-100 respectively. Even though PMW-100 is a waste material, this does not mean that it has poor drainage ability compared with S-100 (clean sand). The bitumen coating on the PMW could be the reason, because the slippery surface of PMW lets the water flow through the voids without any problem. However, the difference between PMW and sand is only about 0.5%. Therefore, both materials are considered the best drainage materials compared with POC. From the morphology imaging, POC seems to have a lot of voids on its surface and can absorb water effectively during the incremental stress. Eventually, the voids of POC entrap the water, and unintentional clogging happens due to the crushing of POC particles when higher stresses are applied.

A soil can be permeable when there are large voids within it, such as in gravels and sands, while fine soils such as clay have smaller voids, resulting in lower permeability. Thus, the properties and characteristics of the DMS influence the rate of consolidation. For example, when soft clay is subjected to incremental stress, water disperses from the soil slowly. This is because of the low permeability of clay soils, as shown in Figure 5. The plotted graph shows that the DMS control, without any additional granular materials, took a longer time to finish consolidating. Obviously, the 100% granular layers allowed the surcharge-load-induced excess pore pressure to dissipate faster at the early stage of consolidation, by 10% compared with the 50% layer thickness.

Conclusions

Based on the results, the oedometer tests on DMS with two different drainage layer thicknesses, 50% and 100% by dry weight, show how the drainage layer influences the acceleration of the DMS consolidation rate. The DMS control, without any additional granular materials, took a longer time to finish consolidating. The 100% granular layer thickness allowed the surcharge-load-induced excess pore pressure to dissipate faster at the early stage of consolidation, by 10% compared with the 50% layer thickness.
The thicker the granular layer, the faster water dissipates from the DMS, which is accompanied by a gradual reduction in compressibility. This laboratory simulation therefore shows that waste granular materials can be used effectively as drainage layers to accelerate the discharge of excess pore water in a backfilled embankment or reclaimed land.
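Not part of the original study: the following is a minimal Python sketch of how the incremental-loading schedule described in the Materials and Method section (each stress double the previous, 6.25 to 800 kPa) maps onto an e-log σ'v curve from oedometer settlement readings. The specimen height, initial void ratio and settlement values are invented placeholders for illustration, not the measured data reported above.

```python
# Illustrative only: reconstructs an e-log(sigma'_v) curve from oedometer
# settlement readings. All numbers are placeholder assumptions, not the
# measured data from the study.

initial_height_mm = 100.0   # assumed specimen height H0
initial_void_ratio = 5.8    # assumed e0 for a very soft dredged marine clay

# Load schedule used in the test: each stress doubles the previous one (kPa).
stresses_kpa = [6.25 * 2**i for i in range(8)]   # 6.25, 12.5, ..., 800

# Hypothetical cumulative settlements (mm) at the end of each increment (t100).
settlements_mm = [2.0, 4.5, 8.0, 12.5, 17.5, 22.0, 26.0, 29.5]

def void_ratio(settlement_mm: float) -> float:
    """e = e0 - (dH / H0) * (1 + e0), assuming a fully saturated specimen."""
    return initial_void_ratio - (settlement_mm / initial_height_mm) * (1 + initial_void_ratio)

for stress, dh in zip(stresses_kpa, settlements_mm):
    print(f"sigma'_v = {stress:7.2f} kPa  ->  e = {void_ratio(dh):.3f}")
```

Plotting e against log10 of the stress values produced this way gives the kind of e-log σ'v curve discussed for the DMS and drainage-layer configurations.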
The Contribution of EDF1 to PPARγ Transcriptional Activation in VEGF-Treated Human Endothelial Cells Vascular endothelial growth factor (VEGF) is important for maintaining healthy endothelium, which is crucial for vascular integrity. In this paper, we show that VEGF stimulates the nuclear translocation of endothelial differentiation-related factor 1 (EDF1), a highly conserved intracellular protein implicated in molecular events that are pivotal to endothelial function. In the nucleus, EDF1 serves as a transcriptional coactivator of peroxisome proliferator-activated receptor gamma (PPARγ), which has a protective role in the vasculature. Indeed, silencing EDF1 prevents VEGF induction of PPARγ activity as detected by gene reporter assay. Accordingly, silencing EDF1 markedly inhibits the stimulatory effect of VEGF on the expression of FABP4, a PPARγ-inducible gene. As nitric oxide is a marker of endothelial function, it is noteworthy that we report a link between EDF1 silencing, decreased levels of FABP4, and nitric oxide production. We conclude that EDF1 is required for VEGF-induced activation of the transcriptional activity of PPARγ. Introduction Peroxisome proliferator-activated receptor gamma (PPARγ) is a ubiquitous ligand-inducible transcription factor belonging to the nuclear receptor superfamily [1]. PPARγ, which is highly expressed in adipose tissue, is the master regulator of adipocyte differentiation and is fundamental for mature adipocyte function [2][3][4]. PPARγ is also implicated in glucose homeostasis as it upregulates genes involved in glucose uptake and controls the expression of adipokines, thereby having an effect on insulin sensitivity [3,5,6]. It is now clear that PPARγ plays an important protective role in the vasculature. Its activity has been proven both in smooth muscle cells, where it has a role in the regulation of vascular tone [7], and in endothelial cells (EC), where it exerts anti-inflammatory and antioxidant effects [8]. In endothelial-specific PPARγ −/− mice, loss of PPARγ contributes to endothelial dysfunction associated with enhanced production of free radicals and exacerbated inflammation [9]. Accordingly, human and animal studies indicate that thiazolidinediones (TZD), which are largely utilized as antidiabetic drugs and PPARγ activators, attenuate vascular diseases including atherosclerosis [10,11]. Post-translational modifications-including phosphorylation, acetylation, and sumoylation-are important in carving PPARγ-driven gene expression [12]. Another layer of control over PPARγ activity depends on its interactions with coactivators and corepressors [12], which regulate transcriptional activity by reshaping chromatin structure via histone deacetylases and histone acetyltransferases [13]. Indeed, upon activation by small natural lipophilic ligands or synthetic agonists, the conformation of PPARγ changes-corepressors are released and coactivators are recruited [3]-thus resulting in transcriptional activation. Endothelial differentiation-related factor 1 (EDF1), a highly conserved intracellular protein of 148 amino acids, has been identified as one of PPARγ's coactivators [14,15]. Initially, the role of EDF1 as a transcriptional coactivator was described in the silkworm Bombyx mori and in Drosophila melanogaster where EDF1 stimulates the activity of the FTZ-F1 nuclear receptor [16]. At the time, EDF1 was demonstrated to serve as a coactivator for several transcription factors [17,18]. 
This included some nuclear receptors implicated in lipid metabolism, such as steroidogenic factor 1, liver receptor homologue 1, liver X receptor α and, as mentioned above, PPARγ [14,15]. In particular, in 3T3-L1 preadipocytes, EDF1 is required for PPARγ-mediated differentiation and gene expression programs [15]. In human macrovascular EC, EDF1 was described as a factor implicated in differentiation and spatial organization [19]. In these cells EDF1 is localized mainly in the cytosol, where it binds calmodulin [20] under basal conditions. In response to various stimuli, it is translocated to the nucleus where it interacts with the TATA box-binding protein [20]. Apart from its pivotal role in vasculogenesis and angiogenesis, VEGF is essential for endothelial polarity and survival, thus contributing to the integrity of mature vessels [21]. This issue is relevant since ECs are key players in organogenesis as well as in promoting adult organ maintenance. To this purpose, it is noteworthy that VEGF is a critical component of the cross-talk between organs and tissues and the vessels [21].

Translocation of EDF1 to the Nucleus in Response to VEGF
Initially, we evaluated whether VEGF modulates the total amounts of EDF1 and PPARγ in human umbilical vein endothelial cells (HUVEC). Confluent cells were treated with VEGF (50 ng/mL) for different times. We performed Real-Time PCR as well as western blot analysis and found no modulation in the levels of EDF1 and PPARγ after 8, 12, and 24 h exposure to VEGF (Figure 1 and Supplementary S1). Because EDF1 translocates to the nucleus when HUVEC are stimulated with the phorbol ester 12-O-Tetradecanoylphorbol-13-acetate (TPA) or with forskolin [20,22], we evaluated the subcellular localization of EDF1 in cells treated with VEGF (50 ng/mL) for different times. By immunofluorescence, EDF1 was detectable both in the cytosol and in the nucleus of unstimulated cells. After being treated with VEGF, EDF1 accumulated in the nuclei after 1 h and remained nuclear-associated for the following 24 h (Figure 2a and Supplementary S2a). Western blot on nuclear and cytosolic fractions isolated after 1 h treatment with VEGF confirmed these results (Figure 2b and Supplementary S2b).

Figure 1. The total amounts of endothelial differentiation-related factor 1 (EDF1) and peroxisome proliferator-activated receptor gamma (PPARγ) in cells treated with VEGF. Human umbilical vein endothelial cells (HUVEC) were treated with 50 ng/mL of vascular endothelial growth factor (VEGF) for 0, 8, 12, and 24 h. (a) Real-Time PCR was performed on RNA samples. Two different experiments in triplicate were performed; (b) cell lysates were analyzed by western blot using antibodies against EDF1, PPARγ, and actin. A representative blot is shown.

Figure 2. Subcellular localization of EDF1 in cells treated with VEGF. (a) HUVEC were treated with VEGF (50 ng/mL) for 1, 8, 12, and 24 h. Immunofluorescence was performed using anti-EDF1 immunopurified immunoglobulin G (IgGs) and rhodamine-conjugated anti-rabbit IgGs; (b) HUVEC were treated with VEGF (50 ng/mL) for 1 h. Western blot was performed on nuclear and cytosolic fractions using antibodies against EDF1. GAPDH and TBP were used as cytosolic and nuclear markers, respectively. A representative blot is shown.

Interaction between EDF1 and PPARγ in HUVEC
We evaluated the interaction between EDF1 and PPARγ in HUVEC treated with VEGF (50 ng/mL) for various times. Cell lysates were immunoprecipitated with antibodies against PPARγ.
Western blot was performed on the immunoprecipitates to detect EDF1. EDF1 and PPARγ interacted in nonstimulated cells and VEGF did not significantly modulate this interaction at the time points tested (Figure 3 and Supplementary S3).

Figure 3. The interaction between EDF1 and PPARγ in HUVEC treated with VEGF. HUVEC were treated with VEGF (50 ng/mL) for different times. Cell lysates were immunoprecipitated with monoclonal antibodies against PPARγ and analyzed by western blot using rabbit antibodies against EDF1 (upper panel). The filter was then probed with rabbit anti-PPARγ antibodies to verify the equal amounts of immunoprecipitated proteins (lower panel). Densitometric analysis was performed using ImageJ software. The EDF1/PPARγ ratio was calculated on three blots from separate experiments ± standard deviation.

Effect of Silencing EDF1 in VEGF-Induced PPARγ Activity
To study if EDF1 contributes to PPARγ transcriptional activity in VEGF-treated HUVEC, we utilized HUVEC with stably silenced EDF1, denominated αs1 cells [23]. We used HUVEC transfected with a nonsilencing sequence [23] as the control (CTR). It is noteworthy that PPARγ did not change in αs1 cells compared to their controls, as demonstrated by western blot (Figure 4a and Supplementary S4).

Figure 4. (a) HUVEC with stably silenced EDF1 (αs1) were compared to HUVEC transfected with a scrambled nonsilencing sequence, used as control (CTR). Cell lysates were analyzed by western blot using antibodies against EDF1, PPARγ, and actin. A representative blot is shown; (b) PPARγ activity was evaluated by luciferase assay in αs1 cells and compared to the control HUVEC; (c) Real-Time PCR was performed on RNA samples from αs1 cells and the relative control, treated or not with VEGF (50 ng/mL) for 24 h. Three different experiments in triplicate were performed; (d) Nitric oxide (NO) release was measured using the Griess method for nitrate quantification. The values are expressed as the mean of three different experiments in triplicate ± standard deviation. * p < 0.05, ** p < 0.01, *** p < 0.001.

We then transfected subconfluent αs1 cells and the control cells with a vector expressing luciferase under the control of a PPARγ-responsive consensus (pDR1) [24]. After 4 h, the cells were treated with VEGF (50 ng/mL) and luciferase activity was measured after 24 h. While VEGF stimulated PPARγ transcriptional activation in control cells, this effect was prevented by silencing EDF1 (Figure 4b). To reinforce this finding, we analyzed the expression of a PPARγ downstream target gene, i.e., fatty acid-binding protein 4 (FABP4), which is known to be upregulated in HUVEC after 24 h exposure to VEGF [25]. We cultured HUVEC in the presence of VEGF (50 ng/mL) for 24 h. Using Real-Time PCR, we confirmed the overexpression of FABP4 RNA in control HUVEC treated with VEGF; in αs1 cells, which downregulate EDF1, the induction was significantly reduced (Figure 4c). Because HUVEC with silenced FABP4 produce lower amounts of nitric oxide (NO) than controls and are insensitive to the stimulatory effect of VEGF [26], we measured the release of NO in αs1 cells and their controls, treated or not with VEGF for 24 h. Figure 4d shows that while VEGF induced NO secretion in control cells, it did not exert any significant effect in αs1 cells.

Discussion
ECs line the inner face of blood vessels and their integrity is fundamental for vascular homeostasis and circulatory function [27]. Indeed, ECs are implicated in maintaining blood fluidity, governing leukocyte trafficking and vascular tone, and in regulating the immune response. Consequently, it is not surprising that endothelial dysfunction, which is characterized by a pro-oxidant and pro-inflammatory phenotype, orchestrates events leading to cardiovascular diseases. Moreover, healthy endothelial cells are crucial for the maintenance of normal energy metabolism and, therefore, physiologic function of all tissues [27].
There is now evidence that PPARγ is a key regulator of endothelial function [21,27] and, accordingly, PPARγ activators inhibit the expression of proinflammatory molecules and the synthesis of free radicals [27]. Many factors contribute to the integrity of the endothelium, including VEGF, which is critical for endothelial survival and barrier function in mature vessels [21]. On these bases, we investigated whether VEGF activates PPARγ in HUVEC by recruiting the transcriptional co-activator EDF1. We found that while VEGF does not change the total amounts of EDF1, it rapidly induces EDF1 nuclear translocation, which is maintained for 24 h. These results are in accordance with previous data showing EDF1 nuclear accumulation in HUVEC treated with the phorbol ester 12-O-Tetradecanoylphorbol-13-acetate (TPA) and with forskolin, which raises intracellular cAMP [20,23]. Interestingly, TPA, forskolin, and VEGF all induce the phosphorylation of EDF1 [20,22,23], and we hypothesize that phosphorylation has a role in increasing the nuclear accumulation of EDF1. It should be noted that EDF1 does not have a nuclear targeting sequence, and it is likely that a shuttle protein drives EDF1 to the nucleus. If this is the case, we postulate that VEGF enhances this shuttle mechanism. In the nucleus, VEGF stimulates the transcriptional activity of PPARγ in an EDF1-dependent manner. Indeed, silencing EDF1 prevents VEGF-induced PPARγ activity. Since VEGF increases the transcriptional activity of PPARγ in endothelial cells without altering its interaction with EDF1, it is possible that VEGF induces a specific PPARγ ligand or inhibits a corepressor, thus altering the balance between various transcriptional corepressors and coactivators. These results highlight an important difference between adipocytes and endothelial cells. In 3T3-L1 preadipocytes, silencing EDF1 decreases the total amounts of PPARγ and, in parallel, its transcriptional activity [15]. By contrast, in HUVEC, silencing EDF1 affects only its transcriptional activity. We also evaluated the expression of a PPARγ-responsive gene in HUVEC with silenced EDF1. In particular, we focused on the modulation of FABP4, which encodes a cytoplasmic protein that has a role in endothelial fatty acid metabolism and free radical production. It also impairs proliferation and sprout elongation and impacts on nitric oxide synthesis [25,28]. Interestingly, pioglitazone, an insulin-sensitizing thiazolidinedione and a PPARγ agonist, increases FABP4 levels. In this study, we show that the activation of PPARγ by VEGF induces FABP4 expression through the involvement of EDF1. Indeed, silencing EDF1 markedly inhibits the stimulatory effect of VEGF on FABP4 expression. It is known that VEGF induces FABP4 through the Delta-like ligand (DLL) 4/NOTCH1 pathway [25]. Our results indicate that PPARγ also contributes to VEGF induction of FABP4 in HUVEC. NO is a pivotal mediator of VEGF-induced responses and is essential for vascular function [29]. The regulation of NO production is very complex and it is noteworthy that both PPARγ and FABP4 are involved. Indeed, in endothelial cells, the activation of PPARγ increases NO release [30] while the downregulation of FABP4 reduces it [28]. In this regard, we show that αs1 cells do not respond to VEGF by increasing NO release. We hypothesize that the downregulation of EDF1 impairs VEGF-induced activation of PPARγ with consequent reduction of FABP4 and NO synthesis.
We conclude that VEGF induces EDF1 translocation to the nucleus where it acts as a transcriptional coactivator of PPARγ. The transcriptional activation of PPARγ increases the expression of FABP4, which is known to regulate NO production. Because NO is a marker of endothelial function, our results substantiate that PPARγ activation has a role in maintaining the integrity of vessels and highlight EDF1 as a novel player in the complex regulation of PPARγ transcriptional activity in the endothelium. On these bases, we propose that EDF1 makes an important contribution to maintain endothelial integrity, and this may be crucial in the prevention of cardiovascular diseases. Cell Culture HUVEC-widely accepted as a model of macrovascular EC-were obtained from the American Type Culture Collection (ATCC) and cultured in M199 containing 10% fetal bovine serum, 1 mM glutamine, endothelial cell growth factor (150 µg/mL), 1 mM sodium pyruvate and heparin (5 units/mL) on 2% gelatin-coated dishes [20]. In some experiments, we utilized HUVEC stably transfected to silence EDF1 (αs1), while their controls (CTR) were transfected with a scrambled, nonsilencing sequence as previously described [22]. Western Blot and Immunoprecipitation Western blot was performed as described with antibodies against EDF1 (AVIVA Systems Biology Corporation, San Diego, CA, USA), rabbit anti-PPARγ, and anti-actin (Sigma Aldrich, St. Louis, MO, USA) [15]. Secondary antibodies were labeled with horseradish peroxidase (GE Healthcare, Milano, Italy). The immunoreactive proteins were visualized with the SuperSignal chemiluminescence kit (ThermoFisher Scientific, Waltham, MA, USA). To coimmunoprecipitate EDF1 and PPARγ, lysates were immunoprecipitated using monoclonal antibodies against PPARγ. Nonimmune immunoglobulin G (IgGs) were used as controls (Supplementary S3b). After binding to protein G-Sepharose, the samples were processed for western blot with rabbit anti-EDF1 antibodies. Nuclear and cytosolic fractions were obtained as described [20] and processed by western blot using antibodies against EDF1, anti-glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and anti-TATA Binding Protein (TBP) (Santa Cruz Biotechnology-Tebu Bio, Huissen, The Netherlands). All the experiments were repeated at least three times. One representative blot is shown in the figures. Densitometry was performed using ImageJ software (1.50i, National Institutes of Health, Bethesda, MD, USA) on three blots and expressed using an arbitrary value scale. Results are shown as the mean ± standard deviation of three separate experiments. Reporter Gene Assay To study PPARγ activity, subconfluent HUVEC with silenced EDF1 and their controls were transfected with plasmids pDR1-Luc (0.2 µg/cm 2 ), using Arrest-in transfection reagent (Invitrogen) as described [24]. Luciferase activity was measured after 24 h of treatment with VEGF (50 ng/mL) using a luminometer. The transfection efficiency was normalized against a cotransfected reporter plasmid phRL-TK encoding Renilla luciferase (5 ng/cm 2 ), by dividing the firefly luciferase activity by the Renilla luciferase activity according to the Dual-Luciferase Reporter Assay kit manual (Promega, Madison, WI, USA). The experiments were performed in triplicate and the results are shown as the mean ± standard deviation of three separate experiments. Real-Time-PCR Total RNA was extracted using the PureLink RNA Mini kit (Ambion, Thermo Fisher Scientific, Waltham, MA, USA). 
After quantification, equivalent amounts of total RNA were assayed by first-strand cDNA synthesis using SuperScript II RT (Invitrogen, Carlsbad, CA, USA). Real-time PCR was performed at least two times in triplicate on the 7500 FAST Real-Time PCR System instrument using TaqMan Gene Expression Assays (Life Technologies, Monza, Italy). We analyzed FABP4 (Hs01086177_m1), EDF1 (Hs00610152_m1), and PPARγ (Hs01115513_m1), while the housekeeping gene GAPDH (Hs99999905_m1) was used as an internal reference gene. Relative changes in gene expression were analyzed by the 2^(−ΔΔCt) method.

NO Release
The Griess assay was used to measure NO in cell culture media [23]. In particular, media were mixed 1:1 with fresh Griess solution and the absorbance was measured at 550 nm. The concentration of nitrites in the media was determined using a calibration curve generated with known concentrations of NaNO2 solutions. The experiment was performed in triplicate and the results are shown as the mean ± standard deviation of three separate experiments.

Statistical Analysis
Statistical significance was determined using Student's t test and set at p values less than 0.05. In the figures, * p < 0.05, ** p < 0.01, *** p < 0.001.

Funding: This study was sustained by intramural funds.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations: VEGF, Vascular Endothelial Growth Factor; EDF1, Endothelial Differentiation-related Factor 1; PPARγ, Peroxisome proliferator-activated receptor γ; EC, Endothelial cells; NO, Nitric oxide.
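As a companion to the Real-Time PCR analysis described above, here is a minimal sketch of the 2^(−ΔΔCt) (Livak) calculation used to express relative gene expression against a reference gene. This is not the authors' analysis script, and the Ct values shown are invented placeholders.

```python
# Illustrative 2^(-ddCt) relative-expression calculation (Livak method).
# Ct values below are placeholders, not data from the study.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene in treated vs. control samples,
    normalized to a housekeeping reference gene (e.g., GAPDH)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: hypothetical Ct values for FABP4 vs. GAPDH, VEGF-treated vs. untreated.
fold_change = relative_expression(ct_target_treated=24.1, ct_ref_treated=18.0,
                                  ct_target_control=27.3, ct_ref_control=18.1)
print(f"FABP4 fold change (2^-ddCt): {fold_change:.2f}")
```

A fold change above 1 indicates upregulation of the target gene in the treated sample relative to the control after normalization.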
A simple self-reflection intervention boosts the detection of targeted advertising

Online platforms collect and infer detailed information about people and their behaviour, giving advertisers an unprecedented ability to reach specific groups of recipients. This ability to "microtarget" messages contrasts with people's limited knowledge of what data platforms hold and how those data are used. Two online experiments (total N = 828) demonstrated that a short, simple intervention prompting participants to reflect on a targeted personality dimension boosted their ability to correctly identify the ads that were targeted at them by up to 26 percentage points. Merely providing a description of the targeted personality dimension did not improve accuracy; accuracy increased when participants completed a short questionnaire assessing the personality dimension, even when no personalized feedback was provided. We argue that such "boosting approaches," which improve people's ability to detect advertising strategies, should be part of a policy mix aiming to increase platforms' transparency and give people the competences necessary to reclaim their autonomy online.

[...] In practice, for most users this requirement fails to open the platforms' "black box". Achieving effective transparency, one that demonstrably enables users to understand what platforms do with their data and what users' choices imply, and to translate this knowledge into behavior, is an important step towards more acceptable business practices and towards regaining autonomy for users (e.g., by prompting people to adjust their privacy settings 29). (See https://gdpr-info.eu/art-15-gdpr/; see also, for example, https://myactivity.google.com/more-activity or https://www.facebook.com/your_information.) However, as reviewed above, most current transparency initiatives seem to be exercises in "nominal transparency" with no real regard for whether or not people actually read and digest the information or whether it has any effect on their behaviour.

Here we investigate a cognitive approach to counteract the information asymmetry, which explicitly aims to help people cope with the lack of transparency. It is inspired by research showing that people can be psychologically "inoculated" against misinformation. For example, explaining misleading argumentation techniques reduces the influence of subsequently presented misinformation 30;31. In this study, we test whether it is possible to inoculate people against personality-based microtargeting 20 by alerting them to the personality dimension being targeted and thus increasing their ability to identify whether or not an advertisement is targeting them personally. If the success of the intervention depends primarily on people being aware of the personality dimension being targeted, then it may suffice to provide a description of that personality dimension. However, to the extent that people lack relevant self-knowledge 8 [...]

Figure 1. a, Feedback screen shown to participants after completion of an 8-item personality questionnaire gauging their extraversion level (boosting condition), which includes feedback on their relative rank within an age-matched norm population (from 33). b, Instructions of the detection task and example stimulus (for the full set of stimuli, see Fig. S8). c, Parallel experimental design of the boosting and control conditions; the only difference is that the order of the two personality questionnaires (extraversion and Affinity for Technology Interaction, ATI) and the corresponding feedback were swapped (i.e., before vs. after the detection task).

In two preregistered online studies, we tested the effectiveness of the inoculation approach to boost people's ability to identify ads targeted at their personality in terms of the extraversion-introversion spectrum (N = 828; recruited via Prolific Academic). We used ads developed and validated by Matz and colleagues 20, and therefore recruited from the same pool they did (i.e., female participants from the UK between 18 and 40 years old). In Experiment 1, participants received feedback on their personality (including a general description of the personality dimension), in terms of either their age-matched relative extraversion score (relevant personality feedback, see Fig. 1A and Fig. S3; for the full questionnaire, see Fig. S1; items were taken from Srivastava and colleagues 33) or their affinity for technology interaction (ATI 34; control feedback, not relevant to the personality dimension in question, see Fig. S4; for the questionnaire, see Fig. S2). Participants were then presented with 10 beauty ads (taken from Matz et al. 20; see Fig. S8), half of which targeted extraverts and the other half introverts. Participants were asked to decide whether each ad was or was not targeted towards their personality (Fig. 1B). A comprehension check ensured that participants understood the instruction (see Fig. S7). However, the specific targeting strategy, that is, whether it targeted extraverts vs. introverts, was not revealed to participants. The hypothesis here was:

• H1: Participants who reflect on and receive feedback about their relative score on the relevant personality dimension (extraversion; boosting condition) are better able to identify ads that are targeted towards them than are participants who reflect on and receive feedback about their relative score on an unrelated personality dimension (ATI; control condition).

Experiment 2 aimed to disentangle the mechanisms underlying these effects: (1) implicitly hinting at the targeting strategy of the advertiser by describing the relevant personality dimension, (2) encouraging people to reflect on their own position on the relevant personality dimension by having them complete a questionnaire (without providing feedback), and (3) explicitly providing individual feedback on the relevant personality dimension (i.e., degree of extraversion vs. introversion). Experiment 2 was similar to Experiment 1, differing in only two respects. First, half the participants saw only a general description of the relevant personality dimension prior to the detection task (see Fig. S5 and S6 for screenshots). Second, the other half completed the corresponding personality questionnaire (Fig. S1 and S2) after seeing the general description, but did not receive any feedback. Thus, Experiment 2 employed a 2 (control vs. boosting) × 2 (description only vs. description plus questionnaire) between-subjects design.
We tested three mutually exclusive follow-up hypotheses (conditional on hypothesis H1 being supported):

• H2a: The boosting intervention increases accuracy primarily by raising people's awareness of the specific targeting strategy (i.e., differential targeting of extraverts and introverts). This implies that people already have sufficient self-knowledge about their extraversion level and spontaneously apply this knowledge to the task. Thus, fostering self-knowledge is not necessary for boosting accuracy.

• H2b: Raising people's awareness of the specific targeting strategy is not sufficient to increase accuracy. People need to actively reflect on their own relevant personality dimensions to recognise that they are being targeted. This also means that simply providing warnings and explanations on platforms will not suffice to enable people to detect microtargeting.

• H2c: Neither of the above mechanisms applies; knowledge about one's relative score on the targeted personality dimension (i.e., explicit feedback on one's level of extra- vs. introversion) is required to boost accuracy. This implies that the main reason for people failing to detect microtargeting is a lack of relevant and accurate self-knowledge about the relevant personality dimension.

Results

Experiment 1. Fig. 2 shows that Experiment 1 supported hypothesis H1: Relative to the control condition, participants in the boosting condition on average correctly identified 26 percentage points more ads targeted at them (95% Bayesian credible interval, CI: 18-35), raising the mean accuracy from 64% (95% CI: 53-73) to 90% (95% CI: 85-94). This difference corresponds to an effect size, expressed in terms of the "common language effect size" 35, of CL = 0.78 (95% CI: .70-.84), which here indicates the probability that a randomly selected participant from the boosting condition has higher detection accuracy than a randomly selected participant from the control condition. A value of 0.5 would imply no difference and 1 would imply perfect separation between conditions. Additional analyses, detailed in the Supplementary Information (Supplementary Fig. S9-S11), attest to the robustness of these results. To summarize, the intervention worked (a) for both extraverts and introverts and (b) across different levels of education, (c) irrespective of whether participants were clearly or more tentatively classified as extravert or introvert; moreover, the effect (d) was stronger for extraverts than for introverts and (e) also emerged when we measured detection performance independently of any response tendency (lenient vs. strict), in terms of the area under the Receiver Operating Characteristic curve 36 (AUC; based on participants' confidence in their detection decisions). Overall, these results demonstrate that it is possible to improve people's ability to detect targeted advertisements through a short, simple boosting intervention.
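To make the reported effect size concrete, the following is a minimal sketch of one common way to compute a "common language effect size" as defined above: the probability that a randomly drawn participant from the boosting condition has higher detection accuracy than a randomly drawn control participant, counting ties as one half. The per-participant accuracies below are invented placeholders, not the study data, and the published estimates were obtained from a Bayesian multilevel model rather than this raw pairwise count.

```python
# Common language effect size (CL): P(boosted > control) + 0.5 * P(tie),
# computed over all pairs of participants. Accuracy values are placeholders.

from itertools import product

boosting = [0.9, 1.0, 0.8, 0.9, 1.0, 0.7]   # hypothetical proportion correct (10 ads)
control  = [0.6, 0.7, 0.5, 0.8, 0.6, 0.4]

def common_language_es(group_a, group_b):
    """Probability that a random draw from group_a exceeds one from group_b."""
    wins = ties = 0
    for a, b in product(group_a, group_b):
        if a > b:
            wins += 1
        elif a == b:
            ties += 1
    return (wins + 0.5 * ties) / (len(group_a) * len(group_b))

print(f"CL = {common_language_es(boosting, control):.2f}")
```

A value of 0.5 corresponds to no difference between conditions, while values approaching 1 indicate increasingly clean separation, as described in the text above.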
Although the results of Experiment 1 were unambiguous, the study left one key question unanswered: What drives the intervention's success? Is it sufficient to hint at the strategy used by the advertiser, thus raising participant awareness (H2a)? Or is it necessary that participants also reflect on their own relevant personality dimensions (H2b)? Or is explicit knowledge of one's relative score on the relevant personality dimension required (H2c)? In Experiment 2, we set out to tease apart these three different mechanisms.

Figure 2. [...] where participants in the boosting conditions received feedback about their extraversion prior to the task. Point ranges show the Bayesian point estimate and 95% Bayesian credible interval for the probability of correctly detecting a targeted advertisement (based on a multilevel logistic regression model; see Methods for details). In the boxplots, the box shows the first and third quartiles (the 25th and 75th percentiles). The lower and upper whiskers extend from the respective end of the box to the largest value no further than 1.5 × IQR from the box (where IQR is the inter-quartile range, or distance between the first and third quartiles); outliers are not displayed. The area of the dots and their numbers denote the within-condition percentage of participants for each of the 11 possible values of a participant's proportion of correct decisions (given the 10 ads).

Experiment 2. The results of Experiment 2 support hypothesis H2b (Fig. 3): reflecting on one's relevant personality dimensions, without receiving any relevant feedback, is necessary, but also sufficient, to boost people's ability to identify ads that have been targeted at them. The boosting condition that included the extraversion questionnaire improved participants' performance by, on average, 10 percentage points (95% CI: 2-20) compared to the boosting condition with only the extraversion description, raising mean accuracy from 72% (95% CI: 63-81) to 83% (95% CI: 76-88); this difference corresponds to a common language effect size of CL = .62 (95% CI: .52-.71). This positive effect is at odds with hypothesis H2c, according to which explicit knowledge of one's level on the relevant personality dimension is necessary for the intervention to work. By contrast, participants who only read the extraversion description performed no better than participants who read the unrelated description of the ATI personality dimension (CL = .52, 95% CI: .43-.62); the latter participants correctly identified 70% of the ads (95% CI: 61-77). This result is at odds with hypothesis H2a, according to which hinting at the strategy used by the advertiser is sufficient for the intervention to work. Importantly, the effectiveness of self-reflection was not generic: performance was boosted only when people reflected on the relevant personality dimension. Participants who read the unrelated description of ATI and then completed the ATI questionnaire correctly identified 68% of the targeted ads (95% CI: 57-77), that is, 15 percentage points (95% CI: 7-24) fewer than the participants who reflected on the relevant personality dimension (i.e., extraversion; CL = .66, 95% CI: .58-.74). Additional analyses, detailed in the Supplementary Information (Supplementary Fig. S12-S14), attest to the robustness of these results.
To summarize, the results hold (a) for both extraverts and introverts and (b) across different levels of education; moreover, the effect (c) was stronger for extraverts than for introverts and (d) also emerged when we measured detection performance independently of any response tendency (lenient vs. strict), in terms of the AUC 36 (based on participants' confidence in their detection decisions). However, for moderately extraverted participants, we did not observe an effect of filling out the relevant (vs. unrelated) questionnaire (Fig. S12 & S13); for those participants, the explicit feedback about their personality seems necessary for improving their detection accuracy (cf. Experiment 1). In summary, Experiment 2 showed that the boosting intervention can improve detection accuracy even without provision of explicit feedback, whereas merely describing the relevant personality dimension was insufficient.

Figure 3. [...] Participants in the boosting conditions either just read a description of the relevant personality dimension prior to the task ("without questionnaire"), or additionally filled out the short questionnaire from Experiment 1, but without feedback ("with questionnaire"). Point ranges show the Bayesian point estimate and 95% Bayesian credible interval for the probability of correctly detecting a targeted advertisement (based on a multilevel logistic regression model; see Methods for details). In the boxplots, the box shows the first and third quartiles (the 25th and 75th percentiles). The lower and upper whiskers extend from the respective end of the box to the largest value no further than 1.5 × IQR from the box (where IQR is the inter-quartile range, or distance between the first and third quartiles); outliers are not displayed. The area of the dots and their numbers denote the within-condition percentage of participants for each of the 11 possible values of a participant's proportion of correct decisions (given the 10 ads).

Conclusion

Two experiments demonstrated that prompting people to reflect on a targeted personality dimension, by means of a short and simple intervention, boosts their ability to identify ads that target them on the basis of that personality dimension. Merely providing a description of the targeted personality dimension did not enhance detection accuracy. Completing a short personality questionnaire about the targeted personality dimension was sufficient to increase accuracy, even if people did not receive any feedback. This result resonates with the recent finding that simple interventions, such as exposing misinformation strategies, can help to inoculate people against misinformation 37;38. Further research needs to clarify the cognitive mechanisms underlying these effects; the extent to which the observed increases in detection ability translate into improved downstream outcomes (e.g., in terms of evaluating and responding to ads); and the extent to which the effects generalize to other personality dimensions, domains (e.g., political advertising or misinformation), and populations. Boosting interventions, which by definition target people's competences, have the advantage that they can often be deployed independently of any platform or technology.
That is, they do not need to interface with a platform's information architecture and are therefore not dependent on the platform's cooperation (in terms of access and maintaining interoperability). Compared with, say, an intervention where advertisements are labelled within the platform's interface, an intervention targeting people's competences is therefore more robust with respect to constantly changing technology, advertising strategies, and the tech companies' level of cooperation. Furthermore, as boosting interventions aim to improve people's competences, they have the potential to generalize beyond the immediate context in which they were initially deployed 32;39. Self-reflection tools aimed at helping people increase their awareness of their vulnerabilities to microtargeting could be deployed on independent websites or apps, or even as "analogue" tools (e.g., a checklist on a printed flyer). Such tools would need to cover a range of the most relevant [...]
Exploiting immune-dependent effects of microtubule-targeting agents to improve efficacy and tolerability of cancer treatment Microtubule-targeting agents (MTAs), like taxanes and vinca alkaloids, are tubulin-binding drugs that are very effective in the treatment of various types of cancers. In cell cultures, these drugs appear to affect assembly of the mitotic spindle and to delay progression through mitosis and this correlates with their ability to induce cell death. Their clinical efficacy is, however, limited by resistance and toxicity. For these reasons, other spindle-targeting drugs, affecting proteins such as certain kinesins like Eg5 and CENP-E, or kinases like Plk1, Aurora A and B, have been developed as an alternative to MTAs. However, these attempts have disappointed in the clinic since these drugs show poor anticancer activity and toxicity ahead of positive effects. In addition, whether efficacy of MTAs in cancer treatment is solely due to their ability to delay mitosis progression remains controversial. Here we discuss recent findings indicating that the taxane paclitaxel can promote a proinflammatory response by activation of innate immunity. We further describe how this can help adaptive antitumor immune response and suggest, on this basis and on the recent success of immune checkpoint inhibitors in cancer treatment, that a combination therapy based on low doses of taxanes and immune checkpoint inhibitors may be of high clinical advantage in terms of wide applicability, reduced toxicity, and increased antitumor response. Introduction MTAs have been introduced in cancer therapy since several years and are still among the most widely used antitumor drugs, utilized alone or in combination with other antiblastic drugs, to treat different cancers [1][2][3] . In addition, MTAs are still an essential resource as second line treatments and for the treatment of tumors that lack known specific molecular targets and cannot benefit from recent advances in targeted therapy. Indeed, the taxane docetaxel has been approved for treatment of castration resistant prostate cancer and of triple-negative breast cancer, alone or in combination with other drugs 4 . MTAs bind β-tubulin and severely affect microtubule dynamics through different mechanisms. Taxanes stabilize microtubules while vinca alkaloids hamper microtubule polymerization. Thus, MTAs interfere with many key cellular processes. In interphase, the intracellular transport of proteins, vesicles, and organelles along trucks formed by microtubule fibers are deeply affected by MTAs [5][6][7] . In mitosis, the microtubular cytoskeleton is profoundly rearranged to form the mitotic spindle, the structure required to segregate replicated chromosomes during cell division, and this is also deeply affected by MTAs 8 . By altering normal mitotic spindle assembly, MTAs activate the spindle assembly checkpoint (SAC), a safeguard mechanism that prevents errors in chromosome segregation and generation of aneuploid cells by delaying mitosis exit when spindle assembly is impaired ( Fig. 1; high taxanes) 9,10 . When spindle assembly is incomplete or impaired, SAC effector proteins, like BubR1 and Mad2, bind Cdc20, a coactivator of the ubiquitin ligase Anaphase-Promoting Complex/Cyclosome (APC/ C), to form a mitotic checkpoint complex (MCC). MCC inhibits APC/C-dependent degradation and inactivation of the master mitotic kinase cyclin B-dependent kinase (Cdk) 1 and of the anaphase inhibitor securin, arresting cells in mitosis. 
However, after prolonged SAC-dependent delay in mitosis, cells activate cell-death programs 10-14. Indeed, in cell cultures, MTA-dependent mitotic perturbance and delayed progression through mitosis correlate with the ability of MTAs to induce cell death, thus providing a mechanistic rationale for the therapeutic effects of these drugs. However, as discussed later, mitotic delay may not be the only mechanism by which MTAs kill cancer cells.

Figure 1 (middle and lower panels). Middle panels: at high doses (high taxanes), taxanes activate the SAC and delay mitosis exit. This translates into activation of apoptotic pathways that lead to cell death. Nevertheless, cancer cells can escape cell death induced by high taxanes and slip through mitosis without dividing but forming micronuclei (see main text). Lower panels: at low doses (low taxanes), taxanes do not induce significant delay in mitosis exit but rather induce chromosome segregation errors (mitosis exit with chromosome missegregation) and the formation of micronuclei in daughter cells.

Limitations of MTA-based cancer therapy
In the clinic, most patients immediately respond to treatment with MTAs, with very few cases of naive resistance against these drugs 15. Unfortunately, patients that initially respond well to MTA therapy may later develop resistance to treatment. The mechanisms of acquired resistance to MTAs are several, spanning from the more general upregulation of the ABC transmembrane efflux transporters to the much more drug-specific, "on-target" mutations of microtubule-forming or -binding proteins. For example, the response to MTAs can be limited by mutations or by altered expression of the microtubule building blocks α- and β-tubulin 4,15,16. A more common route of resistance to MTAs, however, is believed to derive from an adaptation mechanism known as "mitotic slippage" 17. Indeed, even in the presence of therapeutic concentrations of MTAs, cells can override SAC-induced mitotic arrest and slip out of mitosis (Fig. 1; high taxanes). This is mainly due to a progressive loss of Cdk1 activity during arrest, to a point at which the SAC, which requires Cdk1 activity, can no longer be held active in preventing cyclin B degradation and mitosis exit 18,19. MTA-treated cells can survive if they slip through mitosis before the threshold necessary to induce mitotic delay-dependent cell death has been reached, and either die afterwards, or stop proliferating, or give rise to more aggressive clones 8,14. It can, therefore, be inferred that preventing mitotic slippage for a time sufficient to induce mitotic cell-death pathways could avoid this mechanism of resistance. To this end, it has been suggested that inhibiting APC/C Cdc20, the ubiquitin ligase responsible for cyclin B and securin degradation and, therefore, crucial for mitosis exit, might help cancer cell killing in aid of MTA treatments 18. However, the two APC/C Cdc20 inhibitors described so far, TAME and Apcin, though promisingly efficient at blocking mitosis exit in experimental settings, are not yet available for clinical use, and very recent observations cast doubt on their mechanism of action [20][21][22]. As an alternative way to sustain MTA-induced mitotic arrest, we have proposed combining MTAs with the Wee1 kinase inhibitor AZD-1775, based on a novel role we unveiled for Wee1 in regulating mitosis exit 23,24.
Of note, we proved that Wee1 genetic depletion substantially delayed mitotic exit and that chemically inhibiting Wee1 with AZD-1775 synergized with MTAs by further prolonging mitosis and increasing cell death in cancer cell lines and primary human lymphoblastic leukemia cells 18,23 . Besides resistance, the use of MTAs can be substantially limited also by side effects, in some cases so severe to force to dose reduction or discontinuation. Above all, MTAs cause neutropenia and lymphopenia, a consequence of toxicity on cycling hematopoietic precursor cells, and neurotoxicity, likely by disruption of microtubule-mediated axonal transport in neurons 25 . To control this, new MTA formulations have been developed and tested to reduce doses, optimizing delivery, and distribution. The nanoparticle albumin-bound (nab)-paclitaxel is a solvent-free, colloidal suspension of the taxane paclitaxel, and human serum albumin, already approved for cancer therapy 26 . Microspheres and liposomes are currently tested as MTA vehicles 27 . Looking for an alternative way to delay mitosis exit and promote cancer cell death, many spindle-targeting drugs, not directly targeting tubulin but rather microtubuleassociated proteins or kinases required for mitotic spindle assembly, have been developed 28 . Among them, several inhibitors of the kinesin superfamily proteins (KIFs) have been exploited. Kinesins and kinesin-related proteins make up a large superfamily of molecular motors responsible for the major microtubule-and ATPdependent transport pathways and some of them are particularly relevant for spindle assembly 29,30 . Disappointingly, however, most of mitotic KIF inhibitors tested, although able to perturb mitotic spindle assembly, do not kill cells efficiently 29,30 . A large number of molecules has also been developed to inhibit kinases like Plk1, Aurora A and Aurora B that are required for spindle assembly 31 . Nevertheless, initial clinical trials with most of the Plk and Aurora inhibitors have not confirmed the promising preclinical efficacy and only very few drugs, such as the Aurora A kinase inhibitor alisertib, reached phase III trials for a wide variety of tumors, upon encouraging response rates in phase II trial. However, the positive results have been poorly confirmed in further trials 32,33 . Immunotherapies targeting immune checkpoints Cancers are often infiltrated by a heterogeneous population of tumor-infiltrating immune cells and whether this produces pro-or antitumor effects is still matter of extensive investigation. IFN-γ-producing CD4 + T helper (Th)1 cells, CD8 + T cells, natural killer (NK) cells, type 1 NK T cells, mature dendritic cells (DCs), and M1 macrophages could generate an antitumor response. On the other hand, CD4 + Th2 cells, CD4 + T regulatory (TReg) cells, type 2 NK T cells, myeloid-derived suppressor cells, immature DCs, and M2 macrophages could suppress antitumor immunity and promote cancer progression 34,35 . T cells are also negatively controlled by immune checkpoint proteins, classes of molecules and signals that restrain T-cell proliferation, survival, and activation 36 . Although cancer cells can express tumor-specific neoantigens, thus being susceptible to be targeted by the immune system, cancer cells often express on their surface immune checkpoint molecules that suppress activation of T cells that could grant tumor immune surveillance. 
These immune checkpoint molecules, like Programmed cell-Death-1 Ligand (PD-L1) and B7, are normally found on the antigen presenting cell (APC) to avoid auto-immune T-cell activation in the body (T cells are expressing the relative receptors PD-1 and CTLA-4). Based on these observations, immunotherapies targeting these molecules have been developed and are emerging as a major breakthrough in cancer treatment 36,37 . CTLA-4 is expressed on the cell membrane of T cells and competes with the TCR-costimulatory protein CD28 for binding to the B7 protein of the APC. Upregulation of CTLA-4 expression and increased CTLA-4:B7 binding results in a negative signal, which limits proliferation and survival of the T cells 38 . The exact mechanism by which anti-CTLA-4 antibodies induce an antitumor response is still imprecisely known, although preclinical evidence suggests that CTLA-4 blockade supports the activation and proliferation of a higher number of effector T cells and reduces TReg cell-mediated suppression of effector T-cell response 39,40 . Indeed, after successful clinical trials, the anti-CTLA-4 monoclonal antibody ipilimumab was first approved for the treatment of advanced or unresectable melanoma 41,42 . PD-1, a cell surface receptor, is expressed on regulatory and cytotoxic activated T cells in peripheral tissues while PD-L1 is mainly expressed on APC. Physiologically, binding of PD-L1 to its receptor results in T-cell inactivation. Thus, PD-1/PD-L1 is an immune checkpoint that guards against autoimmunity, promoting self-tolerance, and is crucial to limit immune responses in case of infections 43,44 . PD-1 is highly expressed on many tumorinfiltrating lymphocytes and cancer cells often overexpress PD-L1, thus escaping immune surveillance 45 . This has provided a strong rationale for the development of drugs targeting the PD-1/PD-L1 checkpoint. Indeed, antibodies blocking the binding of PD-L1 to its receptor, such as nivolumab and pembrolizumab, enhance immunity against a wide variety of cancers 37,46 . FDA rapidly approved these drugs for the treatment of melanoma, urothelial cancer, renal cell carcinoma, non-small-cell lung cancer (NSCLC), Hodgkin lymphoma, and squamous cell carcinoma of head and neck. Unfortunately, not all cancers respond to Immune Checkpoint Inhibitor (ICI)-based therapy 47 . This appears to correlate very strongly with the relatively poor infiltrate of immune and inflammatory cells of ICI-resistant cancers that are, therefore, called "cold tumors" 48 . Combination of immune checkpoints targeting immunotherapies with DNA-damaging treatments A possible strategy to improve ICI-based therapy in cancer patients is to combine it with radiation or traditional, DNA-damaging, antiblastic therapies. Indeed, by induction of necrosis or immunogenic cell death (ICD), DNA damage may render cold tumors inflamed 49 . It was soon hypothesized that the combination of DNAdamaging radiations or drugs with ICIs could be highly beneficial to cancer patients. Increasing evidence suggests that the antitumor activity of DNA-damaging treatments is mediated not only through cytotoxic effects, but also because they stimulate immune surveillance by affecting both cancer and immune cells 49 . For example, some DNA-alkylating agents, like cyclophosphamide and carboplatin, or antimetabolites, like pemetrexed, both increase the expression of MHC class I molecules on cancer cells and subvert the immunosuppressive functions of TReg cells 50,51 . 
The autocrine and paracrine circuits controlling cancer immune surveillance mainly depend on type I IFNs secreted by tumor cells and/or by tumor-infiltrating immune cells 52 . Recently, it has been demonstrated that DNA damaged cancer cells are an important source of type I IFNs. Cells with doublestranded DNA breaks that progress through mitosis accumulate micronuclei 53,54 . Micronuclear DNA is sensed by cGAS, an enzyme that catalyzes the formation of cyclic GMP-AMP (cGAMP) from ATP and GTP 53,54 . cGAMP, upon binding to the adaptor protein Stimulator of Interferon Genes (STING), activates the transcription factor IRF3, leading to the transcription of Type I interferons (IFNs) and, in turn, to an innate immunity response [53][54][55] . Remarkably, using a well-described B16 syngenic mouse model of melanoma, it has been shown that irradiation of one tumor along with immune checkpoint blockade results in a T-cell-dependent growth delay of a contralateral unirradiated tumor 53 . The irradiated tumor produces cGAS/STING-dependent immunomodulatory signals that result in an efficient immune-mediated regression of the contralateral tumor provided that PD-1/PD-L1-or CTLA-4-mediated signaling are inhibited. The results obtained in the melanoma mouse model, therefore, suggest that the immune checkpoint manipulation could indeed enhance the response to DNAdamaging, radio-and chemotherapies. Indeed, upon successful conclusion of a phase II clinical trial, one such combination treatment (pembrolizumab plus carboplatin/ pemetrexed), has been already approved for NSCLC patients 56 . In any case, limitations of combining DNAdamaging radiations or drugs with ICIs appear essentially due to lack of ICI effects because of damage to immune cells by the DNA-damaging agents or, conversely, to adverse, toxic, effects of hyperactivation of inflammatory, and immunological reaction towards normal tissues 57,58 . Exploiting immune-dependent effects of MTAs to improve cancer treatment The fact that most of the spindle-targeting drugs are not so efficacious in cancer treatment has reinforced the idea that MTAs also act independently of their ability to delay mitosis completion 13 . How MTAs might kill cells besides their ability to induce mitotic cell death, is, however, mechanistically unclear and object of great debate 8,13,14,[59][60][61] . Recent observations have indicated that when cells slip through mitosis, after mitotic delay, and adapt to paclitaxel, they form micronuclei that, as in the case of DNA-damaging treatments, activate the cGAS/ STING pathway and stimulate a proinflammatory response (Fig. 1, high taxanes) 8,14,62 . Thus, as recently suggested, taxane-based therapy may also work because it elicits the antitumor intervention of the immune system on cells escaping mitotic death 8,63 . Indeed, it has been shown that paclitaxel stimulates breast cancer cells to produce IFN-β and taxane-based therapy often induces increased tumor-infiltrating immune cells, despite their suppressive effect on the rapidly cycling bone marrow cells 3,5,64,65 . Thus, taxane-based therapies could benefit by the combination with ICIs. Several clinical trials have also been designed now to explore the effect of a combinatorial therapy with taxanes and ICIs. The majority of these clinical trials are still ongoing and their preliminary but very promising results are still to be definitively proven. 
It is a fact, however, that upon successful completion of two such trials, pembrolizumab and atezolizumab, another anti-PD-L1 monoclonal antibody, have been approved in combination with paclitaxel or its albumin-stabilized nanoparticle formulation nab-paclitaxel for the first-line treatment of metastatic squamous NSCLC 66,67 . Moreover, atezolizumab in combination with nab-paclitaxel alone has also been approved for the treatment of women with unresectable triple-negative breast cancer 68 . The efficacy of the combination of taxanes and ICIs in cancer therapy may be explained by a simple additive effect of the two classes of drugs. However, as already discussed, the complex and not yet completely characterized immunomodulatory activity of MTAs on tumor-infiltrating immune cells might at least in part explain the success of the MTA-ICI combination 63 .

Combination of ICIs with low doses of MTAs

Death after prolonged mitosis or following slippage is certainly one way MTAs kill cancer cells. However, in the case of paclitaxel, recent correlations between clinical therapeutic success in breast cancer patients and the type of mitotic aberrations induced by the drug in their breast cancer cells indicate that the therapeutic benefit correlates with alterations in chromosome segregation rather than with prolongation of the duration of mitosis 69 . Indeed, while at relatively high doses paclitaxel induces mitotic delay, at much lower concentrations it does not significantly delay mitosis but perturbs its normal execution, inducing a significant degree of chromosome missegregation and formation of micronuclei in daughter cells (Fig. 1; low taxanes) 70 . When single chromosomes or small groups of chromosomes do not segregate with the mass of the other chromosomes, they become wrapped in nuclear membranes and remain separate from the primary nucleus 8 . Micronuclei formed upon chromosome segregation errors bear extensive membrane defects because 'non-core' nuclear envelope proteins, including nuclear pore complexes, do not assemble properly on lagging chromosomes 71 . Thus, micronuclei spontaneously and frequently lose nuclear envelope integrity, generating further DNA damage 72 . This micronuclear DNA can activate the cGAS-STING pathway, stimulating macrophages and innate immunity; as discussed earlier, innate immunity may help adaptive immunity and favor antitumor immune surveillance (Fig. 2a) 8 .

Fig. 2 Low taxane-induced micronucleation stimulates an innate immunity response and may promote lymphocyte-mediated cancer cell killing when combined with ICI treatment. Cancer cells may express tumor-specific neoantigens, and treatment with low doses of taxanes may induce micronucleation-dependent activation of antigen-presenting cells (APCs). (a) Micronucleation-dependent activation of APCs may stimulate adaptive immunity to promote effector T lymphocyte-mediated cancer cell killing. (b) Micronucleation-dependent activation of APCs may stimulate adaptive immunity but keep effector T lymphocytes in check through upregulation of immune checkpoint molecules (immune checkpoint ligands and cognate receptors are indicated in green and red, respectively). (c) Cancer cells themselves may upregulate immune checkpoint molecules and keep effector T lymphocytes in check. (d) The combination of low doses of taxanes with ICIs unleashes potent effector T-cell-mediated cancer cell killing.
These observations suggest not only that a major reason for the therapeutic success of taxanes, and perhaps of other classes of MTAs, lies in their ability to promote antitumor immune surveillance, but also that low doses of these drugs may be sufficient to achieve this goal. By inducing micronucleation and cGAS/STING signaling, low doses of taxanes might be sufficient to activate innate immunity and inflammation while causing fewer of the adverse effects of standard therapeutic regimens, such as neutropenia and lymphopenia, that can themselves oppose antitumor immune surveillance 25 . Thus, low doses of taxanes would be sufficient to enhance the recruitment of immune cells and render otherwise "cold" tumors "hot", and therefore readily attackable by the immune system (Fig. 2a). However, immune checkpoints may oppose the antitumor immune surveillance stimulated by taxanes (Fig. 2b, c). Based on these considerations, we propose the use of low doses of taxanes in combination with ICIs as a strategy of wide applicability, high tolerability, and efficacy in cancer treatment (Fig. 2d).
Interest Rate Risk Measurement in Brazilian Sovereign Markets

Fixed income emerging markets are an interesting investment alternative. Measuring market risks is mandatory in order to avoid unexpected huge losses. The most used market risk measure is the Value at Risk, based on the profit-loss probability distribution of the portfolio under consideration. Estimating this probability distribution requires the prior estimation of the probability distribution of term structures of interest rates. An interesting possibility is to estimate term structures using a decomposition of the spread function into a linear combination of Legendre polynomials. Numerical examples from the Brazilian sovereign fixed income international market illustrate the practical use of the methodology.

Introduction

Fixed income emerging markets developed quickly during the last decade. Higher international liquidity, the interest of portfolio managers in diversifying internationally, and the continuous improvement of risk control by international rating agencies are three reasons for such a development. Some of the most liquid instruments traded in fixed income emerging markets are the so-called Brady bonds (Fabozzi and Franco (1997)). They are dollar-denominated sovereign instruments, originated from the restructuring of defaulted bank loans of countries located in Latin America, Central and Eastern Europe, the Middle East, Africa and Asia.

Pricing and hedging these instruments is not easy due to their usually complex cash flows. They may present floating (for instance, depending on the LIBOR rates) or step-up interest payments, amortize or capitalize principal before maturity, contain embedded options, as well as offer collateralized principal and/or interest payments. Other fixed income instruments in the emerging debt market include bank loans, local issues, and eurobonds. Eurobonds are bonds issued in a foreign currency, in a foreign country. Interest in them has been growing steadily due to the improving credit rating of certain emerging markets, with special attention directed, more recently, to Global bonds, which are eurobonds issued simultaneously in several countries.

Whenever investing in emerging markets one must pay special attention to risk management. Controlling risk is mandatory in order to avoid unexpectedly high losses. The Value at Risk (VaR; Jorion (2001)) of a portfolio, obtained as a percentile of the profit-loss probability distribution of the portfolio, is one of the most frequently adopted market risk measures. In turn, the profit-loss probability distribution is closely related to the probability distribution of term structures of interest rates. For instance, if we are interested in calculating the VaR of fixed income portfolios, we need first to estimate the probability distribution of term structures of interest rates.

Vasicek and Fong (1982) suggest estimating the U.S. term structure of interest rates applying a regression model based on exponential splines. Litterman and Scheinkman (1991) verify, using Principal Component Analysis (PCA; Mardia et al. (1992)), that more than 90% of the U.S. term structure of interest rates movements were explained by just three orthogonal factors. Several applications were proposed in the finance literature following that. For instance, Singh (1997) uses PCA to estimate the market risk of fixed income instruments in the US market. Barber and Copper (1996) propose an immunization strategy also based on PCA to generate optimal hedges. Almeida et al.
(1998) suggest a modeling approach for term structures of interest rates in emerging markets. The model is based on a decomposition of the term structure into a risk-free benchmark curve plus a spread function representing the sovereign credit risk spread. This spread function, in turn, is decomposed into a linear combination of Legendre polynomials (Lebedev (1972)). An extension of this model, which considers the relative credit risk among the assets included in the estimation process, is possible (Almeida et al. (2000)). In this extension, the relative credit risk is captured by considering the rating of each asset in the evaluation of the credit spread function. This enhanced methodology provides more accurate estimates of term structures, at the expense of more computational complexity.

In this article, a methodology for estimating the VaR of portfolios in fixed income emerging markets is proposed. We estimate the historical probability distribution for the variations of the benchmark curve, as well as for each orthogonal factor responsible for movements of the spread function, obtaining the historical probability distribution of variations for the whole term structure. Two numerical examples using data from the Brazilian sovereign fixed income market are presented. These examples illustrate the VaR estimation of a portfolio composed of Brazilian Brady and Global bonds.

The article is organized as follows. Section 2 presents the model for the estimation of term structures of interest rates in emerging markets, and the methodology used in our examples. Section 3 presents the numerical examples. Section 4 concludes the article.

Definition

The term structure of interest rates R(t) in a fixed income emerging market can be modeled as

R(t) = B(t) + \sum_{n \geq 0} c_n P_n\left(\frac{2t}{l} - 1\right),   (1)

where t denotes time, B(t) is a benchmark curve (for instance, the U.S. term structure of interest rates), P_n is the Legendre polynomial of degree n (Lebedev (1972)), c_n is a parameter, and l is the longest maturity of a bond in the emerging market under consideration.

The price p of a bond relates to the term structure by

p = \sum_{i=1}^{n_A} C_i e^{-t_i R(t_i)},

where C_i denotes the cash flow paid by the bond at time t_i, and n_A denotes the total number of cash flows paid by the bond.

The Legendre polynomial of degree n is defined (on the compact set [-1, 1]) according to the following expression:

P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}\left(x^2 - 1\right)^n.

The first four Legendre polynomials are

P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \frac{1}{2}(3x^2 - 1), \quad P_3(x) = \frac{1}{2}(5x^3 - 3x).

Their graphs are depicted in Figure 1. The first Legendre polynomial is related to parallel shifts in the term structure, the second to changes in slope, the third to changes in curvature, and the fourth to double changes in curvature of the term structure.

Estimation

The first step required is to estimate the coefficients c_0, c_1, c_2, .... We define the discount function as

d(t) = e^{-t R(t)}, \quad t \in [0, l].

We assume that m bonds are available at a particular instant of time for the estimation process.
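To make the decomposition concrete, the short Python sketch below evaluates a term structure of the form of Equation (1) and the associated discount factors. The flat benchmark curve, the spread coefficients and the maturity grid are illustrative placeholders, not values estimated in the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

def term_structure(t, benchmark, coeffs, l_max):
    """Spot rate curve R(t) = B(t) + sum_n c_n P_n(2t/l_max - 1).

    t         : maturities in years, 0 <= t <= l_max
    benchmark : callable returning the benchmark rate (e.g., U.S. strips) at t
    coeffs    : Legendre coefficients c_0, c_1, ... of the credit spread
    l_max     : longest maturity, used to map [0, l_max] onto [-1, 1]
    """
    t = np.asarray(t, dtype=float)
    spread = L.legval(2.0 * t / l_max - 1.0, coeffs)   # sum_n c_n * P_n(x)
    return benchmark(t) + spread

def discount_factor(t, benchmark, coeffs, l_max):
    """Discount function d(t) = exp(-t R(t))."""
    t = np.asarray(t, dtype=float)
    return np.exp(-t * term_structure(t, benchmark, coeffs, l_max))

# Illustrative use: flat 6% benchmark plus hypothetical spread coefficients.
flat = lambda t: 0.06 * np.ones_like(t)
maturities = np.linspace(0.25, 30.0, 120)
rates = term_structure(maturities, flat, coeffs=[0.05, 0.01, -0.005], l_max=30.0)
dfs = discount_factor(maturities, flat, coeffs=[0.05, 0.01, -0.005], l_max=30.0)
```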
The estimation of the coefficients c_0, c_1, c_2, ... is accomplished by the use of a non-linear regression equation of the form

p_j + a_j = \sum_{l=1}^{f_j} u_{jl}\, d(t_{jl}) + o_p\, put_j + o_c\, call_j + e_j, \qquad j = 1, \ldots, m,

where p_j denotes the price of the j-th bond, a_j denotes the accrued interest of the j-th bond, put_j and call_j are dummy variables indicating the existence of embedded put and call options in the bond, o_p and o_c are unknown parameters related to the prices of the embedded put and call options, f_j denotes the number of remaining cash flows of the j-th bond, and t_jl is the time remaining for payment of the l-th cash flow u_jl of the j-th bond; e_j is the random disturbance, with E(e_j) = 0.

Table 1 presents the characteristics of fourteen Brazilian bonds (Brady and Global) used in the numerical examples. Figure 2 depicts two term structures of interest rates estimated using the model just described: one term structure for Brady bonds, and the other term structure for Global bonds.

[Table 1 and Figure 2 about here]

Market

As an example of the methodology just described, let us suppose that we are interested in estimating the Brazilian Brady and Global bonds term structures of interest rates. These two types of assets may present, during certain periods of time, substantially different levels of credit risk. Thus, estimating a unique term structure of interest rates to represent both markets may "distort" results. On the other hand, estimating one curve for each market separately may present statistical difficulties in cases where the number of liquid assets belonging to one of these markets is small. In order to avoid these drawbacks, we apply the methodology presented in Almeida et al. (2000), which captures the difference in risk between different levels of credit risk using different credit spread functions. The result is a joint estimation procedure that estimates the two term structures simultaneously.

Consider the existence of two levels of credit risk, the first related to the Global bonds market and the second to the Brady bonds market. The methodology extends Equation (1) by letting the spread function depend explicitly on the level of credit risk; the spread function is still modeled as a linear combination of Legendre polynomials. The model is completely specified when the dependence on the different levels of credit risk is defined. For instance, it is possible to capture the difference in credit risk using just the translation factor (Legendre polynomial of degree zero). A consequence of such a specification is: a) the term structures of interest rates estimated simultaneously using the model differ only by parallel shifts; or, equivalently, b) when comparing all maturities of these term structures, the volatilities of the interest rate spreads differ only by a constant value.

For the first numerical example presented in this article we used a more general model. It allows the interest rate spreads of one term structure with respect to the other to present volatilities that differ by more than a constant value along the maturities. This flexibility is achieved by allowing the term structures to also possess different curvatures. Thus, the model captures the difference in credit risk using the Legendre polynomials of degree zero and two (which are responsible, respectively, for parallel shifts and changes in curvature of the term structure of interest rates). This specification allows the Brady term structure to present a distinct decomposition when compared to the Global term structure, with respect to parallel shifts and changes in curvature. In this particular case, the two curves present the same rotation factor with respect to the benchmark curve. Later in this article we shall investigate the use of different rotation factors. The coefficients of the two curves are estimated jointly, using the regression described above.

Portfolios

Suppose we wanted to estimate the Value at Risk of two portfolios on November 10, 2000, using the model just described.
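Before turning to the example portfolios, the cross-sectional estimation step just described can be sketched in Python. The sketch assumes continuously compounded discounting and, for brevity, ignores accrued interest and embedded options; the two bonds, their cash flows and prices are entirely hypothetical.

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import least_squares

# Hypothetical bonds: (price per unit of face value, [(time in years, cash flow per 100 of face), ...])
bonds = [
    (0.95, [(0.5, 4.0), (1.0, 104.0)]),
    (0.88, [(1.0, 5.0), (2.0, 5.0), (3.0, 105.0)]),
]
l_max = 30.0
benchmark = lambda t: 0.06 * np.ones_like(t)       # flat benchmark curve, for illustration

def model_price(coeffs, cashflows):
    t = np.array([tc[0] for tc in cashflows])
    cf = np.array([tc[1] for tc in cashflows])
    rate = benchmark(t) + L.legval(2.0 * t / l_max - 1.0, coeffs)
    return np.sum(cf * np.exp(-rate * t)) / 100.0  # convert to price per unit of face

def residuals(coeffs):
    return np.array([p - model_price(coeffs, cfs) for p, cfs in bonds])

fit = least_squares(residuals, x0=np.zeros(3))     # estimate c0, c1, c2
print(fit.x)
```

In the paper the regression additionally includes accrued interest and the option dummies, and is run jointly for the Brady and Global curves.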
Portfolio 1 presents the following composition (see also Table 1):
a) Long US$ 20 million in CBOND.
b) Long US$ 20 million in DCB.
c) Long US$ 10 million in GLB30.
d) Short US$ 20 million in EI.
e) Short US$ 15 million in IDU.
f) Short US$ 15 million in GLB01.
Portfolio 2 is composed of the same bonds and the same amounts as Portfolio 1, but it presents only long positions:
a) Long US$ 20 million in CBOND.
b) Long US$ 20 million in DCB.
c) Long US$ 10 million in GLB30.
d) Long US$ 20 million in EI.
e) Long US$ 15 million in IDU.
f) Long US$ 15 million in GLB01.

As mentioned before, we need first to estimate the probability distribution of the variations of the term structures. We apply the Historical Simulation approach (Jorion (2001)) for estimating the interest rate risk in these two portfolios. Obtaining the historical joint probability densities of the variations of the U.S. strips term structure, and of all orthogonal factors in Table 2, is a computer-intensive step. It requires running an optimization procedure for each day in the database to estimate the values of the orthogonal factors and, then, to estimate the historical term structures for Brady bonds and Global bonds. After obtaining the distributions for the orthogonal factors and for the U.S. strips, we can obtain the Brady and Global bonds term structure scenarios required by the Historical Simulation approach. In the numerical examples two hundred and fifty historical scenarios were generated. For each scenario, the associated term structures were used to price all bonds in the portfolios. At the end, we obtained the historical probability density of bond prices. We then constructed the probability density of the profits and losses of each portfolio by multiplying the returns of the bonds listed in the portfolio by the amounts (in US$) held in them; V_j denotes the resulting random variable measuring the profits and losses (in US$) of the j-th portfolio. From these distributions (historical simulation) we estimate the VaR of both the proposed portfolios and of each bond used in the estimation process.

Figure 3 depicts the marginal probability densities of the returns of the translation, rotation and torsion orthogonal factors related to the Brady and Global term structures of interest rates. Note from Table 3 that the five historical distributions in Figure 3 violate the hypothesis of normality. They all present kurtosis greater than three, as well as nonzero skewness.

[Table 3 and Figure 3 about here]

Figure 4 and Figure 5 present, respectively, the probability densities of the returns of Portfolio 1 and Portfolio 2. Note that both are skewed to the left and present fat left tails, meaning that extreme losses are more probable than extreme gains. For the reasons detailed in Duarte (1997), the Analytical Approach (Jorion (2001)) is not recommended to compute the VaR of these two portfolios.

[Figure 4 and 5 about here]

Table 4 presents the Value at Risk for each bond used in the estimation process, for two distinct confidence levels: 99% and 95%. Similarly, Table 5 presents the Value at Risk for the proposed portfolios at the 99% and 95% confidence levels. Table 6 shows the price correlation matrix for the bonds used in the estimation process.

[Table 4, 5 and 6 about here]

For the sake of illustration, we present two other models to decompose the spread of the Brady over the Global term structure. Table 7 presents the combinations of factors for each analyzed model. Model 1 corresponds to the model where the difference in risk is captured using only the translation factor. Model 2 captures the difference in credit risk using the translation and rotation factors. Finally, Model 3 represents the model used so far in this article (which is included here only for comparative purposes).

We provide in Figure 6 and Figure 7 the historical evolution of the Brazilian Brady and Global term structures during one year of analysis, for Model 1 and Model 2, respectively.
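The historical-simulation VaR computation reported in Tables 4 and 5 can be sketched in a few lines of Python. The scenario matrix, bond prices and positions below are placeholders standing in for the 250 historical scenarios and the fourteen Brazilian bonds; only the mechanics (reprice under each scenario, build the profit-and-loss distribution, read off a percentile) follow the text.

```python
import numpy as np

def historical_var(prices_today, scenario_prices, positions, confidence=0.99):
    """Historical-simulation VaR.

    prices_today    : (n_bonds,) current bond prices (per unit of face value)
    scenario_prices : (n_scenarios, n_bonds) bond prices repriced under each
                      historical term-structure scenario
    positions       : (n_bonds,) amounts held in each bond (negative = short), in US$
    confidence      : VaR confidence level, e.g. 0.99 or 0.95
    """
    returns = scenario_prices / prices_today - 1.0
    pnl = returns @ positions                          # portfolio P&L per scenario (US$)
    return np.percentile(pnl, 100.0 * (1.0 - confidence))

# Illustrative example with random scenarios (the paper uses 250 historical ones).
rng = np.random.default_rng(0)
prices_today = np.array([0.80, 0.75, 0.90])
scenarios = prices_today * (1.0 + 0.02 * rng.standard_normal((250, 3)))
positions = np.array([20e6, 20e6, -15e6])              # long, long, short (US$)
print(historical_var(prices_today, scenarios, positions, confidence=0.99))
```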
[Table 7, Figure 6 and 7 about here]

Observing these graphs, we can identify the most important factors responsible for interest rate risk. For instance, Figure 6 and Figure 7 reveal that the Brady rotation factor is more volatile in Model 2 than in the other models. This fact is in accordance with the specification of the model, which uses an extra free variable related to the rotation factor to describe the spread of the Brady over the Global term structure. Observing now the Global term structures, Figure 6 reveals that the rotation and torsion factors become more important as risk factors for the more recent observations of the time series for Model 1. On the other hand, Figure 7 indicates that the Global term structure suffers significant changes in its curvature from the beginning of the time series for Model 2. These pictures represent an interesting tool for identifying the regions where the scenarios for the evolution of the term structures might produce the most extreme movements.

Table 8 presents the estimated VaR for Portfolios 1 and 2 under the three models. It generalizes what was observed in Table 5 for Model 3: Portfolio 2 presents higher risk than Portfolio 1, for all models, at both the 99% and 95% confidence levels. Another interesting fact is related to the distribution of mass in the left tails of the density functions of the returns of Portfolios 1 and 2. Note that if a model generates the highest risk, among all models, at a fixed confidence level, this does not mean that it generates the highest risk at another confidence level. Observe also that the difference in estimated risk across models can be very large. For instance, if we compare the VaR for Portfolio 2, at the 99% confidence level, estimated by Models 1 and 3, we identify a difference of 30% (US$ -5,230,000 and US$ -3,680,000). These remarks indicate the importance of observing the risk at different confidence levels, and also of using models with different sources of risk to better capture the magnitude of possible losses, avoiding model risk when measuring market risk (Duarte (1997)).

Conclusion

We propose a methodology for estimating the Value at Risk of portfolios in fixed income emerging markets. It exploits the dynamics of the orthogonal factors obtained by the decomposition of the credit spread function into a linear combination of Legendre polynomials. This methodology produces a probability density function for the term structures of interest rates. It is possible to show that the use of the model described in Section 2.4 (to estimate the historical evolution of the whole term structures of emerging markets) generates dynamics equivalent to those obtained by using Principal Component Analysis in a market presenting an observable term structure. In other words, the methodology amounts to the application of Principal Component Analysis in markets with non-observable term structures, which is the case of fixed income emerging markets. This fact allows us to use the methodology for, at least, all fixed income applications that may apply Principal Component Analysis, such as risk analysis, portfolio allocation, and immunization techniques. Although the portfolios presented in the numerical examples were composed only of Brazilian fixed income instruments, the methodology can easily be extended to other financial markets (such as the U.S. corporate bond market).
Figure 2. A Joint Estimation of Term Structures of Brady and Global Bonds. The figure depicts the Brazilian Brady and Global bonds term structures of interest rates estimated on November 10, 2000, based on the model just described, with the U.S. strips playing the role of the benchmark curve. Values for the first three orthogonal factors for each curve, with their p-values, are given in Table 2.
Figure 4. Estimated Probability Density Function of the Returns of Portfolio 1.
Figure 5. Estimated Probability Density Function of the Returns of Portfolio 2.
Table 1. Brazilian Bonds Used in the Estimation Process.
Table 2. Orthogonal Factors for Two Brazilian Term Structures. *The first three Legendre polynomials correspond, respectively, to the translation, rotation and torsion factors. **Statistically significant at a 5% confidence level. ***Statistically significant at a 1% confidence level.
Table 3. Jarque-Bera Normality Test for the Factor Returns. *Reject the hypothesis of normality at a 1% significance level.
Table 4. Estimated Value at Risk for Global and Brady Bonds.
Table 5. Portfolios' Estimated Value at Risk Based on Historical Simulation.
Table 6. Correlation Matrix for the Bonds Used in the Estimation Process.
Table 7. Estimating VaR Using Different Combinations of Factors.
A repeat-until-success quantum computing scheme

Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes.

Introduction

One of the fascinating aspects of quantum computers is the potential ability to process efficiently certain computational tasks that are deemed intractable using classical computer technology [1]. Existing experiments in quantum information theory have focused on the implementation of sequential interactions of particles (quantum bits or qubits) based on a model of a quantum computer as a network of quantum logic gates [2,3]. Although the basic features of a quantum computer have been demonstrated to some extent, certain currently insurmountable challenges exist: scalability and decoherence, for instance. In a network of logic gates, a quantum computer proceeds reversibly. A different model of a scalable quantum computer has recently been proposed [4] in which the entire resource for the quantum computation rests initially in the form of a specific entangled state (a so-called cluster state) of a large number of qubits. Information is then written onto the cluster, processed, and read out from the cluster by single-qubit measurements only. Thus, unlike the usual paradigm of assembling logic devices for computation, this scheme is essentially irreversible. The entangled state of the cluster serves as a universal substrate for any quantum computation. One way to create cluster states is to employ systems with a quantum Ising-like interaction.

Photons are often regarded as a favourite qubit due to their long decoherence times and speed, as well as their ability to distribute themselves in an optical network. However, photons cannot interact directly with each other. Without non-linear effects [5,6], photons can only become entangled via post-selected measurements and local operations [7]-[10]. Moreover, linear optics alone does not permit complete Bell measurements [11]. Thus, such entangling operations between photons are necessarily probabilistic. Obtaining success probabilities close to unity therefore requires the presence of highly entangled ancilla states and quantum teleportation [7] as a universal quantum primitive [12]. First experiments demonstrating the feasibility of proposed schemes have already been performed [13]-[15].

Photons can be transmitted easily from point to point. Hence they are often regarded as 'flying' qubits. However, there is a trade-off to this advantage of easy distribution in quantum computing: it is generally difficult to store photons and to use them as quantum memory. 'Stationary' qubits, on the other hand, i.e. qubits realized through atoms or ions, provide good quantum memory due to the relatively long decoherence times of their internal ground states. For stationary qubits, it is relatively easy to implement single-qubit rotations and to read out information with very high precision. Experiments done in Innsbruck and Boulder have already demonstrated the feasibility of two-qubit gates for ion trap quantum computing [16,17].
However, ion trap quantum computing with more than 10 qubits remains challenging due to the relatively high vulnerability of two-qubit gate operations to decoherence. It is therefore natural to consider a hybrid platform based on both flying and stationary qubits. Numerous such schemes have already been explored [18]-[28], exploiting the benefits of stationary atomic and flying photonic qubits. Here, we discuss an example of such a robust and scalable hybrid quantum computing scheme, namely our recently proposed repeat-until-success (RUS) quantum computing scheme [27,28]. Under real conditions, when photon loss is a possibility, the scheme can be used for the efficient build-up of cluster states for one-way quantum computing, as we describe in section 3. Finally, we summarise our results in section 4.

Repeat-until-success scheme

The repeat-until-success (RUS) quantum computing scheme is based essentially on an atom-photon interaction. There are inherently three possible schemes for entangling single atoms in distant cavities using a generic atom-photon interaction (see figure 1). The first scheme involves sending a photon or photons from the first atom to the second one through an optical fibre or free space, thereby giving rise to an effective interaction between the atoms [22,24], with or without the need of measurement. The second scheme [19,23], [26]-[28] relies on photon emissions from each atom, whereby the state of each newly generated photon depends on the state of its respective source. In this case, an entangling measurement is performed on the photons, which subsequently creates entanglement between the atoms. Our RUS scheme falls into this class. In a third scheme, entangled photons from a common source are sent to each atom simultaneously [21,25]. If the photons are entangled, then this entanglement can be transferred onto the atoms via the cavity modes.

For the RUS quantum computing scheme, we need to focus a bit more on the second scheme. The main idea behind the RUS scheme is shown in figure 2. Consider two isolated single atom-cavity systems A and B interacting with two photons P and Q. System A gets entangled with system P (and B with Q) through an effective Hamiltonian such that

|0⟩_A |ξ⟩_P → |0⟩_A |h⟩_P,   |1⟩_A |ξ⟩_P → |1⟩_A |v⟩_P (and analogously for B and Q),

where |0⟩ and |1⟩ are the states of the atom systems, |h⟩ and |v⟩ describe a horizontally and a vertically polarised photon, and |ξ⟩ is the vacuum state of the photon systems. At this stage the systems A-P and B-Q remain separable. But if an appropriate measurement is made on the two photons P and Q, the two atoms A and B can become entangled. Indeed, the state of the whole system equals

½ (|0⟩_A |h⟩_P + |1⟩_A |v⟩_P) ⊗ (|0⟩_B |h⟩_Q + |1⟩_B |v⟩_Q)

if the initial state of the two atomic qubits equalled ½ (|0⟩_A + |1⟩_A) ⊗ (|0⟩_B + |1⟩_B). The subscripts refer to the atom systems A and B, and the photon systems P and Q. If we now project the state of the photons onto one of the four Bell states, namely (|hh⟩ ± |vv⟩)/√2 and (|hv⟩ ± |vh⟩)/√2, we easily see that the atom states (A and B) become entangled.

If the systems P and Q are atom systems, then a complete Bell-state measurement is possible [29]. However, if P and Q are photons, a complete Bell-state measurement is not possible using only passive linear optics elements [11]. Fortunately, we can still perform a partial Bell-state measurement. The key point of the RUS computing concept is that partial Bell-state measurements can be found such that the intended qubit gate operation is accomplished if a Bell state is measured. However, if a product state is measured, the original qubit state is nevertheless unmodified, up to known local unitary operations.
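A quick numerical check of the claim just made: the Python sketch below builds the joint atom-photon state above for atoms initially prepared in (|0⟩+|1⟩)/√2 each, projects the photon pair onto the Bell state (|hh⟩+|vv⟩)/√2 and, for comparison, onto the product state (|h⟩+|v⟩)(|h⟩+|v⟩)/2, and prints the resulting atomic states. These two particular outcomes are chosen only for illustration; the measurement basis actually used in the scheme is discussed below and in [28].

```python
import numpy as np

# Single-qubit basis vectors; photon polarisations |h>, |v> are mapped to |0>, |1>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2.0)

def kron(*states):
    out = np.array([1.0])
    for s in states:
        out = np.kron(out, s)
    return out

# Joint atom-photon state, ordering (atom A, atom B, photon P, photon Q).
psi = 0.5 * (kron(ket0, ket0, ket0, ket0) + kron(ket0, ket1, ket0, ket1)
             + kron(ket1, ket0, ket1, ket0) + kron(ket1, ket1, ket1, ket1))

def project_photons(psi, photon_state):
    """Project the two photonic qubits onto `photon_state` and return the
    normalised post-measurement state of the two atoms."""
    psi = psi.reshape(4, 4)                  # rows: atom pair, columns: photon pair
    atoms = psi @ photon_state.conj()
    return atoms / np.linalg.norm(atoms)

bell = (kron(ket0, ket0) + kron(ket1, ket1)) / np.sqrt(2.0)   # (|hh> + |vv>)/sqrt(2)
product = kron(plus, plus)                                    # (|h>+|v>)(|h>+|v>)/2

print("Bell outcome    -> atoms:", project_photons(psi, bell))     # (|00>+|11>)/sqrt(2)
print("Product outcome -> atoms:", project_photons(psi, product))  # |+>|+> (original)
```

The first projection leaves the atoms in (|00⟩+|11⟩)/√2, i.e. maximally entangled, while the second returns them to their original product state, which is the 'insurance' property exploited by the scheme.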
This is the concept of 'failure with insurance'.

Figure 3. Realisation of the partial Bell measurement for RUS quantum computing.

The photons are directed into a polarizing beam splitter (PBS), which transmits photons in the state |h⟩ but reflects photons in the state |v⟩. Thus photon pairs in the states |hh⟩ and |vv⟩ maintain separate paths after the PBS (anti-bunching), while photon pairs in the states |hv⟩ and |vh⟩ leave the setup together (bunching). The schematic setup for this process and the subsequent detection of the photons is shown in figure 3. It is instructive to see that the state of the atom-cavity systems after the passage of the photons through the PBS can be written as

½ (|00⟩|hh⟩ + |01⟩|hv⟩ + |10⟩|vh⟩ + |11⟩|vv⟩).   (5)

For simplicity we have neglected the indices which indicate the respective output ports of the photons. As mentioned above, if both photons have the same polarisation, they leave the setup through different output ports; if the two photons have different polarisations, they leave the setup through the same output port. By detecting the photons in a rotated basis, which does not distinguish the polarisations |h⟩ and |v⟩, it is thus possible to perform a measurement which distinguishes between two Bell states and two product states. For example, measuring the states (|h⟩ ± |v⟩)/√2 in each output port of the PBS corresponds to a projective measurement of the two-photon state that, applied to the atom-photon state in equation (5), distinguishes two Bell states from two product states. Such a measurement is called an incomplete Bell-state measurement. Thus, if the outcome corresponds to one of the two Bell states, the stationary qubits become maximally entangled. If the outcome corresponds to one of the two product states, the atoms can be reset to their original state by appropriate known local operations. The latter property is crucial for the realisation of repeat-until-success quantum gate operations, as arbitrary measurements may not necessarily recover the original state of the atoms. Since each possible measurement outcome occurs with probability 1/4 and the probability of a successful Bell-state detection is 1/2, we need to perform only an average of two measurements to complete a gate. The above measurement basis is chosen as an example to illustrate the principles of the scheme and is not restrictive. See [28] for general guidelines for the measurement basis selection and the realisation of maximally-entangling deterministic two-qubit gate operations for arbitrary initial states.

One-way quantum computer

In classical computation, it is possible to assemble a network of logic gates to perform various tasks. Examples of logic gates are the AND, OR, NOT, NAND and NOR gates. It has been shown that all of the above logic gates can be implemented by various configurations of one or more NAND (or NOR) gates. The negation is essential: two NOR gates can give an OR (positive variant) gate, but two OR gates in concatenation cannot give a NOR gate. Thus, we say that the NAND gate is a universal gate. Just as a NAND gate constitutes a universal gate for classical computation, the set of all one-qubit gates together with any two-qubit entangling gate forms a universal set of gates for quantum computation. However, in contrast to classical computation, these gates do not have to be performed successively. Instead of performing entangling quantum gate operations, a quantum computer can be prepared in a highly entangled state, a so-called cluster state. Once a cluster state has been generated, single-qubit rotations and single-qubit measurements are sufficient to simulate the performance of universal gate operations and to realise any possible quantum algorithm [4].
The simplest version is the linear chain for an arbitrary unitary operation on a qubit, shown in figure 4. One of the main applications of the RUS scheme is to create a cluster state. In our scheme, cluster states can be generated by attempting to bond qubits using the above-described entangling operation. Under realistic conditions, where photon loss is a possibility, this operation has three possible outcomes: 'success', 'repeatable' or 'insurance', and 'failure'. As we have seen earlier, observation of photon anti-bunching (or bunching) corresponds to a 'success' (or 'insurance'). Observing fewer than two photons denotes a failure. In this case, the static qubits are left in an unknown state. However, this damage can be repaired as follows. Firstly, each of the two qubits involved in the failed gate can be measured in the computational basis to determine the nature of the error. If either qubit was already part of a cluster state, the bonds to its neighbours within the cluster are also destroyed. However, the remainder of the cluster state can be recovered by applying appropriate single-qubit operations to these neighbouring qubits, conditional on the outcome of the measurement on the qubit involved in the failed controlled-phase (CZ) gate. Therefore, the cluster state can grow, shrink, or remain the same size, depending on whether the CZ operation was successful, failed, or failed with insurance. The possibility of the 'failed with insurance' scenario offered by the RUS scheme helps boost the growth rate of the cluster state [28]. The key to scalably generating cluster states is to attempt CZ operations between qubits such that the cluster state grows on average. Indeed, this can be accomplished efficiently with an appropriate cluster-state growing strategy [26,28], [30]-[32].

Concluding remarks

In this review article, we have described a proposed hybrid architecture for quantum computing using stationary and flying qubits. The scheme is largely implementation independent: it could be a single atom-cavity system or a system based on dipole-induced transparency. Moreover, quantum electrodynamic (QED) cavity implementations do not need to operate in the strong coupling regime. We showed that, despite the incompleteness theorem for optical Bell-state measurements, it is in principle possible to implement a deterministic gate between distant qubits [27,28]. For non-negligible noise, the gate becomes necessarily probabilistic. In order to achieve robustness against general decoherence, we construct cluster or graph states using the two-qubit RUS quantum gate. Even in the presence of photon loss, distributed quantum computation can still be performed with high fidelity. Our entangling operation, which produces the bonds in the graph states, is not limited to physically adjacent matter qubits. As a consequence, no extensive swapping operations need to be taken into account in the production of nontrivial graph states. This architecture for quantum computation is inherently distributed, and hence it is ideally suited for integrated quantum computation and communication purposes.
Gallager Exponent Analysis of Coherent MIMO FSO Systems over Gamma-Gamma Turbulence Channels

This paper studies the Gallager exponent for coherent multiple-input multiple-output (MIMO) free space optical (FSO) communication systems over gamma-gamma turbulence channels. We assume that perfect channel state information (CSI) is known at the receiver, while the transmitter has no CSI and equal power is allocated to all of the transmit apertures. Through the use of the Hadamard inequality, upper bounds on the random coding exponent, the ergodic capacity and the expurgated exponent are derived over gamma-gamma fading channels. In the high signal-to-noise ratio (SNR) regime, simpler closed-form upper bound expressions are presented to obtain further insights into the effects of the system parameters. In particular, we found that the effects of small- and large-scale fading are decoupled for the ergodic capacity upper bound in the high SNR regime. Finally, a detailed analysis of Gallager's exponents for space-time block code (STBC) MIMO systems is presented. Monte Carlo simulation results are provided to verify the tightness of the proposed bounds.

Introduction

Over the past few years, the ergodic capacity has been intensively investigated over various types of fading channels for single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, since it determines the fundamental limit on the achievable information rates of communication systems [1][2][3][4][5]. However, since this metric is not sufficient to reflect the limits of communication systems, a stronger form of the channel coding theorem has been pursued to describe the relation among the error probability P_e, the codeword length N and the information rate R. Specifically, it is shown that for any rate less than the channel capacity, the error probability of the optimal block code satisfies [6,7]

P_e \leq \exp\left(-N E(R)\right),   (1)

where E(R) is defined as the reliability function or error exponent and is typically difficult to obtain. According to Equation (1), the error probability approaches zero as the codeword length tends to infinity for any rate below the channel capacity. However, it is difficult to find the supremum of the function E(R) through this expression. The classical lower bound on the error exponent, known as the random coding error exponent or Gallager's exponent [8], is easily computable and has been used to estimate the codeword length required to achieve a prescribed error probability. Since then, a large amount of research has investigated the random coding error exponent for single-input single-output (SISO) and single-input multiple-output (SIMO) flat-fading channels.

The received electric field at the n-th, n ∈ {1, 2, · · · , N_r}, receiver aperture from the m-th, m ∈ {1, 2, · · · , N_t}, transmit aperture is given by

E_{mn}(t) = \sqrt{P_{mn}}\, A_{s,m} \exp\left(j(\omega t + \phi_{mn} + \phi_{s,m})\right),   (2)

where P_{mn} denotes the received power and is subject to optical scintillation; ω is the optical carrier frequency of the transmit signal laser; φ_{mn} represents the overall phase noise from the m-th transmit aperture to the n-th receiver aperture and can be modeled as a Wiener process [18]; and A_{s,m} and φ_{s,m} are the encoded amplitude information and encoded phase information, respectively. The electric field of the local oscillator (LO) can be expressed as

E_{LO}(t) = \sqrt{P_{LO}} \exp\left(j(\omega_{LO} t + \phi_{LO})\right),   (3)

where P_{LO} is the power of the LO, ω_{LO} denotes the optical carrier frequency of the LO, and φ_{LO} represents the phase noise of the LO.
Using the 2 × 4 90° optical hybrid mixer and two pairs of balanced photodetectors [19], four output photocurrents i_1(t), i_2(t), i_3(t), i_4(t) are obtained, corresponding to the 0°, 90°, 180° and 270° branches, respectively, where R_{oe} denotes the photodiode responsivity. Note that we have assumed that carrier synchronization at the receiver is perfect. The in-phase and quadrature signals are then obtained from these photocurrents. Thus, the n-th received signal at the input of the decoder can be expressed as

y_n(t) = \sum_{m=1}^{N_t} \eta_{mn}\, A_{s,m} \exp\left(j(\Delta\phi_{mn} + \phi_{s,m})\right) + w_n(t),   (6)

where Δφ_{mn} = φ_{mn} − φ_{LO} is assumed to be uniformly distributed between 0 and 2π for convenience. The signal w_n(t) is zero-mean complex Gaussian noise with independent, equal-variance real and imaginary parts. According to Equation (6), the term h_{mn} = η_{mn} exp(jΔφ_{mn}) can be regarded as the channel fading, and ||h_{mn}||² = I_{mn} follows the gamma-gamma distribution given by Equation (8) when the intensity of h_{mn} is normalized.

Therefore, based on the above, the received signal at the input of the decoder for the k-th coherence interval can be expressed as

Y_k = H_k X_k + W_k,   (7)

where X_k ∈ C^{N_t × N_c} represents the transmitted signal matrix, H_k ∈ C^{N_r × N_t} is the channel gain matrix, and W_k ∼ N_{N_r, N_c}(0, N_0 I_{N_r}, I_{N_c}) ∈ C^{N_r × N_c} is additive white Gaussian noise (AWGN). The entries of H_k are denoted by h_{k,i} (i = 1, 2, · · · , N_r N_t) and are assumed to be statistically independent, with amplitude and phase following the Generalized-K (K_G) and uniform distributions, respectively [20,21]. According to [22], the so-called gamma-gamma distribution considered here is equivalent to the squared K_G distribution and is given by

f_I(I) = \frac{2(ab)^{(a+b)/2}}{\Gamma(a)\Gamma(b)\,\Omega} \left(\frac{I}{\Omega}\right)^{\frac{a+b}{2}-1} K_{a-b}\!\left(2\sqrt{\frac{ab\,I}{\Omega}}\right), \quad I > 0,   (8)

where K_v(·) denotes the modified Bessel function of the second kind of order v, Γ[·] is the Gamma function, and Ω is the mean, i.e., E[I] = Ω with E denoting expectation. The large-scale fading parameter a > 0 and the small-scale fading parameter b > 0 are the distribution shaping parameters; they can be expressed in terms of the Rytov variance σ_R² = 1.23 C_n² k^{7/6} L^{11/6} and the aperture parameter d = \sqrt{kD²/(4L)}, with k = 2π/λ the optical wave number, L the length of the optical link, D the receiver's aperture diameter and C_n² the refractive index structure constant, which can be used as a measure of the strength of the turbulence and varies from 10^{−17} m^{−2/3} for weak turbulence to 10^{−13} m^{−2/3} for strong turbulence. The typical parameters for the wavelength, the receiver's aperture and the length of the optical link were set to 850 nm, 0.01 m and 1000 m, respectively [23].

Moreover, the input signal matrix X_k is assumed to satisfy an average power constraint, i.e., tr(Q) ≤ P, where Q represents the N_t × N_t positive semidefinite input covariance matrix and P is the total transmit power over the N_t transmit apertures. In the later analysis, we define the random matrix Θ as Θ = H H^† if N_r ≤ N_t and Θ = H^† H otherwise, and let s ≜ min{N_t, N_r} and t ≜ max{N_t, N_r}. Specifically, note that the MIMO channel can be collapsed into a single channel for each symbol when employing the space-time block code (STBC) technique [24]. Thus, the effective output symbol SNR is given by

\gamma_{STBC} = \frac{P}{R_c N_t N_0}\, \|H\|_F^2,

where R_c and || · ||_F are the code rate and the Frobenius norm, respectively. Without loss of generality, a full-rate STBC is assumed, such that R_c = 1. We can omit the index k, since the channel is memoryless and stationary over each coherence interval.
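As a concrete illustration of this channel model, the short Python sketch below draws gamma-gamma fading samples (as the product of two independent unit-mean gamma variates, one for large-scale and one for small-scale fading), builds a channel matrix with uniform phases, and evaluates the full-rate STBC effective SNR. The shaping parameters, array sizes and SNR are illustrative values, not those of the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_gamma_intensity(a, b, omega, size):
    """Gamma-gamma samples as the product of two independent unit-mean gamma
    variates (large-scale x small-scale), scaled to mean `omega`."""
    x = rng.gamma(shape=a, scale=1.0 / a, size=size)
    y = rng.gamma(shape=b, scale=1.0 / b, size=size)
    return omega * x * y

def channel_matrix(n_r, n_t, a, b, omega=1.0):
    """Channel with gamma-gamma distributed intensities and uniform phases."""
    intensity = gamma_gamma_intensity(a, b, omega, (n_r, n_t))
    phase = rng.uniform(0.0, 2.0 * np.pi, (n_r, n_t))
    return np.sqrt(intensity) * np.exp(1j * phase)

# Illustrative parameters: moderate turbulence, 2x4 MIMO, 10 dB average SNR.
a, b, n_t, n_r, snr = 4.0, 2.0, 2, 4, 10.0 ** (10.0 / 10.0)
H = channel_matrix(n_r, n_t, a, b)
gamma_stbc = snr * np.linalg.norm(H, "fro") ** 2 / n_t   # full-rate STBC (R_c = 1)
print(gamma_stbc)
```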
Gallager's Exponent

In this subsection, we present a detailed description of Gallager's exponent, which gives an upper bound on the error probability of maximum-likelihood (ML) decoding for a channel with continuous inputs and outputs. Additionally, Gallager's exponent provides significant insight into the reliability-rate tradeoff in MIMO communication. Specifically, it has been shown that the diversity-multiplexing tradeoff of MIMO channels can be regarded as a special case of the reliability-rate tradeoff in the high SNR regime [25].

(1) Random coding exponent: The random coding bound on the error probability of ML decoding was developed in [8] and is given by Equation (14). The bound involves a number of free parameters, namely r, δ ≥ 0 and the input distribution p_X(X). The random coding exponent E_r(p_X(X), R, N_c) in Equation (14) is defined as in Equation (16), with E_0(p_X(X), ρ, r, N_c) shown in Equation (17). Generally, optimizing the input distribution p_X(X) to maximize the error exponent is a difficult task. However, the assumption of a capacity-achieving Gaussian distribution for p_X(X), subject to the power constraint, makes the problem analytically tractable, although it is strictly valid only if the rate R approaches the channel capacity. As such, p_X(X) is given by

p_X(X) = \pi^{-N_t N_c} \det(Q)^{-N_c}\, \mathrm{etr}\left(-Q^{-1} X X^\dagger\right),   (18)

where etr(·) = e^{tr(·)}. Substituting Equation (18) into Equation (17), we then have ([7], Proposition 1):

Proposition 1. Equation (16) can be maximized with equal transmit power when Gaussian inputs are assumed, i.e., Q = (P/N_t) I_{N_t}.

Proof. A proof is given in Appendix A.

For the case of equal power allocation to each transmit aperture, Equation (19) can be further reduced, after some algebraic manipulations, to the expression Ẽ_0(ρ, β, N_c) given in Equation (20). The random coding exponent in Equation (16) then follows as in Equation (21). As shown in [7], a new upper bound on the error probability, Equation (22), will be used for estimating the required codeword length L = N_c N_b, given the rate R and a prescribed P_e; here ⌈·⌉ denotes the smallest integer larger than or equal to the enclosed quantity. β*(ρ) in Equation (22) denotes the value of β that maximizes Ẽ_0(ρ, β, N_c) defined in Equation (20) for each ρ, and lies in the range 0 < β ≤ N_t.

(2) Ergodic capacity: According to [7], the information rate R can be expressed in a corresponding closed form. Note that R becomes identical to the Shannon (ergodic) capacity C defined in [1] when ρ = 0 and β = N_t, such that

C = E\left[\ln \det\left(I_{N_r} + \frac{\gamma}{N_t} H H^\dagger\right)\right],   (24)

where γ = P/N_0 denotes the SNR. The above formula indicates the relation between the Gallager exponent and the Shannon capacity.

(3) Expurgated exponent: It has been shown in [8] that the random coding exponent is obtained by choosing the codewords independently according to the input distribution p_X(X). In other words, the good and bad codewords contribute equally to the overall average error probability. However, the poor codewords dominate the average error probability and have an adverse effect on the random coding exponent. Thus, the random coding exponent can be improved by expurgating poor codewords from the ensemble; the resulting expurgated exponent is given by Equation (25), with E_x(p_X(X), ρ, r, N_c) defined in Equation (26). For the Gaussian input distribution and equal power allocation at the transmitter, this expression can be evaluated explicitly, and the expurgated exponent in Equation (25) follows accordingly.

Gallager's Exponent for Gamma-Gamma Block Fading Channels

In this section, we present upper bounds on Gallager's exponent for coherent MIMO FSO systems over gamma-gamma fading channels. These results are established with the help of the Hadamard inequality.
It should be noted that the derived bounds are primarily of mathematical interest. However, analytical expressions of Gallager's exponent are obtained for MIMO FSO systems employing the STBC scheme, and their tightness is verified through comparison with the exact results. Independent and identically distributed (i.i.d.) fading is considered here for convenience.

Random Coding Exponent Analysis

Using the Hadamard inequality, we first investigate the random coding exponent and give the upper bound as follows.

Proposition 2. The random coding exponent for coherent MIMO FSO systems over gamma-gamma fading channels can be upper bounded as in Equation (29), where χ denotes the sum of t statistically independent and identically distributed (i.i.d.) gamma-gamma variables. According to [22], a sum of t i.i.d. gamma-gamma variates with parameters (a, b, Ω) can be approximated efficiently by a single gamma-gamma variate χ with parameters (a_t, b_t, Ω_t), determined by a, b, Ω and t as given in [22]. The expectation in Equation (29) can then be evaluated in closed form; see Equation (32).

The derived upper bound involves the Meijer G-function, which does not lend itself to further analysis. In the following, we derive a simpler expression for Ē_r(R, N_c) in the high SNR regime to gain more insight.

Corollary 1. For MIMO gamma-gamma fading channels using coherent detection, the random coding exponent in the high SNR regime can be approximated by a simpler closed-form expression.

Proof. At high SNRs, 1 + γχ/N_t ≈ γχ/N_t, and the approximation holds provided min(a_t, b_t) > N_c ρ; the last step is given in Equation (35).

Corollary 2. The upper bound of the random coding exponent in the high SNR regime, Ē_hsnr, is a monotonically decreasing function of the channel coherence parameter N_c.

Proof. We prove the corollary by showing that the first derivative of Ē_hsnr with respect to N_c is strictly negative; here ψ(·) denotes the digamma function.

In Figure 2, we have plotted the random coding exponent for various MIMO systems. It can be seen that the upper bound becomes tighter as the number of apertures increases and almost overlaps with the exact exponent for t ≫ s; this is because the matrix Θ becomes approximately diagonal for large t, so that the Hadamard inequality is nearly tight. Specifically, we found that the upper bound Ē_r(R, N_c) overlaps with the exact random coding exponent for the single-input multiple-output (SIMO) or multiple-input single-output (MISO) channel, namely when s = 1. Additionally, the upper bound Ē_hsnr(R, N_c) is also included in Figure 2 and gives a reasonable reference for Ē_r(R, N_c) in the high SNR regime. (Figure 2 assumes Ω = 1, N_c = 2 and R = 2 nats/symbol.)

In Figure 3, we investigate the effect of the channel coherence time N_c on the random coding exponent. It can be observed that the channel coherence time N_c plays an important role in the error exponent. Note that the ergodic capacity with perfect CSI at the receiver is independent of N_c, which is consistent with results in the literature. (Figure 3 assumes Ω = 1 and γ = 10 dB.)

Table 1 shows the required codeword length L for MIMO gamma-gamma fading channels with N_t = 2, N_r = 8, Ω = 1, N_c = 3 at P_e ≤ 10^{−6}. It is clear that there is a considerable reduction in the required codeword length from strong turbulence to weak turbulence. As expected, a higher SNR results in a shorter required codeword length for achieving the prescribed error probability P_e.

Table 1. Required codeword lengths L over gamma-gamma fading channels at rate R = 9 nats/symbol with P_e ≤ 10^{−6}, N_t = 2, N_r = 8, Ω = 1 and N_c = 3.
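The tightness of the Hadamard-type bounds discussed above can also be illustrated numerically. The sketch below estimates the exact ergodic capacity E[ln det(I + (γ/N_t) H H†)] of Equation (24) by Monte Carlo over gamma-gamma fading and compares it with the bound obtained by keeping only the diagonal entries of the Gram matrix (the Hadamard step used in the proofs). The turbulence parameters and SNR are illustrative, and the comparison is meant only to show how the gap shrinks as t = max(N_t, N_r) grows relative to s = min(N_t, N_r).

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_gamma(a, b, omega, size):
    # Product of two independent unit-mean gamma variates, scaled to mean omega.
    return omega * rng.gamma(a, 1.0 / a, size) * rng.gamma(b, 1.0 / b, size)

def capacity_and_hadamard_bound(n_t, n_r, a, b, snr, trials=5000):
    """Monte Carlo estimate of E[ln det(I + (snr/n_t) H H^+)] and of the
    Hadamard-type upper bound obtained by keeping only the diagonal of the
    s x s Gram matrix (product of diagonal entries instead of determinant)."""
    s_dim = min(n_t, n_r)
    cap = np.zeros(trials)
    bound = np.zeros(trials)
    for k in range(trials):
        intensity = gamma_gamma(a, b, 1.0, (n_r, n_t))
        phase = rng.uniform(0.0, 2.0 * np.pi, (n_r, n_t))
        H = np.sqrt(intensity) * np.exp(1j * phase)
        G = H.conj().T @ H if n_r >= n_t else H @ H.conj().T   # s x s Gram matrix
        cap[k] = np.linalg.slogdet(np.eye(s_dim) + (snr / n_t) * G)[1]
        bound[k] = np.sum(np.log(1.0 + (snr / n_t) * np.diag(G).real))
    return cap.mean(), bound.mean()

# Illustrative check: the bound tightens as t = max(n_t, n_r) grows relative to s.
for n_r in (2, 8):
    print(n_r, capacity_and_hadamard_bound(n_t=2, n_r=n_r, a=4.0, b=2.0, snr=10.0))
```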
Ergodic Capacity Analysis

In this subsection, our focus is on the derivation of the ergodic capacity of coherent MIMO FSO systems over gamma-gamma turbulence channels, based on the Hadamard inequality.

Proposition 3. The ergodic capacity of the MIMO gamma-gamma channel is upper bounded as in Equation (40).

Proof. Using Hadamard's inequality, Equation (24) can be upper bounded, which leads to Equation (40).

To obtain further insights, a simplified formula for the capacity upper bound in the high SNR regime is presented next.

Corollary 3. In the high SNR regime, the ergodic capacity upper bound C̄ can be approximated as in Equation (42).

Proof. At high SNRs, ln(1 + γχ/N_t) can be approximated by ln(γχ/N_t), and the result follows; in deriving it we have used the relations shown in Equation (44).

The above corollary reveals that the effects of small- and large-scale fading are decoupled in the high SNR regime, which is consistent with the results shown for Nakagami channels ([29], Corollary 5).

Corollary 4. The high-SNR capacity upper bound C̄_hsnr is a monotonically increasing function of the large-scale fading parameter a.

Proof. It is easy to show that the first derivative of C̄_hsnr with respect to a is greater than zero; this is done in Equation (45).

Figure 4 presents the Monte Carlo simulation results, the analytical expression of Equation (40) and the high-SNR approximation of Equation (42) for the ergodic capacity of various MIMO systems. It can be seen that the upper bound C̄ provides a reasonable reference to the actual performance for a large MIMO system. In addition, the derived bound C̄ gives the exact capacity results for a SIMO or MISO channel, namely when s = 1. The same conclusion can also be drawn from Figure 5. Note that, for a fixed number of transmit apertures N_t, an increasing number of receiver apertures N_r helps overcome the effect of fading. For instance, when N_r = 2, the ergodic capacity increases considerably as a ranges from 1 to 9. However, the difference is almost negligible for N_r = 32.

Expurgated Exponents

The expurgated exponent for MIMO FSO systems is considered in this subsection.

Proposition 4. The expurgated exponent for coherent MIMO FSO systems over gamma-gamma fading channels can be upper bounded in the same manner as the random coding exponent.

Proof. The proof follows a similar line of reasoning as in Proposition 2.

Corollary 5. At high SNRs, the above bound Ē_ex(p_X(X), R, N_c) reduces to a correspondingly simpler closed-form expression.

Proof. The proof follows a similar line of reasoning as in Corollary 3 and is omitted here.

In Figure 6, the expurgated exponent is plotted as a function of R for different coherence times over a strong turbulence channel. As expected, system performance becomes worse with increasing coherence time N_c.

Error Exponent for MIMO-STBC Systems

It has been shown in Section 2 that the MIMO system reduces to a SISO system when employing the STBC technique; let Ξ ≜ ||H||_F². The probability density function (pdf) of Ξ then follows a gamma-gamma distribution with parameters (a_st, b_st, Ω_st), of the same form as Equation (8). Accordingly:

(1) STBC random coding exponent: Note that Equation (20) can be simplified into Equation (51).

Proposition 5. The random coding exponent E_r,STBC for MIMO STBC systems can be derived in closed form as Equation (52). According to [29] (Example 2), Equation (52) can be regarded as a lower bound of E_r(R, N_c), namely E_r,STBC(R, N_c) ≤ E_r(R, N_c).

Proof. The proof follows a similar line of reasoning as in Proposition 2 and is omitted here.

(2) STBC ergodic capacity:

Corollary 6. The ergodic capacity of STBC over MIMO gamma-gamma fading channels can be expressed in closed form as Equation (53).

Proof. The proof follows a similar line of reasoning as in Proposition 3.
(3) STBC expurgated exponent: The expurgated exponent of STBC over gamma-gamma MIMO fading channels can be obtained in the same way. Then, in order to obtain additional insights for E_r,STBC, C_STBC and E_ex,STBC, we elaborate on the high-SNR regime and obtain correspondingly simplified expressions.

In Figure 7, we present the results for the random coding exponent of STBC over the strong turbulence channel; the analytical results were derived according to Equation (52). It can be seen that the random coding exponent decreases monotonically with the parameter N_c. In other words, it is impossible to transmit information at a positive rate with arbitrarily small error probability when N_c → ∞. As expected, the ergodic capacity is independent of the coherence time N_c. (Figure 7 assumes Ω = 2.5.)

In Figure 8, we have plotted the random coding exponent and the expurgated exponent under different turbulence strengths. It can be observed that there is a performance improvement as both shaping parameters a and b increase, i.e., moving from strong turbulence to weak turbulence channels, which indicates that a shorter code is required to achieve the same level of reliable communication. The same conclusion can also be drawn from Table 2. (Figure 8 assumes Ω = 2.5.)

Conclusions

In this paper, a detailed analysis of Gallager's exponent for coherent MIMO FSO systems was presented in order to investigate the fundamental tradeoff between communication reliability and information rate. In particular, we considered gamma-gamma fading channels, which have been used extensively in the performance analysis of FSO communication systems. For the considered models, upper bounds on the random coding exponent, the ergodic capacity and the expurgated exponent were derived by virtue of the Hadamard inequality, which allows us to avoid calculating the eigenvalue distribution of the channel matrix. Moreover, in the high SNR regime, we derived simple closed-form expressions of the upper bounds to gain further insight into the effects of the system parameters, including the shaping parameter a and the numbers of apertures N_t, N_r. Note that the effects of small- and large-scale fading were found to be decoupled for the ergodic capacity upper bound at high SNRs. The performance metrics for MIMO FSO systems employing the STBC scheme were also investigated. We noticed that larger values of a and b tend to increase Gallager's exponent, i.e., communication reliability.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

We then formulate an optimization problem under the power constraint \sum_{l=1}^{N_t} \lambda_l \leq P (A4). A Lagrange multiplier method can be employed to solve Equation (A4). It can be observed that the maximization of Equation (A2) is equivalent to optimizing a corresponding expression in the eigenvalues of Q. For convenience, we define M = \left(Q^{-1} - r I_{N_t}\right)^{-1}. Note that the matrix M is also symmetric, with its eigenvalues collected in Σ = diag[σ_1, · · · , σ_{N_t}]. Specifically, equality holds in Equation (A9) when U M U^† is diagonal, which indicates that Q should be diagonal. Therefore, based on the above argument, Equation (19) can be maximized with Q = (P/N_t) I_{N_t}.

Appendix B

Note that for any non-negative definite matrix A ∈ C^{n×n}, the following inequality holds:

\det(A) \leq \prod_{i=1}^{n} A_{ii},

which is known as Hadamard's determinantal inequality [30]. Thus, we obtain Equation (A11), where χ denotes the sum of t i.i.d. gamma-gamma random variates. By substituting Equation (A11) into Equation (21), Equation (29) is then obtained. This completes the proof.
2020-11-05T09:05:24.355Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "af0ae32c0ba368ee7e679e71764ab4b4b199d525", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/22/11/1245/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "804069e3915ca3bd08625b96323cea78829b8f2c", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine", "Physics" ] }
8131297
pes2o/s2orc
v3-fos-license
Designing a Profit-Maximizing Critical Peak Pricing Scheme Considering the Payback Phenomenon Critical peak pricing (CPP) is a demand response program that can be used to maximize profits for a load serving entity in a deregulated market environment. Like other such programs, however, CPP is not free from the payback phenomenon: a rise in consumption after a critical event. This payback has a negative effect on profits and thus must be appropriately considered when designing a CPP scheme. However, few studies have examined CPP scheme design considering payback. This study thus characterizes payback using three parameters (duration, amount, and pattern) and examines payback effects on the optimal schedule of critical events and on the optimal peak rate for two specific payback patterns. This analysis is verified through numerical simulations. The results demonstrate the need to properly consider payback parameters when designing a profit-maximizing CPP scheme. Introduction Demand response (DR) programs give customers a more active role in the operation of the power system, allowing them to change their consumption patterns.They have been implemented to ensure secure power system operation when the system suffers from severe supply-demand imbalances [1].The main operators of DR programs are load serving entities (LSEs) that supply electricity to contracted customers (i.e., utilities).Recent deregulation of the power industry has made it possible for DR programs to be implemented in a market environment [2].Accordingly, an LSE could become a demand-side participant in the market, aiming to maximize its profit through the DR program [3]. In a market context, the LSE's profit depends on the difference between the purchase and resale prices.The purchase prices are determined for a specified time interval (e.g., every five min) based on the supply and demand for electricity [4].They are inherently time-varying, and will be denoted here as "real-time market clearing prices" (RTMCPs).Since the resale (or retail) rates are relatively fixed compared to the RTMCPs, a sudden increase in the RTMCP leads to a corresponding reduction in the LSE's profits [5].As a result, dynamic pricing schemes are typically designed and included in DR programs to take variations in the RTMCPs into consideration [6]. Such dynamic pricing schemes include three main approaches: real-time pricing (RTP), time-of-use (TOU), and peak pricing (CPP) [6].This study focuses on CPP, which has certain advantages over RTP and TOU.In RTP, customers are exposed to continuously changing prices; they must thus repeatedly decide whether to respond to price changes in order to reduce their electricity bills.By contrast, in a CPP scheme, customers must make decisions about whether to reduce consumption only when critical events occur.Thus, CPP is simpler to implement than RTP, particularly for residential customers [7].TOU is also easy to implement, as it consists only of a few block rates.However, these rates must be announced to customers in advance, making TOU unable to manage sudden increases in the RTMCP.CPP can thus complement TOU by dynamically applying the peak rate during critical situations of high RTMCP [7]. 
In terms of the design of a CPP scheme, two main problems have been addressed in the literature.One, the event scheduling problem, seeks to determine when critical events should be triggered to maximize or minimize a certain target outcome, such as profits.Past research has formulated and solved the event scheduling problem so as to maximize profit via dynamic programming based on the forecasted price and demand [8].In [9], the problem is solved through a stochastic approach considering the uncertainties inherent in price and temperature, also with the goal of profit maximization.Integer programming is used to solve the problem in [10].The second problem in CPP design involves selection of the peak rate.Recent research [11] presents guidelines for determining the optimal peak rate (along with other CPP parameters) to achieve maximum profits for an LSE.In [12], a methodology is proposed to determine the peak rate as well as the optimal events schedule considering variable wind power generation. In various DR programs, the interrupted or curtailed demand later appears as delayed consumption after the restriction is lifted [13][14][15][16][17][18].This phenomenon is referred to differently in the literature-as payback in [13][14][15] and load recovery in [16,17]-but here we refer to it as payback.Further, in [18], the payback is represented concisely as the cross-elasticity in a mathematical form of the elasticity matrix.In DR programs, the payback phenomenon occurs because demand is shifted in time, but the reduction in overall consumption is very small [19].Moreover, the paid-back amount may exceed the amount of curtailed demand because of losses from the energy conversion processes of customers' appliances [20].Because of the costs incurred in serving this delayed demand, the payback lessens the market value of demand-side resources [21].Therefore, some studies model the payback phenomenon as part of their economic analyses.In [13], the effect of payback on generation costs and the amount of peak reduction are evaluated for air-conditioning loads.In [16], a mathematical model of payback is developed, and its effects on each market participant are analyzed from an economic perspective. The payback phenomenon also occurs in CPP following a critical event [22].Accordingly, it affects the profits enjoyed by the LSE and thus must be appropriately considered when designing a profit-maximizing CPP scheme.The optimal peak rate in a CPP scheme is determined in [11] based on the assumption that customers' demand is not recovered but rather purely reduced when critical events take place.However, if the payback phenomenon occurs, the optimal peak rate given in [11] no longer ensures optimality.In addition, the main concerns of an LSE operating a CPP scheme are the optimal schedule of critical events and the resulting profit.However, if payback is present, the optimal event schedule and the LSE's profit, which are determined without considering payback as in [11], would change in a manner differing from the characteristics of the payback phenomenon.Nonetheless, few studies have examined the payback within CPP schemes; even fewer have presented how CPP parameters, such as the peak rate, should be chosen to maximize LSE profits considering the payback. 
This study thus extends the work presented in [11] and presents several analyses to fill these gaps in knowledge. First, the payback phenomenon is characterized using three parameters: duration, amount, and pattern. Further, a payback ratio and a payback function are introduced and defined for the payback amount and pattern, respectively. Second, the payback effects on the critical event scheduling problem are demonstrated based on the characteristics of customers' responses to a critical event. Finally, an analytical expression for the optimal peak rate considering payback is derived for a general payback pattern. Then, the payback effects on the optimal peak rate are further analyzed for two specific payback models: exponentially decreasing payback (EDP), to model an intense demand recovery a short time after a critical event, and uniformly distributed payback (UDP), to represent a redistribution of demand over a longer time period. In all these analyses, we adopt the customer price response model used in [23] to quantify the reduction in electricity consumption in response to a critical event.

The remainder of this paper is organized as follows. As background information, Section 2 briefly describes the customer price response model [23] and CPP design in the absence of payback [11]. Section 3 characterizes the payback phenomenon. The effects of payback on the design of a CPP scheme are analyzed in Section 4, the results of which are verified through numerical simulations in Section 5. Finally, Section 6 offers some concluding remarks.

Price Responsiveness Model

The response of customers to a price change is described in [23] as:

q_k = q_{0,k} [1 + ε_k (ρ_k − ρ_{0,k}) / ρ_{0,k}],    (1)

where q_{0,k} and ρ_{0,k} are the nominal demand and price, respectively; q_k and ρ_k are the modified demand and price, respectively; and ε_k is the customers' demand elasticity, defined as:

ε_k = (∂q_k / ∂ρ_k) (ρ_{0,k} / q_{0,k}).    (2)

The variable ε_k is negative because a price increase leads to a demand reduction. For example, when ε_k is equal to −0.01, demand decreases by 1% following a 100% increase in price.

CPP consists of two price levels: the off-peak rate, ρ_BASE, and the peak rate, ρ_PEAK. The off-peak rate is applied in most periods while the peak rate is applied only rarely, when critical events are triggered. When a critical event is triggered, the price changes from ρ_BASE to ρ_PEAK, and demand changes accordingly. Assuming that the elasticity of demand is constant at ε, the modified consumption, q_{CR,k}, during the critical event triggered in period k can be determined by replacing ρ_{0,k} and ρ_k in Equation (1) with ρ_BASE and ρ_PEAK, respectively, as [11]:

q_{CR,k} = q_{0,k} [1 + ε (ρ_PEAK − ρ_BASE) / ρ_BASE].    (3)

In reality, q_{0,k} does not occur when consumption has already changed to q_k in response to ρ_PEAK. Thus, q_{0,k} should be interpreted as forecast demand based on ρ_BASE. The cumulative curtailed demand Q_{CUR,k} for a critical event starting in period k can then be represented as:

Q_{CUR,k} = Σ_{i=k}^{k+D_CPP−1} (q_{0,i} − q_{CR,i}) = Σ_{i=k}^{k+D_CPP−1} q_{0,i} ε (1 − ρ_PEAK / ρ_BASE),    (4)

where D_CPP is the duration of the critical event. It should be noted that Q_{CUR,k} takes a positive value because both ε and (1 − ρ_PEAK / ρ_BASE) are normally negative.
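A minimal numerical sketch of Equation (3) is given below. The specific numbers (ρ_BASE = 4 cents/kWh, ρ_PEAK = 44 cents/kWh, ε = −0.05, q_{0,k} = 100 kWh) are assumed illustrative values chosen so that demand is halved during the event, matching the order of magnitude used in the Section 4 example; they are not data from the paper.

```python
def modified_demand(q0, rho_base, rho_peak, elasticity):
    """q_CR = q0 * (1 + eps * (rho_peak - rho_base) / rho_base), Equation (3)."""
    return q0 * (1.0 + elasticity * (rho_peak - rho_base) / rho_base)

q0, rho_base, rho_peak, eps = 100.0, 4.0, 44.0, -0.05   # illustrative values
q_cr = modified_demand(q0, rho_base, rho_peak, eps)
curtailed = q0 - q_cr      # one period's contribution to Q_CUR,k
print(q_cr, curtailed)     # 50.0 kWh served during the event, 50.0 kWh curtailed
```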
Designing a Critical Peak Pricing (CPP) Scheme without Payback

The event scheduling problem of CPP to maximize LSE profits can be formulated as:

max_{u_1, …, u_H} Σ_{k=1}^{H} (R_k − C_k),    (5)

where u_k is a binary event decision variable that takes a value of one if a critical event is triggered and zero otherwise, H is the scheduling time horizon of the problem, and R_k and C_k are the LSE's revenues and costs, respectively, in period k. When payback does not occur, R_k and C_k are defined as:

R_k = u_k ρ_PEAK q_{CR,k} + (1 − u_k) ρ_BASE q_{0,k},    (6)
C_k = ρ_RTMCP,k [u_k q_{CR,k} + (1 − u_k) q_{0,k}],    (7)

where ρ_RTMCP,k is the forecasted RTMCP in period k. The constraints consist of the conditions on the maximum number of events (N_CPP, Equation (8)), the maximum event duration (D_CPP, Equation (9)), and the minimum interval between successive events (Equation (10)) [8,11]. In Equations (8)–(10), u_k = 0 is assumed for k ≤ 0. These constraints are imposed in order to avoid inconveniencing customers by interrupting consumption through critical events. For example, Equation (8) prevents triggering an excessive number of critical events, and Equation (10) allows customers to return to normal consumption within a reasonable time. The existing techniques for solving such an optimization problem include dynamic programming [8] and integer programming [10]. In this study, we use the former methodology, as in [8], to solve the event scheduling problem; a brute-force sketch of a toy instance is given after this section.

In terms of the optimal peak rate, we first need to define the profit index, which is the additional profit that the LSE will receive from triggering a critical event. Suppose that K* denotes the optimal solution of the event scheduling problem for a given ρ_PEAK. Then, the profit index, PI_{N,k}(ρ_PEAK), for a critical event in period k can be expressed as [11]:

PI_{N,k}(ρ_PEAK) = Σ_{i=k}^{k+D_CPP−1} [(ρ_PEAK − ρ_RTMCP,i) q_{CR,i} − (ρ_BASE − ρ_RTMCP,i) q_{0,i}],    (11)

where the event duration is equal to the maximum event duration, D_CPP, because the LSE's profit is always maximized when the maximum event duration applies. Substituting Equation (3) into (11) makes the profit index a quadratic function of ρ_PEAK. Then, the critical point of the function is determined as the optimal peak rate without payback, ρ*_{N,PEAK}, which has the form [11]:

ρ*_{N,PEAK} = (1/2) [ρ_BASE (1 − 1/ε) + (Σ_{k∈K*} Σ_{i=k}^{k+D_CPP−1} ρ_RTMCP,i q_{0,i}) / (Σ_{k∈K*} Q_{0,k})],    (12)

where K* is the optimal event schedule for the optimal peak rate and Q_{0,k} is the cumulative consumption during all critical event periods starting from period k, which is expressed as:

Q_{0,k} = Σ_{i=k}^{k+D_CPP−1} q_{0,i}.    (13)

Characterization of the Payback Phenomenon

There are two key aspects that characterize the payback phenomenon. The first is the amount of paid-back demand. The curtailed demand, Q_{CUR,k}, may be under-, equally, or over-recovered following a critical event [13][14][15]. Thus, a payback ratio, denoted as κ_{PB,k}, is introduced as the ratio of recovered consumption to curtailed demand. The specific value of κ_{PB,k} depends on the composition of customer demand. For example, one does not compensate for turning off lights during a critical event period by greater light use later on; such demand thus tends to be connected with under-payback, or κ_{PB,k} < 1. In contrast, a heating or air-conditioning system may require more post-event energy for transition to the target value from the decreased or increased room temperature arising during the critical event period; this tends to result in over-payback, or κ_{PB,k} > 1. Let Q_{PB,k} be defined as the amount of paid-back demand for a critical event in period k. Then Q_{PB,k} can be represented as:

Q_{PB,k} = Σ_{i=k+D_CPP}^{k+D_CPP+D_PB,k−1} q_{PB,i} = κ_{PB,k} Q_{CUR,k},    (14)

where q_{PB,i} is the recovered demand in period i and κ_{PB,k} and D_{PB,k} are the payback ratio and duration, respectively, for the critical event in period k.
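To make the scheduling formulation of Equations (5)–(10) concrete, the sketch below solves a toy instance by brute-force enumeration rather than the dynamic programming used in the paper. All demand, price, and constraint values are hypothetical, and the minimum-interval constraint is enforced here as a gap of at least d_cpp + min_gap periods between successive event start times, which is one possible reading of Equation (10).

```python
from itertools import combinations

def event_profit_gain(k, d_cpp, q0, rtmcp, rho_base, rho_peak, eps):
    """Profit index of one event starting in period k (Equation (11), no payback)."""
    gain = 0.0
    for i in range(k, k + d_cpp):
        q_cr = q0[i] * (1.0 + eps * (rho_peak - rho_base) / rho_base)
        gain += (rho_peak - rtmcp[i]) * q_cr - (rho_base - rtmcp[i]) * q0[i]
    return gain

def best_schedule(q0, rtmcp, rho_base, rho_peak, eps, n_cpp, d_cpp, min_gap):
    H = len(q0)
    starts = range(H - d_cpp + 1)
    best = (0.0, ())
    for m in range(1, n_cpp + 1):
        for combo in combinations(starts, m):
            # minimum interval between successive events (assumed interpretation)
            if any(b - a < d_cpp + min_gap for a, b in zip(combo, combo[1:])):
                continue
            gain = sum(event_profit_gain(k, d_cpp, q0, rtmcp, rho_base, rho_peak, eps)
                       for k in combo)
            if gain > best[0]:
                best = (gain, combo)
    return best

q0 = [80, 100, 120, 110, 90, 85, 95, 130]      # hypothetical forecast demand
rtmcp = [3, 5, 9, 8, 4, 3, 5, 10]              # hypothetical forecast RTMCPs
print(best_schedule(q0, rtmcp, rho_base=4, rho_peak=44, eps=-0.05,
                    n_cpp=2, d_cpp=2, min_gap=2))
```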
The other key aspect of the payback phenomenon is its pattern. Let the payback function, f_{PB,k}(n), be defined as the ratio of paid-back demand to Q_{CUR,k} in the n-th time period from the end of the critical event. Then, Equation (14) is expressed as:

Q_{PB,k} = Q_{CUR,k} Σ_{n=1}^{D_PB,k} f_{PB,k}(n).    (15)

Comparing Equations (14) and (15), f_{PB,k}(n) should satisfy the condition:

Σ_{n=1}^{D_PB,k} f_{PB,k}(n) = κ_{PB,k}.    (16)

Let the normalized unit payback function, g_{PB,k}(n), satisfy the condition:

Σ_{n=1}^{D_PB,k} g_{PB,k}(n) = 1,    (17)

such that the payback function can be separately expressed by the payback ratio, κ_{PB,k}, and the payback pattern, g_{PB,k}(n):

f_{PB,k}(n) = κ_{PB,k} g_{PB,k}(n).    (18)

In real-world situations, it is difficult to specify a particular form for g_{PB,k}(n). Nonetheless, some studies suggest a payback function model for analytic purposes, particularly for water heating and air conditioning loads. Empirical results in [14] show that the payback pattern can be represented with an exponentially decreasing function for water heating loads. In [13] and [15], sets of decreasing values are specified as the payback function values for water heating loads and for both water heating and air conditioning loads, respectively. This study thus adopts an EDP function as a specific payback pattern to model an intensive recovery of demand over a short time period immediately after a critical event. This takes the form of:

g_{PB,k}(n) = μ e^{−n},    (19)

where μ is a constant, which is determined by solving the equation Σ_{n=1}^{D_PB,k} g_{PB,k}(n) = 1. Despite past studies [13][14][15] mentioning the EDP pattern, we cannot rule out the possibility that the curtailed demand is recovered fairly evenly during the payback period. Consequently, a constant function is used to model UDP and is analyzed as an additional specific payback pattern for comparison purposes. The UDP function is expressed as:

g_{PB,k}(n) = c,

where c is a constant that takes the value c = 1/D_{PB,k} from Equation (17); a short code sketch of both patterns follows this section.

Payback Effects on the Event Scheduling Problem

The optimal schedule of critical events that was determined without considering payback may no longer be optimal once the payback phenomenon is taken into account. Further, the LSE's profits may decrease if the additional costs arising from payback exceed the cost savings reaped through the critical event. A simple example in Figure 2 demonstrates such a scenario. Suppose that H = 4 and D_CPP = 1, with a single critical event allowed; the nominal demand, RTMCPs, and payback function values are given in Figure 2a, ρ_BASE = 4 cents/kWh, ρ_PEAK = 44 cents/kWh, κ_PB = 1, and the customers' price responsiveness is assumed as ε = −0.05. Under these parameter settings, customers eliminate half of their nominal demand when a critical event is triggered according to Equation (3). Figure 2b and Table 1 present the modified consumption and profit levels under four different scenarios. Without payback, the profit is largest when a critical event is triggered in period k = 2; when payback is considered, however, the profit-maximizing critical event period shifts to k = 3. In other words, the optimal event schedule changes due to payback. In addition, profit decreases from $30 in the case without payback to $28 in the case of payback with optimal scheduling. This clearly indicates that payback may have a negative effect on an LSE's profits, suggesting that the event scheduling problem in the presence of payback must be solved as a separate optimization problem.
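The payback patterns above are easy to prototype. The sketch below implements the normalized EDP and UDP functions and the resulting per-period paid-back demand; the unit decay rate in the EDP model and all numeric inputs are assumptions made for illustration.

```python
import numpy as np

def edp(d_pb, decay=1.0):
    """Exponentially decreasing payback, normalized so the values sum to one."""
    g = np.exp(-decay * np.arange(1, d_pb + 1))
    return g / g.sum()

def udp(d_pb):
    """Uniformly distributed payback: a constant 1/D_PB over the payback window."""
    return np.full(d_pb, 1.0 / d_pb)

def paid_back_demand(q_cur, kappa_pb, pattern):
    """Recovered demand per post-event period: kappa_PB * Q_CUR * g(n)."""
    return kappa_pb * q_cur * pattern

print(np.round(edp(3), 3))                  # approx [0.665 0.245 0.09]
print(paid_back_demand(50.0, 1.0, udp(3)))  # ~16.67 kWh in each of three post-event periods
```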
Payback Effects on the Optimal Peak Rate

As in Equation (11), the profit index including the payback arising from a critical event in period k, PI_{PB,k}(ρ_PEAK), is represented as:

PI_{PB,k}(ρ_PEAK) = PI_{N,k}(ρ_PEAK) + Σ_{n=1}^{D_PB,k} (ρ_BASE − ρ_RTMCP,k+D_CPP+n−1) f_{PB,k}(n) Q_{CUR,k}.    (20)

As with the procedure for ρ*_{N,PEAK} in Equation (12), the optimal peak rate considering payback, ρ*_{PB,PEAK}, can be obtained by substituting Equation (4) into (20), differentiating with respect to ρ_PEAK, and solving the resulting equation for ρ_PEAK. After rearranging the terms, a specific form of ρ*_{PB,PEAK} is obtained as Equation (21). The terms in the first square bracket in Equation (21) coincide with ρ*_{N,PEAK} in Equation (12), so the payback-induced change in the optimal peak rate can be isolated as Equation (23).

Equation (23) shows that, while maintaining the optimal schedule of critical events, the payback ratio, κ_PB, has a linear relationship with the amount of change in the optimal peak rate. κ_PB is not, however, related to whether the payback causes an increase or decrease in the optimal peak rate. On the other hand, the payback pattern, g_PB(n), and duration, D_PB, affect both the amount and sign of the change in the optimal peak rate.

Despite the relationships between the payback parameters and the optimal peak rate, it is not evident whether payback causes an increase or decrease in the optimal peak rate based only on Equation (23). This is because the change depends on the specific RTMCPs and nominal demand as well as the payback parameters; the interrelation among these factors is difficult to define conclusively, particularly in cases where N_CPP ≥ 2. As a result, the payback effects on the optimal peak rate will first be examined analytically for the simplest case (N_CPP = 1). These results will then be extrapolated to the general cases with N_CPP ≥ 2. Suppose that N_CPP = 1 and the optimal event schedule is determined. Then, the Q_{0,k} terms in the numerator and denominator of Equation (23) cancel one another out and the expression for ρ*_{PB,PEAK} can be simplified to Equation (24).

For EDP, most of the paid-back demand is concentrated in the initial time periods following the critical event. In addition, a critical event is usually triggered when the real-time market clearing price (RTMCP) is high, such that the RTMCPs in the early time periods of the payback phase are likely to be higher than ρ_BASE. This implies that the second term inside the bracket in Equation (24) tends to lower the optimal peak rate. For UDP, Equation (24) can be simplified as Equation (25). In Equation (25), all relevant RTMCPs are equally weighted in the calculation of ρ*_{PB,PEAK}. In addition, the RTMCPs are likely to be small, as the times in question are far from the critical event period. As a result, the absolute value of the change tends to be small for N_CPP = 1. Therefore, the above-presented analysis of payback effects on the optimal peak rate remains valid in cases with N_CPP ≥ 2 unless Q_{0,k} takes a very abnormal value for a certain critical event. Nonetheless, the payback effects on the optimal peak rate still depend strongly on the specific conditions of the RTMCPs and demand levels. As a result, the following section will perform numerical simulations for N_CPP = 3 given specific values of the RTMCP and demand; this will allow verification of the payback effects for N_CPP = 1 and validate their application to cases with N_CPP ≥ 2.
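Because the optimal peak rate depends on the particular RTMCPs and demand, a small numerical sweep makes the effect easy to see. The sketch below maximizes the single-event profit index of Equations (11) and (20) over a grid of candidate peak rates; every price, demand value, and the EDP pattern used here is an assumed illustration, not the paper's data, and the grid search simply stands in for the analytical optimum of Equations (12) and (21).

```python
import numpy as np

rho_base, eps = 4.0, -0.05               # off-peak rate (cents/kWh) and elasticity (assumed)
q0_event = np.array([100.0])             # forecast demand in the event period (kWh)
rtmcp_event = np.array([9.0])            # RTMCP during the event (cents/kWh)
rtmcp_post = np.array([6.0, 4.0, 3.0])   # RTMCPs over the payback window
kappa_pb = 1.0
g = np.exp(-np.arange(1, 4)); g /= g.sum()     # EDP pattern, D_PB = 3

def profit_index(rho_peak, with_payback):
    q_cr = q0_event * (1 + eps * (rho_peak - rho_base) / rho_base)
    pi = np.sum((rho_peak - rtmcp_event) * q_cr - (rho_base - rtmcp_event) * q0_event)
    if with_payback:
        q_pb = kappa_pb * np.sum(q0_event - q_cr) * g     # per-period paid-back demand
        pi += np.sum((rho_base - rtmcp_post) * q_pb)      # payback billed at the off-peak rate
    return pi

peaks = np.linspace(10, 120, 1101)
for flag in (False, True):
    idx = int(np.argmax([profit_index(p, flag) for p in peaks]))
    print("with payback" if flag else "no payback", "optimum near", round(peaks[idx], 1))
```

With these assumed numbers, the payback case yields a slightly lower optimal peak rate than the no-payback case, in line with the qualitative EDP discussion above.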
Simulation Methods

Actual data for future RTMCPs and demand is unavailable when an LSE designs a CPP scheme. Thus, these quantities must be forecasted for all periods within the scheduling time horizon. In this study's simulations, the autoregressive moving average (ARMA) method in [24] is used for the forecasting. Historical data on RTMCPs and demand levels, as announced by the Pennsylvania-New Jersey-Maryland Interconnection for 31 days in January 2014 [25], are used as input data for the ARMA method. The resulting forecasted data are shown in Figure 3. The time period length is assumed to be one hour, making the simulations' scheduling time horizon, H, equal to 744. The simulations are performed using the CPP parameters of N_CPP = 3 and a customer price responsiveness of ε = −0.05, and the payback parameters are assumed to be equal for all critical event periods.

Results: Payback Effects on the Optimal Event Schedule

The effects of payback on the event scheduling problem are examined under the conditions that κ_PB = 1 and D_PB = 3 h. For set-ups both with and without payback, the peak rate is arbitrarily selected as ρ_{N,PEAK} = ρ_{PB,PEAK} = 120 cents/kWh. Simulations of three different scenarios were undertaken, as listed in Table 2, including without payback, with EDP, and with UDP. The specific values of the payback functions are also given in Table 2 (first column).

The results for the simulated optimal schedule and corresponding profits are also listed in Table 2. The optimal schedules differ from one another according to not only whether or not payback occurs but also the payback pattern. This suggests that the payback phenomenon must be considered to properly solve the event scheduling problem. Furthermore, the profit is larger in the non-payback case than in either case with payback, verifying that the payback phenomenon has a negative effect on the LSE's profits due to the additional cost of the paid-back demand.

Results: Payback Effects on the Optimal Peak Rate

The effects of payback on the optimal peak rate are simulated by changing the payback duration and ratio. The payback duration is set to change from one to ten (D_PB ∈ {1, 2, …, 10}), and the range of the payback ratio is taken from [13]. Figure 4 presents the resulting optimal peak rates, which indicate that the peak rate should be set below the usual level if the payback phenomenon is expected. In addition, Figure 4 shows that ρ*_{PB,PEAK} tends to be smaller for a short payback duration than for a long one, regardless of the payback pattern. This suggests that the LSE should select a lower optimal peak rate if the payback period is expected to be short.

Figure 4 also demonstrates how the payback pattern affects the optimal peak rate. The value of ρ*_{PB,PEAK} with respect to κ_PB is represented in Figure 5 for a few values of D_PB; as Equation (24) indicates, the relationship is linear as long as the optimal event schedule remains unchanged. In practical terms, properly designing a CPP scheme by considering payback effects could lead to a significant, if not dramatic, increase in profits for the LSE.

Conclusions

A CPP scheme is a useful demand response program that enables an LSE to increase profits by controlling customers' demand at key moments. However, these profits are later reduced by the payback phenomenon. This study considered optimal strategies for designing a profit-maximizing CPP scheme taking payback into consideration. After characterizing payback through the appropriate parameters, the resulting change in optimal event scheduling was demonstrated, and the optimal peak rate under payback was analytically derived. The validity of this analysis was then verified through numerical simulations.
The results yield certain practical suggestions for designing a CPP scheme in a payback scenario. When payback occurs, it is better to set the peak rate to a lower value than would be optimal without payback. Moreover, if the paid-back demand is expected to be concentrated in the time periods soon after a critical event, the peak rate should be set at an even lower value. As long as the optimal event schedule does not change, payback results in a slight linear decrease in the optimal peak rate. However, if the schedule changes, there is a step change in the optimal peak rate. Consequently, the LSE should jointly optimize the event schedule and the peak rate.

Although the results of the proposed method are helpful for designing a CPP scheme, there remain open questions regarding their practical applications. In particular, the availability of payback parameters, which are not constant but depend on the levels of demand and price, can be challenging in the implementation of a CPP scheme. This is one reason why payback has never been considered in the operation of a real-world CPP scheme; as such, it is not obvious that the extended characterization of the payback concept with other unknown and arbitrary parameters will have meaningful implications for actual operations. Additionally, the effects of nonlinear and unpredictable behavior resulting from different compositions of customer loads need to be examined. Therefore, further empirical research will be necessary to demonstrate the practical implications and real-world effectiveness of the CPP design strategy presented here.

ρ*_{PB,PEAK}: optimal peak rate considering payback effects. D_{PB,k}: payback duration for the critical event in period k.

Figure 1. Shape of the EDP function for various values of D_{PB,k} when κ_{PB,k} = 1; the EDP functions are almost identical for D_{PB,k} greater than five.

Figure 2. (a) Demand and real-time market clearing prices (RTMCPs); (b) A simple example to show how payback affects the optimal schedule of critical events (ρ_PEAK = 44 cents/kWh, price responsiveness ε = −0.05).

Figure 3. Forecasted data for real-time market clearing price (RTMCP) and demand.

Figure 4. Simulation results of the optimal peak rate with respect to the payback duration for (a) exponentially decreasing payback (EDP); (b) uniformly distributed payback (UDP).

Figure 5. Simulation results of the optimal peak rate with respect to the payback ratio for selected values of D_PB.

The optimal peak rates determined via the simulation are clearly the extreme ones, yielding maximum profits. The existence of payback decreases profits in all cases. To emphasize the significance of this analysis, the profits resulting when … 177 … are indicated in Figure 6 (note that the functions for the two payback patterns are the same for D_PB = 1). Comparing the two profits reveals that using ρ*_{PB,PEAK} increases profits by 2.83% (from $40.307 million for ρ*_{N,PEAK} to $41.448 million for ρ*_{PB,PEAK}).

Figure 6. Profit of the load serving entity (LSE) with respect to the peak rate for the case without payback and several cases with payback.
q_{CR,k}: modified consumption in period k when a critical event is triggered.
q_{PB,k}: recovered demand due to payback in period k.
Q_{0,k}: cumulative consumption during the critical event periods starting from period k.
Q_{CUR,k}: cumulative curtailed demand for a critical event starting in period k.
Q_{PB,k}: paid-back demand for the critical event in period k.
R_k: revenue of a load serving entity in period k.
C_k: cost of a load serving entity in period k.
PI_k: profit index in period k.
PI_{N,k}: profit index in a normal situation without payback in period k.
PI_{PB,k}: profit index considering payback effects in period k.
u_k: binary event decision variable in period k.
N_CPP: maximum number of critical events.
D_CPP: duration of the critical event.

Table 1. Profits of the load serving entity (LSE) with and without payback for two event schedules in the example.

… as the payback duration increases. Nonetheless, in real situations, the payback duration is usually limited to a few time periods, and the RTMCPs around the critical event periods are likely to exceed ρ_BASE … is greater for UDP than for EDP. In the extreme situation when the RTMCPs below ρ_BASE are dominant over a long payback duration, it is possible that …

Table 2. Simulation results for the optimal schedule and the corresponding profit of the LSE (columns: optimal schedule of critical events; profit in million dollars).

… is smaller for EDP than for UDP, particularly for a long payback duration. This is because the RTMCPs in the late time periods of the payback duration are smaller than those in the early time periods, but they are all equally weighted when determining ρ*_{PB,PEAK} …
2016-04-23T08:45:58.166Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "0ffa88aaca4fa426b7109887aa415e688e3640d1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/8/10/11363/pdf?version=1444722954", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0ffa88aaca4fa426b7109887aa415e688e3640d1", "s2fieldsofstudy": [ "Engineering", "Business" ], "extfieldsofstudy": [ "Economics" ] }
259141076
pes2o/s2orc
v3-fos-license
Determinants of targeted cancer therapy use in community oncology practice: a qualitative study using the Theoretical Domains Framework and Rummler-Brache process mapping Background Precision medicine holds enormous potential to improve outcomes for cancer patients, offering improved rates of cancer control and quality of life. Not all patients who could benefit from targeted cancer therapy receive it, and some who may not benefit do receive targeted therapy. We sought to comprehensively identify determinants of targeted therapy use among community oncology programs, where most cancer patients receive their care. Methods Guided by the Theoretical Domains Framework, we conducted semi-structured interviews with 24 community cancer care providers and mapped targeted therapy delivery across 11 cancer care delivery teams using a Rummler-Brache diagram. Transcripts were coded to the framework using template analysis, and inductive coding was used to identify key behaviors. Coding was revised until a consensus was reached. Results Intention to deliver precision medicine was high across all participants interviewed, who also reported untenable knowledge demands. We identified distinctly different teams, processes, and determinants for (1) genomic test ordering and (2) delivery of targeted therapies. A key determinant of molecular testing was role alignment. The dominant expectation for oncologists to order and interpret genomic tests is at odds with their role as treatment decision-makers’ and pathologists’ typical role to stage tumors. Programs in which pathologists considered genomic test ordering as part of their staging responsibilities reported high and timely testing rates. Determinants of treatment delivery were contingent on resources and ability to offset delivery costs, which low- volume programs could not do. Rural programs faced additional treatment delivery challenges. Conclusions We identified novel determinants of targeted therapy delivery that potentially could be addressed through role re-alignment. Standardized, pathology-initiated genomic testing may prove fruitful in ensuring patients eligible for targeted therapy are identified, even if the care they need cannot be delivered at small and rural sites which may have distinct challenges in treatment delivery. Incorporating behavior specification and Rummler-Brache process mapping with determinant analysis may extend its usefulness beyond the identification of the need for contextual adaptation. Supplementary Information The online version contains supplementary material available at 10.1186/s43058-023-00441-3. 
• The first study to identify the determinants of the delivery of precision medicine in community oncology settings, where most US cancer patients receive care • Identifies role misalignment as a previously undiscovered, distinct barrier to genomic testing, creating justification for new implementation strategies to support targeted cancer therapy delivery by focusing on changes in professional roles among teams typically responsible for cancer care delivery, rather than knowledge needs of individual providers • Expands the value of existing implementation science determinant frameworks, which emphasize the contextual adaptation needs, by explicit specification of the intervention behavior through process mapping methods • Demonstrates the value of behavior specification in determinant analysis in addition to implementation strategy evaluation Background Precision medicine is the practice of tailoring treatments to individual patients by classifying individuals into subpopulations that differ in their susceptibility to disease or response to treatment [1]. The promise of precision medicine lies in its ability to guide healthcare decisions toward the most effective treatment for a given patient, while reducing the need for unnecessary therapies, side effects, and costs. The realization of precision medicine in oncology could have a substantial impact. An estimated half-million cancer patients may be eligible for guideline-recommended targeted therapies each year and could benefit from demonstrated benefits of targeted therapy, including delay in tumor progression, longer survival, more quality-adjusted life years, avoidance of noneffective treatment, and lower treatment costs [2][3][4][5][6][7][8]. The FDA has approved > 90 pharmacogenomic drugs in cancer, and the National Comprehensive Cancer Network (NCCN) recommends precision medicine not just as a general approach, but for specific treatment decisions across a number of cancers, including breast, lung, colorectal, and melanoma skin cancer, among others [9][10][11][12][13]. Despite the guideline recommendations and the immense promise, not all patients who could benefit from targeted cancer therapy receive it and some who may not benefit actually receive it [14][15][16]. Only half of white elderly women and 40% of black elderly women with non-metastatic human epidermal growth factor receptor 2 (HER2)-positive breast cancer receive appropriate monoclonal antibody therapy [17]. Only 18% of colorectal cancer patients in the last decade with Kirsten rat sarcoma viral oncogene homolog (KRAS) wild-type tumors received anti-epidermal growth factor receptor (EGFR) antibodies [18]. Less than half of non-small cell lung cancer patients eligible for EGFR inhibitors receive them [19]. Although increasing in recent years, one-quarter of advanced lung cancer patients still do not receive tyrosine kinase inhibitors (TKIs) [20]. Furthermore, some cancer patients may be treated with targeted therapy where it is not warranted [21,22]. Barriers to providers' appropriate use of targeted therapies include knowledge and skill deficits [23,24] and environmental and resource constraints, including lack of reimbursement, limited access to testing technology and treatments, and lack of time for the burdensome coordination of testing, therapy, and required follow-up [14,[25][26][27][28][29][30][31][32][33][34]. 
Technology limitations have also been recognized: test processing time often exceeds the treatment decision-making interval [35][36][37] and the specimens required for testing are difficult to obtain in some cancers [38]. Less studied are the motivational determinants that may facilitate or inhibit physician use of targeted therapy, although perceptions of limited utility and lack of patient receptivity have been noted as barriers for some genomic tests [36]. Recent policy-level changes and continuing evolution of the technology are lessening the impact of reimbursement barriers and testing accessibility [39,40]. Training programs have been put in place to address knowledge deficits [24,30]. These changes in the precision medicine landscape put into focus the need to explore the motivational barriers that oncologists and their teams may experience (e.g., beliefs about consequences and capabilities, social and professional roles and identities). Recent changes also engender the need to unpack the organizational contexts in which targeted therapy is delivered, particularly in the community oncology setting, because most cancer care in the USA is delivered in community practice. Despite this, few previous studies of precision medicine implementation have included US community-practicing oncologists among their samples [41][42][43]. The specific roles that community oncology teams play and the subsequent behaviors they perform in delivering targeted therapy are poorly understood. Furthermore, most interventions to improve precision medicine delivery have been focused on tumor boards or EPIC-based decision support tools, interventions which may have limited availability outside of academic settings [28,42,[44][45][46]. Although identifying the most salient barriers can increase the likelihood that interventions are effective and changes are sustained [47][48][49][50], no existing studies have taken a comprehensive approach, using a theoretically derived implementation science determinant framework, to identify effective intervention points for implementing targeted therapy [32,33,46]. Furthermore, existing research does not specify the behaviors and delivery processes currently in place to pinpoint what changes need to occur. We sought to use an established implementation science framework to identity actionable determinants of targeted cancer therapy use among community oncology practices and conduct generalized process mapping to identify current state processes across sites. We anticipate these results will be foundational to the development of implementation strategies to support guideline-based targeted therapy delivery in community oncology practice. Methods We conducted a modified template analysis [51] of semistructured qualitative interviews based on the Theoretical Domains Framework [52] and process mapping using the Rummler-Brache approach [53,54]. The study was conducted under a protocol approved by the University of Kansas Institutional Review Board and is reported according to Consolidated Criteria for Reporting Qualitative Research (COREQ) [55]. Oncology care providers involved in the delivery of targeted therapy and practicing in community settings were eligible to participate and could include medical oncologists, surgeons, pharmacists, pathologists, healthcare administrators, nursing staff, or other ancillary providers, as we anticipated multiple healthcare professionals to be involved in such complex care delivery. We excluded academic providers and solo providers. 
Initial efforts were to restrict participation to providers practicing in a 13-state region in the Central USA and whose institutions were willing to participate in a companion medical record abstraction study, but due to COVID-19 pandemicrelated practice disruptions, we altered the protocol [56] to expand recruitment by extending geographic reach to other US states and relaxing the requirement to participate in the medical record abstraction. We mailed invitation letters, signed by regionally prominent physicians, to all oncologists who billed Medicare in January 2020 in the 13-state region, inviting them to participate in a mixed methods study. We also targeted US pathologists and critical access hospital administrators via email and extended personal invitations to NCI Community Oncology Research Program principal investigators and administrators. Lists were obtained from the Centers for Medicare and Medicaid; Medical Marketing Service, Inc.; a National Rural Health Association Consulting Service, and directories on public websites [57,58]. Once we identified an index provider within practices, we used a snowball sampling strategy to identify other care team members and allowed the index provider to specify the roles important at his or her institution around targeted therapy. No exclusions on provider roles were applied by the study team. We used a semi-structured interview guide based on the Theoretical Domains Framework (TDF) [49,59] to identify capability, opportunity, and motivational constructs key to targeted therapy use and allowed the interviewers to tailor questions to the interviewee's role and involvement in targeted therapy delivery. The interview guide was modeled on previous determinants assessments conducted by our team [60] and relied on broad open-ended questions to identify many possible determinant domains, but encouraged probing on specific domains. To carefully and comprehensively specify the behavior, we used process mapping techniques in which we devoted interview time to detailing the targeted therapy delivery process, using a specific item to elicit process characteristics [61]. We collected or derived demographic information about sites from publicly available data sources, including the CMS Compare file, census information, Health Resources and Services Administration, the Kaiser Family Foundation, and the American Hospital Directory [62][63][64][65]. A single interview with each participant was conducted between July 2020 and May 2021. All interviews were conducted via telephone or video conference per participants' preference. Verbal consent was obtained. Participants were offered a gift card for participation. Two female, PhD-trained qualitative researchers (SDE and JVB), a sociologist, and an anthropologist with > 30 years of combined ethnographic interviewing experience, but naïve to targeted cancer therapy delivery, conducted all interviews. The use of naïve interviewers was a pragmatic decision to align the skills of the study team with the study design, but interviewer naivete was used strategically to establish the interviewee as an expert and to elicit mundane details about targeted therapy delivery by making the process "strange, " in accordance with common ethnographic interviewing techniques [66]. A single practice interview was conducted and discussed to familiarize the team with the interview guide and key concepts related to precision medicine. Participants were unknown to the interviewers prior to the study interaction. 
Interviewers had no investment or biases toward targeted cancer delivery but were motivated to identify implementation strategies potentially effective for future studies. Interviews, ranging from 27 to 63 min, were recorded and interviewers collected brief field notes on each interview to assess the saturation and identify issues for follow-up in subsequent interviews. Interview audio content was transcribed verbatim and coded in NVivo [67]. Coding consisted of assigning excerpts to defined TDF constructs described in a code book developed a priori, combined with inductive coding of (1) specific behaviors performed in the delivery of targeted therapy and (2) potentially distinguishing contextual factors informants reported. Interviewers kept memos during coding, and the study team met regularly to discuss the findings. In the initial analysis, two investigators (SDE and JVB) identified distinct behaviors and then attributed determinants to specific behaviors. After the initial coding, a third investigator experienced in the TDF framework (EM) reviewed all transcripts to ensure TDF constructs were consistently identified. Subthemes were then identified within domains and key quotes displayed. Concurrently, we summarized all process descriptions into a single Rummler-Brache diagram, also known as a swimlane diagram [53,54]. Based on descriptions gathered from participants, we identified the roles responsible for each part of the targeted therapy delivery process, represented by a single "lane" in the process map. We then summarized the targeted cancer therapy delivery process across all cancer programs, with arrows indicating handoffs across roles and diamonds representing decision points. We used distinct colors to represent contextual differences in processes or teams across sites. As part of iterative analysis, we identified pathologists as having important roles in targeted therapy delivery and sought additional input from pathologists to reach saturation. Across both the determinant coding and swimlane diagram, we noted contextual determinants reported to impact testing and treatment behaviors and summarized these factors by major domains. Interviews and results were not returned to study participants for review but were shared at multiple time points with other community oncologists participating in a Cancer Center Disease Working Group. Feedback was used to shape interpretation and validate findings. Participant characteristics Broad notifications about the study were pushed to 22,229 medical oncologists, pathologists, rural healthcare administrators, and research network personnel, which represented approximately 5013 non-unique practice contacts (Fig. 1). Across these notifications, 108 providers indicated interest of which 70 were considered eligible to participate. From this group, 24 individuals agreed to and completed the individual interview (range of 1-4 respondents/site). Individual participants represented a variety of roles in their cancer programs, including medical oncologists, pathologists, surgeons, pharmacists, healthcare executives, advanced practice nurses, and oncology nursing staff (Table 1). Ten participants held some type of leadership role within their oncology program or organization. Two participants had specialized training in genomics. Participants represented 11 community oncology programs, ranging in size, geography, and location. Table 2 describes the characteristics of cancer programs represented in the sample. 
Behavior specification In eliciting the nature of the care delivery behavior from participants, we recognized two distinct behaviors across sites-testing and treatment decision-making-which were essential to targeted cancer therapy delivery. While successful implementation of targeted therapy requires both behaviors, each behavior consists of its own set of steps, and the behaviors are typically performed by different members of the healthcare team. The importance of the distinction was made more evident after we mapped the dual-behavior process onto a swimlane diagram ( Fig. 2), as the inter-team interactions involved in each behavior were different: pathology-oncology teams interact in testing and pharmacy-oncology teams interact in treatment. As a result of process mapping, the study team recognized that pathologists were under-represented in the interview sample, thus additional pathologists were recruited until saturation was reached. Testing processes were characterized by a bottleneck at some sites, leading to delays in treatment initiation, additional work by nursing staff, and anxiety for patients and their providers. At most sites, oncologists took a lead role in ordering molecular and genomic tests. Patients presented for treatment decisions after tumor biopsy and then oncologists ordered necessary tests. This sequence of events resulted in two potential treatment scenarios: either prioritizing expediency of treatment by proceeding with non-targeted therapy while awaiting testing results or prioritizing comprehensiveness by delaying treatment while awaiting final test results. The nursing staff at some sites had the role of managing the test results and coordinating the timing of patient scheduling to align. At other sites, the pathology team initiated molecular and genomic test orders, negotiated test reimbursement, managed the results, and shortened the window of time from diagnosis to treatment decision-making. Treatment processes differed depending on whether the prescribed targeted therapy was delivered orally or infused. Importantly, because payor reimbursement policies differ, and costs and economic benefits accrue differently based on the mode of delivery, different care delivery teams within sites and different care delivery processes were used to execute targeted therapy delivery. Some community oncology programs did not have the organizational capacity to deliver both oral and infusion therapy. For example, some sites had nursing and administrative processes in place to deliver infused drugs but were constrained in their ability to access some treatments or to manage complicated payment programs to provide access to oral therapies for un-or underinsured patients. The size of the oncology program seemed to create variations in roles (Table 3). For example, nurse practitioners and pharmacists were involved in the delivery and management of targeted therapy at larger organizations. There was also variation among cancer programs in engagement with external organizations who could manage drug acquisition and reimbursement program management. Behavioral determinants Variation in the successful implementation of targeted therapy across sites was reported. Because we identified different determinants and actors for testing and treatment, which have not been distinguished in prior determinant studies, we present motivation, opportunity, and capability determinants for each behavior separately. 
Additional file 1: Tables S1, S2, and S3 provide exemplary quotes across each TDF domain. Motivational domains The intention to perform molecular and genomic testing and to provide targeted therapy to eligible patients was prevalent and strong across all interviews (Additional file 1: Table S1). Providers acknowledged the high demand from providers and patients for testing and characterized targeted therapy as "the way of the future" [rural oncologist]. For testing, providers perceived mostly positive consequences, including providing something of benefit to patients and to themselves. They believed molecular and genomic testing (and College of American Pathologists (CAP) protocols) helped them to meet professional standards and avoid audit failures. Although participants expressed concerns about patients' potential out-of-pocket costs, which created negative reinforcement, some perceived societal benefit as they considered the (low) cost of testing relative to the (high) cost of unnecessary therapy. We Others limited cost concerns to tests that have no actionable treatments; still, others acknowledged broadening reimbursement for testing. Beliefs about the limited capability to acquire sufficient tissue for testing were widely acknowledged, and enthusiasm was dulled by the rarity of actionable mutations. We found wide variation in role assignment for genomic test ordering across community oncology programs that seemed to impact guideline adherence and timeliness. In addition, these roles were in flux. Both oncologists and pathologists noted that their roles were changing because of targeted therapy, sometimes creating communication failures across teams that impact task proficiency. Some programs relied on oncologists to order somatic tests, whereas other programs assigned the responsibility to the pathologist. Pathologists typically welcomed this role in subtyping tumors, acknowledging that genomic testing aligns with their existing responsibility to stage tumors. Ultimately, we are not only the stewards of the tissue but we're also the owners of the classifications and it is not enough anymore to be good at recognizing things under a microscope. We do understand what is driving these diseases and the relevance and mechanisms that are altered or disorganized with each one of these translocations with the exception of maybe somebody doing clinicals and ethics, we are the physicians that are closest to what is happening and most pathologists have a research background and some kind of familiarity with molecular and genetics…So I think we're going to be the driving force. [Non-rural Pathologist] Some sites created new roles for managing the multitude of reference lab orders, tracking test results, ensuring tests were incorporated into the electronic medical record and made available at the point of treatment decision-making, but other sites relied on physicians or other clinical staff to be responsible for this work. Consequently, they experienced delays in obtaining and reviewing test results, which created anxiety for both patients and providers. Many participants were highly motivated to identify strategies to facilitate interdisciplinary communication, and at least one program did so by creating new roles to manage inter-team communication needs, potentially alleviating providers' fears and frustration surrounding test results interpretation. 
…they came up with this idea of having a pathology liaison attending the hematology oncology meetings and because I had a background in molecular research, I happened to be the one. [Non-rural pathologist] Some challenges to role realignment were acknowledged: inertia and industry marketing of tests to physicians. In addition, participants' beliefs about their capabilities determined patterns of use. Smaller sites expressed more concerns in their ability to deliver targeted therapy, acknowledging institutional costs and staffing limitations. Smaller cancer programs were cognizant of the volume required to cover fixed costs of testing, in contrast to providers at larger programs: Rural providers saw the great potential advantage of liquid biopsy over tissue testing to address their unique delivery needs. Providers had high intentions and motivation to use targeted treatment. Providers saw mostly positive benefits of targeted treatment, believing it provides more treatment options, higher response rate, better outcomes, more convenient delivery modes, fewer toxicities, and, consequently, fewer medication side effects than standard therapy. Observing these treatment benefits contributed strongly to adoption among participants. However, targeted therapies were not seen as exclusively beneficial. The high costs of the therapies were of concern (although participants acknowledged generous industry subsidies) as were the side effects patients experience. Although targeted therapies were perceived to have a relative advantage over standard therapy, some questioned whether the high costs were worth the benefits, as patients may have better survival or other outcomes, but remain without a cure. Furthermore, very few patients were eligible for the treatments. Echoing these concerns and the infancy of the field, several participants acknowledged that the full promise of precision oncology has not yet been realized. Role conflicts were also apparent with targeted treatment delivery. Regarding their professional role and identity, community oncologists saw their role as changing dramatically with a greater need to subspecialize, a difficult transition for those established in practice. I'm a general medical oncologist. I probably know some areas better than others but I treat technically anybody who walks in the door with any type of cancer. [Non-rural Oncologist] [I]t's very, very hard to be a general oncologist anymore. The field has exploded with each cancer almost a unique specialty of its own so increasingly we will see doctors who only see one type of cancer because the field keeps changing. [Rural Oncologist] Beyond treatment decision-making, care delivery also involved monitoring side effects, treatment adherence, and disease progression, and this role was filled by different professions depending on the size and capacity of each cancer program. Capability domains For both testing and treatment, the pace of knowledge creation surpassed all providers' ability to keep up, particularly because community oncologists practice as generalists, creating the need to keep abreast of advances in all cancer types (Additional file 1: Table S2 for themes and quotes). Because they so rarely see any one patient eligible for targeted therapy in their practice, maintaining current knowledge is difficult. 
Furthermore, biomarker discovery often outpaces actionable recommendations; thus, much information directed to them in the literature and in testing reports was not relevant for treatment, and sometimes required expertise they do not have to interpret. Testing required special communication skills, both with patients and with colleagues from different disciplines, to alleviate the perceived untenable knowledge requirements. We have a great working relationship with our pathology group. They're very open to …change … based on NCCN guidelines and recommendations as to what they reflexively test for, so…docs aren't the ones having to know all this stuff and constantly be ready to order ALK and ROS and EGFR. Some things now, if it's a lung cancer patient, it reflexively is being performed and sent out by pathology. That's also helped speed up the process…So reflexive testing and a relationship with your Pathology Department I think is key so that you're getting those things. [Non-rural Nurse] NCCN guidelines were frequently mentioned across interviews as a strategy to promote proficiency, both for testing and for treatment. CAP protocols regulated behavior for many pathologists; pharmacists found board training materials important for informing treatment decisions. However useful, users saw opportunities to improve these materials to facilitate implementation, with disease-specific (rather than biomarker-specific) protocols for testing and standardized results reporting as additional needs. Although professional society guidance supported first-line testing and treatment decisions, more standardization of testing protocols, both institutionally, and within guidelines to accommodate second and subsequent treatment decisions were desired. Some cancer programs used pathways which embed guidelineconcordant precision oncology into the electronic health record; others relied on tumor boards for enhancing knowledge deficits, both of which targeted physicians. Notably, very few participants reported auditing their testing or treatment performance, and the recognition of its absence was described only in the context of testing. Opportunity domains The organizational and larger policy environments were not perceived as supportive of testing or therapy, underscoring perceived contextual differences in community oncology programs (Additional file 1: Table S3). For treatment, the cost of targeted therapy created individual and organizational work. Providers were aware of the substantial treatment costs and co-payments many patients face. They perceived the pharmaceutical assistance programs to be generous in providing drug assistance for patients who needed it, but participants described staff with roles and considerable responsibilities primarily dedicated to managing drug acquisition, rather than patient care. Only FDA-approved indications were reimbursed by the pharmaceutical assistance programs and payors, and because of the high cost, made off-label use costly to the organizations and to providers personally. Rural and smaller cancer programs had particular challenges in acquiring targeted therapies for their patients (Table 3). They perceived greater pre-authorization burdens, more limitations from their drug wholesalers, fewer reserves to absorb non-reimbursed care, and fewer staff with expertise in targeted therapy. Larger cancer programs had their own specialty pharmacies on which they could draw for drug acquisition. 
Navigation programs were seen as important to help patients with some of these challenges, but it was acknowledged that these resources were only available for certain cancer types, leaving other cancer patients' needs unmet. The NCCN guidelines were not only used to regulate behavior, but also as a resource to understand what would be reimbursed. Surveillance of therapy, once it was acquired, was complicated for those on oral agents, as they had less scheduled interaction with nursing or pharmacy staff than those receiving infused therapies, creating concerns about adherence and side effects. For testing, reimbursement was also a concern, dictating not only which test, but which testing platform could be used, and when testing could be ordered. Providers acknowledged that only actionable biomarkers were eligible for reimbursement. However, testing faced additional organizational constraints. Few community cancer programs had the capacity for in-house genomic testing; thus, most tests were sent out to reference laboratories, which were perceived by some to accrue a greater cost burden to them. Clinical trial enrollment and reflex testing (pathologist-initiated testing) were strategies used by some sites to mitigate reimbursement challenges and testing delays, but preauthorization requirements lessened the effectiveness of reflex testing. Molecular tumor boards were seen as very useful in identifying what to test and how to interpret tests. However molecular expertise was often lacking at community cancer programs, so industry resources were welcomed. [Testing company] will help any group coordinate a molecular tumor board and be on the call and help review those results. At any time, certainly we could continue our molecular tumor board without them but it's …been great for us all to learn together and for their scientists sometimes to hear from the clinician perspective. [Non-rural Nurse] At least one site not only ran molecular tumor boards, but also integrated molecular specialists into existing disease-specific tumor boards. When I came [here], I tried to establish a molecular committee and it was really hard so instead of that we started attending the Hem Onc meetings. [Nonrural pathologist] This interaction in the disease-specific tumor boards was influential in creating shared understanding of the tests and treatments appropriate for patient care. Discussion We interviewed a wide range of cancer care providers involved in the delivery of targeted cancer therapy in diverse community-based cancer programs, including those not typically included in precision oncology implementation research. Like previous studies of academic and international programs [68,69], we found similar capability and opportunity constraints in community oncology programs. However, our study extends the existing literature by highlighting a larger range of motivational determinants that can facilitate but also slow implementation of targeted therapy and, potentially, other healthcare innovations. Leveraging these determinants may lessen large institutional investments in clinical decision support currently considered necessary to meet perceived physician capability deficits. Across the sample, there was a steadfast intention to provide targeted therapy to cancer patients eligible for it, a finding recently replicated [69]. 
Nonetheless, our study documents differences in other motivational domains that may be important, namely concerns about the high cost-benefit ratio of treatment and role identification of the professionals involved in it. Testing is perceived to have societal benefits, allowing for stewardship of costly treatments, in addition to patient benefit. In contrast, the benefits of targeted therapies are not as universally regarded. Although they vastly improve outcomes for the few patients eligible for them, they do not cure disease, and costs per dose and per course are perceived to be high for both patients and for the institutions delivering them. These beliefs could influence perceptions of who should bear the cost of organizing coverage. Although currently pharmaceutical company treatment stipends are seen as offering the uninsured wider access to targeted therapy, they are cumbersome and contribute to the additional uncompensated work oncology programs must provide. Because pharmaceutical companies realize all the benefits by increasing their market share through these programs, they could potentially balance the lack of societal benefit by standardizing copayment programs across companies, making eligibility criteria explicit, and broadening qualifications. To our knowledge, this study is the first to illuminate ambiguity about who should initiate genomic and molecular testing for cancer. Our study suggests that targeted therapy delivery is difficult because it requires incorporating the new task of genomic test ordering and interpretation into the work scope of professionals typically responsible for treatment decision-making, delivery, and monitoring [70]. Most cancer programs relied on oncologists to order somatic tests, the purpose of which is to fully stage the tumor to ensure treatment is appropriate for the patient. However, because the oncologists' role is focused on treatment, the role of staging the tumor may be at odds with their typical responsibility, whereas for pathologists, definitively staging a tumor falls within existing responsibilities [71]. It also aligns with their need to allocate scarce tissue optimally, making pathologist-centered implementation strategies very promising. Some sites had instituted pathologist-initiated test ordering for guideline-recommended tests. Reflex testing, the automatic ordering of one or more secondary tests based on preset criteria applied to the initial test, has been demonstrated to have numerous benefits, including increasing testing rates [72] and identification of mutations or other molecular abnormalities [73,74]; reducing unnecessary testing [75,76], unnecessary care [77], disparities in care [78], and time to treatment [72,79]; and improving outcomes [80] and healthcare operations [81]. It has been shown to be cost-effective [82] and to reduce costs [83,84], mainly by focusing on testing for approved and clinically actionable molecular alterations. Future studies should consider the effectiveness of reflex testing to reduce role dissonance among the cancer care delivery team as well as its impact on patient outcomes. Our findings further extend our understanding of motivational determinants in that few sites reported monitoring testing and targeted therapy use, making it unclear whether their efforts were successful or equitable. Most practices did not have the necessary measurement tools, staffing, or infrastructure to monitor their own performance and thus may not have the performance knowledge needed to regulate their behavior.
A limited number of measures related to genomic or molecular testing and treatment are available, required by accreditation agencies, and routinely included in cancer registries [85,86]; thus, support to develop and implement such monitoring at an institutional level may be needed. Sites used known strategies for improving individual knowledge and treatment decision support but lacked inter-team processes to standardize testing across the eligible patient population. Similar to previous reports, community oncology program participants acknowledged knowledge and skill deficits in testing and treatment, especially given the rapid developments in the field [69,87]. Awareness of targeted testing and treatment among our sample appeared high, but participants were less confident in their "how-to knowledge" or their ability to apply appropriate knowledge about testing and treatment options in practice [87]. However, rather than advocate for more education to fill knowledge gaps, an implementation strategy to which technology developers often default, participants in this study suggested institutional-level standardization of testing aligned with clinical practice guidelines, and results reporting and treatment education which prioritizes actionable mutations (i.e., only those mutations associated with existing evidence-based treatments) to overcome capability barriers. Others have characterized the actionability gaps in precision medicine [69] and called for research to enhance clinical utility [88]. Our findings suggest that treatment decision-makers prefer prioritization of actionable mutations in results reporting, consistent with existing reporting guidelines [89,90]. Thus, opportunities to improve result communication, consistent with previous research [91][92][93], remain. Also similar to prior research, our findings highlight significant opportunity barriers to targeted therapy use, namely the high cost of both testing and treatment, that have long been perceived as implementation barriers [41,94,95]. However, our findings also reflect recent transitions in reimbursement which decrease patients' out-of-pocket expenses for testing [39,71,96], and assign responsibility for billing to pathology laboratories, shifting incentives for testing from physicians to hospital cost centers [39], and creating new organizational landscapes. In addition, we found that most community cancer programs have made organizational and personnel changes to ensure the delivery of costly targeted therapy by repurposing highly skilled oncology nurses and pharmacists to manage complicated and time-consuming payor and industry requirements. Some cancer programs designed new organizational units to efficiently manage genomic test procurement, tracking, and reporting. Unlike organizational changes to testing management, whose efficiencies may benefit the organization, the addition of new reimbursement and treatment acquisition roles required to deliver targeted therapy to un- and under-insured patients, are not costs that can be recouped. Aside from potential reputational prestige, healthcare organizations bear the cost of these activities with little direct benefit. The fixed cost of these new roles, no matter how streamlined, can only be borne by practices with high volume. Likewise, specialized services, such as specialty pharmacies, which can be revenue-generating for an organization, are not feasible among low-volume sites, potentially creating disparities among smaller community practices.
Some rural hospitals have created infusion centers to build sustainable revenue and may be threatened by targeted therapies which can be administered orally, bypassing their billable infusion services. Both smaller and rural programs contrasted their capabilities and the disproportionate impact of organizational and policy decisions. Because genomic testing occurs in reference laboratories, outside of most community practices, we noted fewer testing differences than treatment differences among community practices, suggesting that test-ordering interventions may be more feasible than treatment interventions which will require substantial addition of resources to address. Furthermore, ensuring appropriate testing at smaller and rural community practices could potentially facilitate care when patients are referred for treatment at larger community or academic practices. Nonetheless, the evaluation of precision oncology implementation strategies should assess differential effectiveness among small and rural cancer programs to ensure benefits are realized equitably. Finally, our description of the process from multiple team members' perspectives, specification of testing and treatment as two distinct behaviors, and comprehensive elicitation of all motivational domains adds new understanding of the strong facilitators and unique barriers community providers experience. In particular, our identification of how determinants of testing behavior differ from the determinants of treatment behavior is a unique contribution not only to understanding precision medicine implementation, but also to the field of Implementation Science. By contrasting the determinants of testing with those of treatment, we uncovered unique patterns of determinants and opportunities to respond to areas of significant delay. Others have distinguished testing as a process outcome separate from precision medicine application [68]. However, specifying testing and treatment as two separate behaviors, each with their own determinants, allows us to consider the different teams involved and connect efforts currently siloed in the fields of pathology and oncology. Although careful specification of implementation strategies is widely encouraged across the field [97][98][99], less emphasis has been placed on the careful specification of intervention behaviors in assessing the behavioral determinants which the implementation strategies are designed to overcome. Instead, most frameworks emphasize understanding the contexts in which innovations are implemented. For example, the Consolidated Framework for Implementation Research (CFIR) emphasizes adapting for context as the key to successful implementation, rather than a thorough specification of the behavior to be changed [100]. CFIR is not unique; the relevance of context is pervasive throughout implementation determinant frameworks [101]. In a survey validation of domains identified in the TDF, the Nature of the Behavior construct was dropped from the Framework as it aligned statistically as a separate task, apart from other determinants [49]. Although Cane, O'Connor, and Michie adamantly emphasized that understanding the nature of behaviors is key to analyzing implementation and other behavior change, they removed the construct from the TDF. Instead, they included behavior specification as one of the 8 steps in intervention design in their complementary behavior change wheel (BCW) approach [49]. 
Influenced by the BCW in a previous study in which we identified a promising implementation strategy [102,103], we have subsequently used careful specification of complex cancer care delivery behaviors to uncover previously unreported determinants [104], including in this study. Our use of process mapping, particularly the swimlane diagram [53,54], as a tool to specify the behavior may be a unique contribution but should be tested as a potentially fruitful addition to determinant analysis. Limitations Our use of a qualitative study design over a quantitative design allowed us to identify new motivational domains that are a key barrier to genomic testing. However, the initial narrow geographic focus of recruitment and requirement to share quality measures, as well as the use of snowball sampling and the small sample inherent to the qualitative study design, hold the potential to limit the generalizability of our findings. Secondly, although we presented our findings to community oncologists and shaped our interpretation by their reactions, we did not formally conduct member checking [105] to ensure the credibility of results with participants in this study. Subsequent evidence suggests that the threat to the validity of these design decisions may be low. We subsequently conducted a national survey of US pathologists (unpublished) which confirmed barriers to genomic testing and preferred solutions similar to those reported in this study. A recent study of oncology care teams [69] conducted in Australia confirms our findings of high motivation to use targeted therapy in the current era, suggesting differences in motivation between our study and earlier studies may reflect changes in trends over time, rather than groups of providers who hold discordant views. Our study was designed to elicit barriers and best practices related to somatic alterations in tumor tissue. It was not intended to elicit barriers to genetic testing for inherited risk. Hereditary testing typically informs a patient's prognosis, or risk of disease, whereas somatic alterations arising in the tumor can determine whether a treatment will be effective or not. Although some hereditary testing has received FDA approval for treatment decisions, we did not focus on germline testing. We understood physicians to perceive these two types of tests to have different utilities. But because there remains confusion between prognostic and predictive testing and blurring in FDA-approved uses, additional research to assess understanding of, and concerns about, these two types of tests is warranted. Finally, our study was designed to comprehensively elicit a broad range of barriers and best practices but was not designed to draw comparisons between urban and rural programs or large and small programs. Thus, future research should validate the differences we observed among a larger sample of programs. Similarly, the salience of each construct was not evaluated. Future surveys using representative sampling could narrow these constructs to those deemed most important to community oncologists, but implementation strategies matched to identified determinants should be compared in prospective trials. Conclusions Cancer care providers view precision oncology as the wave of the future, but our study identifies several motivational challenges that could potentially be addressed through role re-alignment. In addition, opportunity determinants may differentially impact small and rural sites.
Standardized pathology-initiated genomic testing may prove fruitful in ensuring patients eligible for targeted therapy are identified, even if the care they need cannot be delivered at these sites. Finally, behavior specification may need to be explicitly and routinely included in the determinant analysis to identify the most promising implementation strategies. Additional file 1: Table S1. Motivation Determinants for Testing and Treatment. Table S2. Capability Determinants of Testing and Treatment. Table S3. Opportunity Determinants for Testing and Treatment.
Use of Artificial Intelligence-Based Strategies for Assessing Suicidal Behavior and Mental Illness: A Literature Review Mental illness leading to suicide attempts is prevalent in a large portion of the population especially in low and middle-income nations. There remains a significant social stigma associated with mental illness that can lead to stigmatization of patients. Hence, patients are reluctant to communicate their problems to health care providers. Physicians have difficulty in timely identification of patients at risk for suicide. Novel and rigorously designed strategies are needed to determine the population at risk for suicide. This would be the first step in overcoming the multitude of barriers in the management of mental illness. Clinical tools and the use of electronic medical records (EMR) are time intensive. Recently, several artificial intelligence (AI)-based predictive technologies have gained momentum. The aim of this review is to summarize the recent advances in this landscape. Introduction And Background At any point in time, a large proportion of the population is affected with mental illness. As a result, prevention and timely management of mental illness has recently become a global health priority [1]. Mental illnesses and suicide attempts are becoming a huge health burden worldwide [1]. According to the World Health Organization (WHO), in 2016 the suicide rate was estimated to be 10.6 per 100,000 population, and the majority (80% of these suicides) occurred in low-and middle-income nations [2]. Recently, a United States (US) report mentioned suicide as the tenth most common cause of mortality in adults [3]. In 2019 in the US alone, approximately 1.38 million suicide attempts, and 47,511 deaths were reported [3]. Mental illness is caused by an interplay of various risk factors. The combination of the absence of appropriate medical treatments and the lack of adequate family support, frequently aggravates mental illness into suicidal behavior [4]. Individuals at risk of suicide frequently do not communicate their problems or challenges to their physicians nor their communities because of generalized societal disapproval and stigma, and a history of forced medical treatments [4]. In addition, individuals with a psychiatric illness (including a large proportion of individuals who ultimately commit suicide) have poor insight into their mental state, and thus do not self-identify as being in any kind of danger [4]. Both these problems (lack of reporting and the poor insight) are further exacerbated by the difficulty that physicians and other providers have with the timely identification of people at risk for suicidality [4]. Therefore, it is critical that physicians are able to conduct suicide risk assessments on high-risk patients. These assessments themselves are challenging because they depend on a patient's location, the availability and access to health care, and the patient's overall suicide plan (and its vocalization) [5]. The recent coronavirus disease 2019 (COVID- 19) pandemic has only exacerbated this suicide risk, especially in the elderly [6]. Based on recent literature, two major factors for this increased suicidality in the elderly are, the self-isolation required with COVID-19 and the loss of connections to family members and the outside world in general [6]. Existing clinical tools used for suicide risk assessment tend to be time-intensive, cost-prohibitive, and usually need practitioner guidance for administration [7]. 
Novel and rigorously designed strategies are required to determine the risk of suicide, and to successfully overcome economic, clinical, and infrastructure barriers. These are starting to include artificial intelligence (AI)-based technologies [8]. Responding to these requirements for more practical mental status evaluations, various healthcare technologies and digital applications have recently started gaining popularity [9]. Previous studies had largely focused on the adoption of electronic medical records (EMR) to diagnose the mental well-being of an individual. A major limitation with this method is that it is less accurate and has decreased efficiency, compared to novel digital diagnostic tools. Thus, a computerized algorithm (if available) would be ideal to assess mental health status based on the clinical history and electronic health records (EHRs) of patients. This algorithm would ideally be able to classify a patient's risk simply based on their symptom severity [9]. It is hoped that with the employment of AI and machine learning (ML), new prospects will be available to guide early suicide risk prediction, classify mental health status, and improve suicide preventive interventions [10]. Several AI-based prediction technologies are already gaining popularity in medicine. These include the detection of medical errors, enhanced patient safety parameters, and the assessment of chronic conditions [8]. We believe that the use of AI in mental health will be welcomed by practitioners and patients alike. The main aim of this systematic review is to characterize and then summarize the recent advances in AI [both with machine learning (ML) and natural language processing (NLP)], specifically for suicidal behavior evaluation and mental disorders diagnosis. The techniques and tools that are currently being used in mental health practices will be defined in methodological and technical terms. These include techniques used for diagnosis and prognosis, treatment adherence, adverse events, risk factors, and finally, the impact of psychotherapy. We will elaborate on the AI techniques used for the assessment of suicidal behavior and mental health. We also intend to summarize the successful use of AI tools in mental health settings such as diagnosis, suicide prediction, and identification of suicide risk factors. Search Strategy A computerized literature search was performed in MEDLINE (PubMed), the Cochrane Library, and Google Scholar databases from 2010 to December 2021. The literature search included the following keywords: Artificial intelligence, suicide, behavior, natural language processing, machine learning, psychiatry, suicidal thoughts, mental health, and mental disorder. Inclusion Criteria Based on the availability of the full-text articles, the relevant studies were reviewed to ensure that they met the inclusion criteria as follows: 1) Studies published in the English language. 2) Original journal research articles limited to randomized controlled trials (RCTs) and reviews. 3) Studies including AI-driven models with ML and NLP methodologies. 4) Studies including patients with risk of suicidal behavior, suicide attempts, suicide ideation, and suicide-related death. 5) Studies with AI-based technology for evaluation of mental health status. Exclusion Criteria 1) Meta-analyses, editorials, letters, abstracts, comments, and book chapters were excluded. 2) Studies with improper randomization protocols (improper allocation to study groups). 3) Studies published in languages other than English (n=4).
4) Studies where the mental health status was not clearly demarcated or those that did not have essential data (n=30). Study Selection The relevant studies were selected in two stages after the search strategy was initiated. First, the authors extracted the required data based on the eligibility criteria (as described above). All available titles and abstracts were identified and read to ensure that the included studies met the inclusion criteria. The full-text articles were then obtained and read in the second stage. Science Citation Index (http://www.isinet.com) was also used to find papers that had cited these articles, as well as to find potentially relevant articles for a secondary review. Finally, based on the inclusion criteria, studies were included in this review, as detailed in Figure 1. Data Extraction and Quality Assessment Studies that fulfilled the inclusion criteria were processed for study details and outcome data. The primary focus of the review was the assessment of suicide, and the role of AI tools in assessing suicidal behavior and mental status. Each study was analyzed, and the information was extracted as highlighted in Table 1. The process of including studies was characterized into different categories such as name of author, year, study design, AI tools, location of study, and study findings. Selection of Studies The process of retrieving and screening the studies which were included in this systematic review is shown in Figure 1. After an initial search, a total of 212 articles were identified. After removal of duplicates (77) there were 135 studies of interest remaining. These were screened by study titles and abstracts and only 76 studies were found to be randomized controlled trials (RCTs), reviews, and scientific reviews. Others (59) were meta-analyses, editorials, letters, abstracts, or comments. These 76 studies were critically evaluated and some of them had improper randomization protocols (improper allocation to study groups) whereby another 29 were excluded. This left 47 full-text articles that were reviewed in entirety for eligibility per inclusion criteria. Studies published in languages other than English (n=4) and studies where the mental health status was not clearly demarcated or those that did not have essential data (n=30) were excluded. Thus 13 studies which met all the inclusion criteria, were included in this review. In these included studies, the different categories of participants included were: 1) Patients whose electronic health record data was available on authentic databases with their overall medical records. 2) Self-reported data obtained from social media platforms such as Twitter and Facebook. 3) Patients who were admitted to psychiatry departments. Population The collected data represents populations from various locations including the USA, Madrid, Southern California, Colorado, Washington, Canada, Boston, China, England, Korea, and London. The common features mentioned in majority of the studies were suicides, depression, psychiatric disorders, mood disorders, schizophrenia, schizoaffective disorder, and bipolar disorder. Clinical Outcomes The effectiveness of AI techniques was initially recognized by Cohen et al. who reported that ML methods can improve mental health outcomes during therapy sessions [13]. As mentioned by Cook et al. [11], NLP and ML helped decipher the overall mood and mental status of patients. 
An algorithm based on dual model of NLP and hybrid ML resulted in excellent outcomes to extract suicidal behavior factors, and clinical datasets from a psychiatric database [14]. These combinations of AI techniques were successful in determining the suicide risk and the psychological distress seen among adult psychiatric inpatients when they are discharged from the hospital. Similar results were shown in three of the studies included [9,12,15]. Most studies included in this review demonstrate improved performance with an AI-based approach compared to previously used traditional models. A study by Corke et al. (2021) mentioned that by increasing the number of suicide risk factors, ML can improve the performance of suicide risk prediction. However, its superiority over other methods has yet to be proven [17]. AI learning models had an overall agreement of 83.5% with the traditional models with ML displaying significantly better results in identifying self-harm visits [18]. When discussing recognition of risk of suicides based on information available on social media platforms, two studies mention the potential use of AI in correctly identifying this suicide risk. First, the Suicide Artificial Intelligence Prediction Heuristic (SAIPH) algorithm was able to accurately predict future suicidal ideation [19]. In the second study, similar results were demonstrated after adopting "The tree hole action" algorithm [20], which eventually played a significant role in preventing suicides. Graham et al. [16] report that ML and NLP prove to be useful in classification of various mental health disorders such as schizophrenia, depression, and suicidal ideation. Similarly, Gong et al. also observed better performance with an AI-based approach in assessing the patterns of depression compared to usual care [21]. Lastly, one study found that NLP was more effective in predicting suicidal behavior in pregnant women specifically [22]. Overall, most studies demonstrated higher sensitivity, specificity, and positive predictive values (PPV) with AI-based techniques when compared to conventional tools. AI-Driven Prediction Despite the promising results of this review, the utility of identification of suicidal attempts is limited because of only modest sensitivity and low PPV. This is because of the reduced incidence of actual suicide attempts compared to the prevalence of suicidal ideation [23]. However, despite the sensitivity issue AIbased techniques should still be helpful, as EMRs in their current form, will continue to remain minimally predictive. Suicidal Ideation and Risk of Suicides Numerous studies have identified the role and importance of AI in predicting the risk of suicide attempts among adolescents. In a study, Jung et al. applied ML-based approach to 60,000 Korean adolescents determining the suicide risk according to previous suicide attempts and suicidal ideation. They analyzed 26 factors predicting suicide risk, and five different models including support vector machine, artificial neural network (ANN), logistic regression (LR), extreme gradient boosting, and random forest, all demonstrating an accuracy between 77.5% and 79% [24]. A study by Corke et al. (2021) mentioned that by increasing the number of assessed risk factors for suicide, ML can improve the performance of suicide risk prediction models. Its superiority over other methods, however, has yet to be proven. 
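The point made above about modest sensitivity and low positive predictive value follows directly from Bayes' rule when the predicted outcome is rare. The following is a minimal illustrative sketch, not taken from any of the reviewed models; the sensitivity, specificity, and prevalence values are hypothetical placeholders chosen only to show how PPV collapses as the base rate of the outcome (e.g., an actual suicide attempt rather than ideation) falls.

```python
# Illustrative only: how a rare outcome depresses PPV even for a "good" classifier.
# All numbers below are hypothetical placeholders, not values from the reviewed studies.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    sens, spec = 0.80, 0.90  # hypothetical screening performance
    for prev in (0.20, 0.05, 0.01, 0.001):  # from common ideation to rare attempts
        print(f"prevalence={prev:>6.3f}  PPV={ppv(sens, spec, prev):.3f}")
```

With these placeholder numbers, PPV drops from about 0.67 at a 20% prevalence to under 1% at a 0.1% prevalence, which is the arithmetic behind the limited utility noted above.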
The risk factors for determining suicidal ideation include parameters such as the history of continuous depressed mood, stress awareness, alcohol abuse, educational status, unmet medical needs, socio-economic background, and severity of depression. Data has demonstrated that ML models (such as LogitBoost, and ANN) were more effective than the traditional LR model (0.867) in assessing the risk of suicide [17]. Another study by Yang et al. (2021) investigated the risk of suicide among users of the website "Zou Fan Tree Hole" which was engaged in providing suicide crisis intervention to people at high risk for suicide (level 6-10). The Tree Hole Intelligent Agent is an Artificial Intelligence (AI) Program that eliminates a large portion of superfluous information and allows easy interpretation of data contained on the website, Tree-Hole. "Tree Hole Action" project yielded better results in terms of suicide risk monitoring and intervention for online users of this website. This is an example of AI, social forces, and mental health services working in unison to provide the required support to people at the highest risk of suicide [20]. AI technology is currently showing immense utility when it comes to its implementation for predicting the risk of suicidal behavior in pregnant women. Recently the American College of Obstetrics and Gynecologists (ACOG) advised that physicians should screen patients for depressive symptoms during their perinatal period [25]. Zhong et al. (2019) reported that by mining the disorganized clinical notes, NLP was able to predict suicidal behavior successfully. NLP was found to be 11-times more effective in predicting suicide risk in pregnant women [22]. Hence, the implementation of an AI algorithm in EMR systems can help improve the ability of correctly identifying (and hopefully decreasing) suicidal behavior [25]. Prior studies have demonstrated that conventional methods of predicting suicides based on written questionnaires, surveys, or scales are insufficiently accurate, time-sensitive, and require active participation of the respondent [26]. AI technology bypasses these limitations and so is more likely to help people receive prompt support and earlier treatment. Mental Illness: Use of AI Prediction in Mood Disturbances and Depressive Symptoms Current evidence suggests that more than 80% of individuals who die from suicide suffer from some form of mental illness [23]. A study by Cook et al. (2016) demonstrated that NLP-based models perform better at assessing general mood. These methods also have better predictive value when predicting suicide risk and psychological distress. ML proved effective with a higher sensitivity and specificity in analyzing the mental state compared to when this is assessed by general questions or open-ended texts obtained directly from the patients. A study by Cohen et al. describes that during therapy sessions, ML methods can help improve mental health outcomes by analyzing language samples and voice recordings [13]. Although structured knowledge requires considerably longer time involvement from the respondent, when exploited by computational analytics like NLP, information obtained from free-text responses can serve as an effective tool in predicting suicidal behavior and suicidal risk factors. This technology might have a profound impact to highlight red flags in patients with high suicidal behavior. Ultimately, this will help physicians intervene early and provide the required support for an immediately suicidal patient [11]. 
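To make the kind of multi-model comparison described above more concrete (several classifiers trained on tabular risk-factor data and compared on a common metric), the sketch below uses scikit-learn on synthetic data. It is not a reconstruction of any published model or dataset: the 26 synthetic features only stand in for questionnaire-derived risk factors, scikit-learn's GradientBoostingClassifier stands in for extreme gradient boosting, and MLPClassifier stands in for an artificial neural network.

```python
# Minimal sketch of a multi-classifier comparison on synthetic "risk factor" data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic, imbalanced data: 26 features stand in for self-reported risk factors.
X, y = make_classification(n_samples=2000, n_features=26, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
}

for name, model in models.items():
    pipeline = make_pipeline(StandardScaler(), model)   # scale, then classify
    auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} mean AUC = {auc.mean():.3f}")
```

Cross-validated AUC (or accuracy) on held-out folds is the kind of summary statistic the cited studies report when ranking such models against a logistic regression baseline.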
Current AI models are being used to determine the patterns and progression of depression based on depression trajectories that have already been studied [21]. Gong et al. (2019) studied AI models that were useful in detecting depression patterns, progression of symptoms, and changes in symptoms over time. This type of data can aid in the development of a depression-specific trajectory-based methodology. Gunn et al. [27] studied 789 patients from 30 randomly selected family medicine practices, with a prominent impact on their PHQ-9 scores over a 40-week period. Depressive symptoms were measured every three months for a year. By using trajectory modeling, this study was able to suggest future directions for prognostication. Mental Illness: Use of AI in Medication Adherence A novel use of AI could be with medication adherence in patients. One such avenue is clinical trials where medication adherence is usually determined by pill counts, which can frequently be unreliable. This was evaluated in a study by Bain et al. (2017), where patients with schizophrenia who used an AI platform had a higher rate of medication adherence (90%) than those who used modified direct observed therapy (72%) [28]. Secondly, in the clinical setting, especially psychiatry wards and even outpatients, AI could be successfully employed for medication administration and compliance. Predicting Mental Illness Based on Social Media Information A systematic review by Pourmand et al. described youngsters disclosing suicide risk factors on social media including Facebook and Twitter rather than reaching out to a physician [29]. The authors suggest that these social media platforms could serve as a clinical aid in detection and decision-making [29]. Similarly, another cross-sectional study investigated suicidal ideation and behavior according to self-reported data, based on 1000 Twitter users tweeting about their suicidal thoughts [30]. These types of data suggest that social media sites such as Twitter and Facebook, if analyzed appropriately and systematically, could actually play a significant role in determining the outcome metrics of suicidal behavior. An ML classifier has been developed by Burnap et al. that is capable of identifying high-risk individuals on Twitter with an accuracy of 68-73% [31]. With the implementation of this tool, a subsequent 12-month follow-up study determined that the classification system was 85% accurate compared to trained human raters [30]. Individuals with schizophrenia are more likely to tweet about suicide [23]. These can easily be identified by using ML as it could be designed to target suicidal ideation and suicidal behavior phrases. Recently, Du et al. (2018) implemented such an ML-based approach and designed a convolutional neural network (CNN) with the potential to identify suicidal tweets [32]. Hopefully, these techniques will eventually pave the way for mental health experts to provide prompt treatment and intervention to those identified to be at a higher risk for suicide. Currently, the main challenge is the limited ability of AI algorithms to predict suicidal thoughts prior to their development as models only identify such tweets after someone expresses them. Thus, it is difficult to identify suicidal thoughts and the mental states of individuals who do not tweet about it. Limitations This study has several limitations. First, only 13 articles were included in this study which met the inclusion criteria. This limits the generalizability of this review. 
However, the intent of the review was to evaluate the currently available evidence for its quality, and to make potential recommendations for future research directions. Secondly, posts containing depressive-sounding words could indicate a passing state of depression instead of a full-blown depressive episode. In online posts, social media users may sometimes over-express manifestations, or their remarks may sometimes be purely situational. As online posts frequently lack complete contextual information, this information can easily be misinterpreted. Thus, the true clinical utility of these data-rich platforms requires more careful understanding, and studies involving social media should follow quality-based methodological standards [14]. Third, the predictive ability of these studies is restricted to the features that were used as input for the ML models (e.g., clinical data, demographics, biomarkers, etc.). After adopting any AI models, performance metrics need a thorough understanding so that the practicality and relevance of the results can be ascertained. Fourth, the majority of the included studies have been conducted in high-income nations. This demonstrates the readiness of adopting such advanced digital tools might be limited to certain regions and locations. The inclusion of low-income and middle-income nations is necessary to understand the global application of these methods and the role these new models can play on a global scale. Lastly, although several of these studies identified and characterized the risk factors associated with mental illness, further research should also consider factors that improve understanding of the preventative aspects in mental health. This will enable us to gain novel insight into decreasing the incidence of mental disorders, suicidal thoughts, and suicidal attempts in general. Therefore, large datasets will be required to obtain highquality information and to conduct mental health research for promoting mental hygiene and reducing the risk of suicide for a given population. Conclusions Artificial intelligence is becoming a larger part of digital medicine, and it will certainly help with mental health research and clinical practice. To realize the full potential of AI, a diverse community of experts involved in mental health research and care, including data scientists, healthcare professionals, regulatory authorities, and patients, must all collaborate and communicate effectively. The best results for patients will be obtained from the collective efforts of these stakeholders, who will need to work as a team to develop robust algorithms. Combining clinical and social media-driven suicide prediction tools, according to the findings of this review, could strengthen our ability to recognize those who are at high risk of committing suicide. Eventually, this will improve our potential to help save lives. The present review determines the current advances and the future potential of various AI-driven models in predicting mental illness and the risk of suicide. However, further studies are needed to determine the validity, applicability, and moral implications of using these tools in geographically and economically diverse populations. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. 
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
SPI/INTEGRAL observation of the Galactic central radian: contribution of discrete sources and implication for the diffuse emission The INTEGRAL observatory has been performing a deep survey of the Galactic central radian since 2003, with the goal of both extracting a catalog of sources and gaining insight into the Galactic diffuse emission. This paper concentrates on the estimation of the total point-source emission contribution. It is now clear that unresolved point sources contribute to the observed diffuse emission; the increasing sensitivity of instruments with time has led to a steady decrease in estimates of this "diffuse emission". We have analysed the first-year data obtained with the spectrometer and imager SPI on board INTEGRAL. First, a catalog of 63 hard X-ray sources detected, time-averaged, during our 2003 Galactic plane survey, is derived. Second, after extracting the spectra of the sources detected by SPI, their combined contribution is compared to the total (resolved and unresolved) emission from the Galactic ridge. The data analysis is complex: it requires us to split the total emission into several components, as discrete sources and diffuse emission are superimposed in SPI data. The main result is that point source emission dominates in the hard X-ray/soft γ-ray domain, and contributes around 90% of the total emission around 100 keV, while above 250 keV, diffuse electron-positron annihilation, through its three-photon positronium continuum with a positronium fraction ∼ 0.97 and the 511 keV electron-positron line, dominates over the sources. Introduction Observations carried out for more than three decades indicate that the total hard X-ray/soft γ-ray emission from the Galaxy results from the superposition of multiple physical processes whose relative contribution depends on the energy. The spectrum of this emission is reasonably well measured and understood above ∼ 1 MeV from OSSE and COMPTEL (Kinzer et al., 1999; Strong et al., 1994, 2004a). At energies above ∼ 100 MeV, the dominant emission process is the decay of π⁰ mesons produced in the interaction of cosmic-ray nucleons with the interstellar matter (Bertsch et al., 1993). Between 1 and 70 MeV, electron bremsstrahlung and inverse Compton scattering are expected to dominate over discrete source emission (Sacher & Schönfelder, 1984; Skibo et al., 1993; Strong et al., 2004a). In the low-energy domain, the spectrum has been measured with RXTE in the 10-35 keV energy band (Valinia & Marshall, 1998) and with Ginga in the 3-16 keV band (Yamasaki et al., 1997). More recently, ASCA observations (Sugizaki et al., 2001) in the 0.7-10 keV band and Chandra observations (Ebisawa et al., 2001) in the 2-10 keV band show that there is a genuine diffuse soft X-ray component, and this result is confirmed by XMM-Newton measurements (Hands et al., 2004) in the 2-10 keV band. However, in the hard X-ray/soft γ-ray band (50-500 keV), the situation is more complicated as multiple components are believed to contribute to the total emission.
These include discrete sources, the positron annihilation line, three-photon positronium continuum radiation, and a soft γ-ray component corresponding to the diffuse continuum emission induced by cosmic-ray interactions (CR diffuse). Unfortunately, measurements of the Galactic diffuse emission in the hard X-ray/soft γ-ray band and its interpretation are inherently difficult because of the presence of numerous hard X-ray discrete sources in the Galactic plane. Moreover, generally, hard X-ray/soft γ-ray instruments either have large fields of view and no imaging capabilities or have imaging capability but no sensitivity to extended emission. As a result, discrimination between diffuse emission and point sources remains a difficult task. For this reason, simultaneous multiple-instrument observations had been performed, with coordinated observations of the Galactic Center region with OSSE/CGRO (Kurfess et al., 1991) and the imaging instrument SIGMA/GRANAT (Paul et al., 1991). The main results were that, in the hard X-ray/soft γ-ray regime, the spatial distribution of "diffuse emission" is broad and relatively flat in longitude, with an extension in latitude of ∼5.5° width, while a few discrete sources contribute at least 50% of the total emission (Purcell et al., 1996). However, SIGMA had a sensitivity of about 25 mCrab (2σ) for a typical 24-hour observation. As a result, weak sources escaped detection in its survey, and the unresolved emission was still contaminated significantly by discrete sources. The extension of the spectral shape of the Galactic diffuse emission from the hard X-ray to the γ-ray regime, and how much of the emission was due to discrete sources, remained to be precisely determined. Further attempts to derive the diffuse emission characteristics followed (Kinzer et al., 1999, 2001; Boggs et al., 1999; Valinia et al., 2000). Theoretical studies were unable to explain even 50% of the Galactic emission as originating in the interstellar medium (ISM). Two main processes can lead to an interstellar soft γ-ray emission. The first one is inverse Compton scattering of high-energy (GeV) cosmic-ray electrons on the ambient photon field; but these electrons would also produce radio synchrotron emission in the Galactic magnetic field at a level much higher than the one actually observed. The second process is bremsstrahlung of a population of electrons of a few hundred keV, radiating through interactions with interstellar gas. Because these electrons will lose their energy through ionization and Coulomb collisions, the total power required to compensate for these energy losses is of the order of 10⁴¹−10⁴³ erg s⁻¹. This power, comparable to or higher than that of cosmic-ray protons, would affect interstellar-medium ionization equilibrium and give rise to excessive dissociation of interstellar molecules. A possible interstellar process has been proposed by Dogiel et al. (2002). Alternatively, a dozen point sources with intensities around 25 mCrab (the SIGMA 2σ sensitivity) could account for most of the remaining diffuse emission. This demonstrates the need for highly sensitive imaging instruments for a precise determination of the diffuse Galactic emission.
Actually, a new vision of the hard X-ray sky is provided by INTEGRAL with the detection of a significant number of new sources and the discovery of a new class of objects: the highly absorbed sources (Dean et al., 2005) may represent 20% of the total number of sources, but their contribution could not be evaluated by X-ray surveys, with an obvious implication for the diffuse emission. A recent result based on ISGRI/IBIS/INTEGRAL data (hereafter ISGRI) shows that known binary sources account for the main part of the total Galactic emission (from 86% to 74%) from 20 keV to 220 keV (Lebrun et al., 2004; Terrier et al., 2004). However, the limited sensitivity of ISGRI above 200 keV does not allow an extension of this study to higher energies. An early study of the diffuse continuum emission using SPI/INTEGRAL (hereafter SPI) shows that diffuse emission dominates above ∼200 keV (Strong et al., 2003) and that the ratio of diffuse emission to total emission varies from 10% to 100% across the SPI energy domain (Strong et al., 2004b). SPI (see next section) has a moderate angular resolution over a large field, and is thus sensitive to extended sources as well as to point sources. SPI can thus be used in a self-consistent way to measure both the Galactic diffuse and discrete source emission, avoiding the complications due to different regions observed, different instrument responses, different models assumed, and diverse ways in which results are presented (per radian, per Field-Of-View, etc.) when comparisons with other instruments are made. We present the catalog of the sources detected with SPI as well as their global spectral emission. An early SPI source catalog can be found in Bouchet et al. (2004). Moreover, a broadband spectral analysis including point sources and diffuse emission components (annihilation, positronium and CR diffuse continuum) has been performed in a self-consistent way, i.e., using the same instrument for both source detection and diffuse emission estimation. We draw attention to an independent, complementary study on this topic by Strong et al. (2005). Instrument The spectrometer SPI (Vedrenne et al., 2003) is one of the two main instruments onboard ESA's INTEGRAL (INTErnational Gamma-Ray Astrophysics Laboratory) observatory launched from Baikonour, Kazakhstan, on 2002 October 17. It consists of an array of 19 actively cooled high-resolution germanium (Ge) detectors with an area of 508 cm² and a thickness of 7 cm. It is surrounded by a 5-cm thick BGO shield. The detectors cover the 20 keV - 8 MeV energy range with an energy resolution of a few keV. 4.1. Image generation The SPIROS V6 (SPI Iterative Removal of Sources) algorithm has been used to derive source positions, with the MCM background option (mode 5). SPIROS (Skinner & Connell, 2003), part of the Integral Science Data Centre (ISDC) package, produces synthetic and simplified sky images: they consist of a limited number of sky pixels, those which contain excesses above a given threshold. An important limitation is that sources are considered constant during the image reconstruction process. As a consequence, at low energy (< 50 keV), some exposures exhibit an unacceptable χ² fit of the raw data to the reconstructed sky image convolved with the response matrix, along with a biased residual distribution. We found this effect to be due to intensity variations of the most intense sources, which are not taken into account. To suppress the effect of the variability of the strongest sources, a special iterative scheme described in §4.2.2 has been used.
SPIROS works with both known and unknown sources. If an input catalog is given, SPIROS will first build a sky model with the proposed sources and then look for a number of new ones required by the data. One must take care of the influence of the source position errors on the image generation. If the source position introduced in SPIROS is different from the true position (even slightly), the iterative removal of sources algorithm leads to subsequent artefacts in the image. This problem may become important for strong sources or when a large number of sources has inaccurate positions. Thus, whenever possible, the exact source positions should be introduced in SPIROS as a priori knowledge (input catalog) in order to avoid these cumulative errors. But, a contrario, to use an input catalog with too many (non-emitting) sources also leads to an unstable solution. Our analysis philosophy (see §5) takes into account both limitations. Time-dependent source and background flux determination algorithm (time-model-fit) A complete model fitting procedure (called "time-model-fit") based on the likelihood statistics has been developed (see Annexe A-1). The input sky model can include both point sources and diffuse components. The main features of the algorithm are: - A self-determination of the background distribution on the detector plane. - A time-dependent background normalization determination. - A time-dependent flux determination for each point source with its own timescale (meaningless for diffuse components). Background determination The SPI imaging system, although using a combination of a coded mask with a position-sensitive detector, needs, due to the small number of detector pixels, a dithering scheme to increase the number of sky pixels that can be reconstructed. This results in a time modulation of the sky signal. In order not to confuse such modulation with background variations, the background versus time profile of the 19 detectors has to be evaluated. The construction of a time-dependent background model constitutes a key point of SPI data analysis. Analyses of "empty-field" observations have shown that, for any energy band, the relative count rates (uniformity map) of the 19 Ge detectors are constant while the global amplitude (normalization factor) varies with time. We thus have to determine the relative count rates of the 19 Ge detectors (background counts-ratio pattern depending on the energy band), leaving normalization as the only free parameter for the background intensity. This is included inside the model fitting procedure as described in Annexe A-2, with the background amplitude able to vary on the pointing (∼2500 s) timescale. Taking into account source variability The SPI image reconstruction relies on the dithering. As a result, variability of sources has to be explicitly included in the system of equations to be solved. This is included in our model fitting procedure by means of additional equations (Annexe A-3). For each source, the allowed variation time scale can be chosen. In practice, this has to be used carefully, as the number of parameters (unknown fluxes) necessary to describe the sky will increase and, correspondingly, the significances will decrease. There is also a mathematical limitation: since there are 19 independent data per pointing, it means that, whatever the number of pointings, there cannot be more than 19 variable sources on the time scale of a pointing.
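To illustrate the structure of such a fit, the sketch below casts a toy version of the problem as a linear model in which each pointing has one free background normalization multiplying a fixed detector uniformity pattern, plus one constant flux per source propagated through a response term. All arrays are synthetic placeholders and the solver is plain least squares; the actual SPI analysis uses the instrument response and a Poisson likelihood, so this is only a schematic of the bookkeeping, not the published method.

```python
# Schematic of the linear model underlying a "time-model-fit"-style analysis:
#   counts(p, d) = b_p * U_d + sum_s f_s * R[p, d, s]
# with fixed background pattern U_d, one free normalization b_p per pointing,
# and one constant flux f_s per source. Synthetic data, least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
n_pointings, n_detectors, n_sources = 50, 19, 3

U = rng.uniform(0.8, 1.2, n_detectors)                             # uniformity map
R = rng.uniform(0.0, 1.0, (n_pointings, n_detectors, n_sources))   # stand-in response
b_true = rng.uniform(90.0, 110.0, n_pointings)                     # background norms
f_true = np.array([5.0, 2.0, 0.5])                                 # source fluxes

counts = b_true[:, None] * U[None, :] + R @ f_true
counts = rng.poisson(counts).astype(float)                         # counting noise

# Design matrix: one column per background normalization, one per source flux.
A = np.zeros((n_pointings * n_detectors, n_pointings + n_sources))
for p in range(n_pointings):
    sl = slice(p * n_detectors, (p + 1) * n_detectors)
    A[sl, p] = U                  # background term active only for pointing p
    A[sl, n_pointings:] = R[p]    # source terms for pointing p

x, *_ = np.linalg.lstsq(A, counts.ravel(), rcond=None)
print("recovered source fluxes:", x[n_pointings:])
```

The count of free parameters (one per pointing for the background plus one per source, and more if sources are allowed to vary per pointing) is what drives the 19-sources-per-pointing limitation discussed above.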
Most of the sources, however, are weak enough for the influence of their variability on the image reconstruction to be neglected, and they can thus be considered constant. Even flaring sources are too weak and/or too short-lived (e.g. SGR 1806-20) to have any effect over a long period. In this work, the "time-model-fit" tool has been used for a dedicated variability treatment applied only to the most intense sources, namely 4U 1700-377, Sco X-1 and OAO 1657-415, below 50 keV. We build for them light curves on a one-pointing timescale. The corresponding contributions in count space are then subtracted from the original data to derive a "corrected" data set to use as SPIROS input. This cleaning is redone from the original data set at each iteration, as the strong-source contributions can be affected by the introduction of new sources. Applying this scheme to the whole data set in the 25-50 keV energy band improves the global reduced χ² from 3.24 (standard procedure) to 1.33 (Table 3).

Catalogue generation

Images are built using all our data in the following energy bands: 300-600 keV, 150-300 keV, 50-150 keV and 25-50 keV. In order to minimise bias (see §4.1), the first step of the analysis was performed without any a priori information about known sources. The a priori knowledge is introduced progressively. For each image, SPIROS is parametrized to search for up to 30 new excesses above 2σ in addition to the input catalog built from the previous iteration. This process (i.e. fixing the positions already found) suppresses the instabilities of such an IROS algorithm. Moreover, such a procedure leads to a minimal sky model able to represent the data. The catalog generation process is the following (a schematic sketch of the loop is given below):
• The first image is obtained from SPIROS with an empty input catalog and gives a list of source candidates.
• A catalog is built with: (i) all identified sources above 5σ, with their celestial positions; (ii) the unidentified excesses above 5σ, with the positions found. The source identification process is described in §5.1. This current catalog is completely regenerated at each iteration, as significances can evolve.
• We run SPIROS again with this catalog as input, obtain a new list of sources/excesses, and restart from step 2.
• This iterative analysis continues until no other potential excess ≥ 2σ can be found. A flow chart illustrating this process is shown in Figure 1.
The final position catalog for a given energy band is built from the last iteration, where sources and excesses above a significance threshold fixed at 4σ are accepted. The use of a dedicated catalog for each energy band allows us to restrict the number of free parameters to the minimum needed, thus decreasing the flux detection limit at high energy. Figures 2 and 3 present the images obtained by SPIROS in the 25-50 keV and 50-150 keV energy bands, using the corresponding catalogs as input.

Source identification

X-ray source catalogs are used to identify the sources. Primarily, the ISGRI catalogs (Bird et al., 2004; Revnivtsev et al., 2004) are used and, if nothing is found, the ISDC catalog is used (Ebisawa et al., 2003). Each excess is considered to be associated with the nearest known emitting X/γ-ray source whose distance is less than 1°. This rather high value, 3 times the theoretical value for a 5σ detection (Dubath et al., 2005), is used because the source localisation precision is degraded when more than one source is present, especially in crowded regions.
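The iterative catalogue-generation loop can be sketched as follows. The functions run_spiros and identify are placeholders standing in for the actual SPIROS call and for the cross-match against X-ray catalogues described in §5.1; only the loop structure and the thresholds follow the steps listed above.

```python
def build_catalog(data, run_spiros, identify, max_new=30, keep_sigma=5.0, search_sigma=2.0):
    """Iterative catalogue construction: start from an empty catalogue, keep
    significant excesses, fix their positions, and iterate until stable.

    run_spiros(data, input_catalog, n_new, threshold) -> list of excess objects
    identify(excess) -> source name (or a position label for unidentified excesses)
    Both are placeholders for the real SPIROS run and the X-ray cross-match.
    """
    catalog = []                                     # first image: empty input catalogue
    while True:
        excesses = run_spiros(data, input_catalog=catalog,
                              n_new=max_new, threshold=search_sigma)
        # the catalogue is regenerated from scratch at each iteration,
        # because significances can evolve between iterations
        new_catalog = sorted(identify(e) for e in excesses
                             if e.significance >= keep_sigma)
        if new_catalog == catalog:                   # no further excess worth keeping
            return new_catalog
        catalog = new_catalog
```

In the real analysis the loop stops when no excess above 2σ remains, and the final catalogue for each band retains only entries above the adopted significance threshold.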
Note, however, that this 1° association radius is much less than the geometrical resolving power (2.6°), as the dithering pattern effectively improves the imaging system. Thus, two known sources closer than 1° can still be retrieved, depending on the statistics (Dubath et al., 2005). The Galactic Center region is an exception, as too many sources are present in a small area. We therefore represent the 1° region around the Galactic Center by only one source, namely 1E 1740.7-2942. A more sophisticated scheme is used for the 25-50 keV band: the dataset is divided in two parts, one constructed with odd pointing numbers, the other with even pointing numbers; two sub-images are then built. A non-identified excess is considered a source candidate only if it is detected in both images (cross-validation method). This method drastically reduces the number of false source detections introduced by systematics, mainly due to variable sources. The number of excesses as a function of the threshold level is shown in Table 2. Above 150 keV and below 5σ, the number of unidentified excesses increases rapidly. From this, empirical thresholds of 5σ and 4σ, respectively below and above 150 keV, have been fixed.

Final flux extraction

Once the position catalogs are completed, we build a complete sky model including the detected point sources plus the diffuse emission morphologies (described in §6.1). Fluxes and significances for each of these sky model components are then computed in a set of energy bands, using "time-model-fit" (§4.2). The fluxes of point sources and diffuse emission are assumed constant (except in the 20-50 keV band, where 4U 1700-377, Sco X-1 and OAO 1657-415 vary on the timescale of one pointing). The background normalization is adjusted on a timescale of one pointing. Excesses above 4.5σ are kept to form the final catalogs. Their fluxes and significances are then computed again, for consistency; thus the final catalogs contain some sources below 4.5σ that were previously detected above this threshold. This process is described in Figure 4. The final χ² obtained with this method between the proposed sky model and the data is shown in Table 3.

The catalog

The use of a dedicated input catalog for each energy band allows us to optimize the signal-to-noise ratio per energy band. These catalogs are far from exhaustive: in the computed 1-year-averaged emission, weak or short-transient sources can be completely washed out. Moreover, as we looked for a minimal sky model, and given the modest SPI angular resolution, only one source is used to represent the complex regions (1E 1740.7-2942, for instance). This work, based on the whole dataset, has been complemented by applying the same procedure to 3 subsets, sorting the data according to the average Galactic longitude of each observation (Table 1): positive (l > 5°), negative (l < −5°) and central (−5° ≤ l ≤ 5°) longitudes, which have roughly equal exposures. These new datasets allow us to add four sources to the 25-50 keV band catalog. These sources are labelled with an asterisk. As the final result, in the 25-50 keV domain the catalog contains 63 excesses above 4σ; 59 have been identified with known hard X-ray objects in the IBIS catalog, and 4 are tentatively associated with X-ray sources (labelled with **). Among the tentatively identified sources, the source at (l,b) = (1.94°, −2.02°) has a quite high significance of 11.2σ, while its flux is only 6.3 mCrab.
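The nearest-neighbour association step can be sketched as follows. The reference lists, coordinates and function names below are assumptions for illustration; only the 1° association radius comes from the text.

```python
import numpy as np

def angular_sep_deg(l1, b1, l2, b2):
    """Great-circle separation between Galactic coordinates (degrees)."""
    l1, b1, l2, b2 = map(np.radians, (l1, b1, l2, b2))
    cos_d = np.sin(b1) * np.sin(b2) + np.cos(b1) * np.cos(b2) * np.cos(l1 - l2)
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

def identify_excess(l, b, ref_l, ref_b, ref_names, max_sep=1.0):
    """Associate an excess with the nearest reference source within max_sep degrees."""
    sep = angular_sep_deg(l, b, np.asarray(ref_l), np.asarray(ref_b))
    i = int(np.argmin(sep))
    return (ref_names[i], float(sep[i])) if sep[i] < max_sep else (None, float(sep[i]))

# example with two hypothetical reference sources
name, sep = identify_excess(1.94, -2.02,
                            ref_l=[359.1, 2.3], ref_b=[-0.1, -1.5],
                            ref_names=["SRC A", "SRC B"])
```

For the odd/even cross-validation, the same identification is run on the two half-datasets, and an unidentified excess is promoted to source candidate only when it appears in both.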
The excess at (l,b) = (1.94°, −2.02°) could represent several weak (possibly extended) sources below the ISGRI detection threshold that are integrated within the SPI angular resolution. It could also result from variable sources not taken into account, or from error accumulation due, for example, to a wrong centre of gravity of the many sources that compose our so-called '1E 1740.7-2942' source, actually fixed at the 1E 1740.7-2942 position. Intensive simulations would be needed to understand the behaviour of the instrument in such a crowded region; this is beyond the scope of this paper.

Spectral components of the Galactic ridge emission

We now concentrate on the relative contribution of point sources to the Galactic ridge emission as a function of the energy range. In the soft γ-ray regime, multiple components contribute to the total emission. These include discrete sources, the positron annihilation line and the three-photon positronium annihilation continuum, and a diffuse continuum resulting probably from cosmic-ray interactions with the interstellar medium. SPI is sensitive to both discrete sources and diffuse emission. As a consequence, the discrete sources should always be extracted simultaneously with the diffuse emission. However, it is difficult to separate extended and discrete source emission if too many free parameters are to be determined: the error bars then become very large, and it may be impossible to derive meaningful information from the analysis. Therefore, as much a priori information as possible is included in the analysis:
- precise external source locations are used in the catalogs;
- spatial morphologies derived from previous works are used and fixed a priori for each diffuse component.

6.1. Sky model and components of soft γ-ray Galactic emission

Point source model versus energy

In order to estimate the total point source emission, it is obvious that all the discrete sources (also weak, transient and possibly undetected) should be included in the analysis. On the other hand, increasing the number of sources decreases the significance of the measurement. Thus only the significant sources are introduced in the analysis, their number depending on the considered energy band. Three spectral regions are considered:
• In the 20-150 keV energy range, the 63 sources of the 25-50 keV catalog have been introduced.
• In the 150-300 keV energy range, the 20 sources of our 50-150 keV catalog are used.
• Above 300 keV, the 4 sources detected above 150 keV have been considered.

CR diffuse continuum

The diffuse emission morphology in the 50-400 keV range, estimated from OSSE/GRO measurements (Purcell et al., 1996; Kinzer et al., 1999) and confirmed with subsequent simultaneous RXTE and OSSE observations (Valinia et al., 2000), is broadly distributed in longitude, with a 5°-6° FWHM in latitude and a ∼±35° extent in longitude. However, since most of this emission, at least in the low-energy range, is due to point sources (Lebrun et al., 2004), it cannot be used to represent the CR diffuse component. A better model may be the CO map (Dame et al., 2001), since it is a tracer of the interstellar matter. In the absence of a better model and for simplicity, the CR diffuse continuum spatial morphology is modelled by the CO map. Nevertheless, this point constitutes a weakness, as the morphology is ultimately poorly determined.

Positron annihilation line and three-photon positronium continuum

The annihilation line detected by SPI has been used to study the 511 keV spatial morphology (Knödlseder et al., 2003).
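The fixed spatial templates described here (a CO map for the CR continuum, a Galactic-Centre Gaussian for the annihilation emission, plus point sources) enter the fit only through linear intensity coefficients. The sketch below builds such a model map; the grid, the normalisation choices and the 8° FWHM value for the bulge (adopted in the next paragraph) are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def gaussian_bulge(l_grid, b_grid, fwhm_deg=8.0):
    """Azimuthally symmetric Gaussian centred on the Galactic Centre, normalised to unit sum."""
    sigma = fwhm_deg / 2.3548                      # FWHM -> sigma
    g = np.exp(-0.5 * (l_grid ** 2 + b_grid ** 2) / sigma ** 2)
    return g / g.sum()

def sky_model(templates, intensities):
    """Linear combination of fixed, unit-normalised spatial templates."""
    templates = np.asarray(templates)              # (n_comp, n_b, n_l)
    return np.tensordot(np.asarray(intensities), templates, axes=1)

# illustrative 0.5-degree grid over the inner Galaxy
l = np.arange(-50, 50.5, 0.5)
b = np.arange(-25, 25.5, 0.5)
L, B = np.meshgrid(l, b)
bulge = gaussian_bulge(L, B)
```

Because only the intensities are free, adding a template costs one parameter per energy bin, which is what keeps the simultaneous source-plus-diffuse fit tractable.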
The 511 keV component is equally well described by models that represent the stellar bulge, by halo populations, or by an azimuthally symmetric Gaussian with a FWHM of ∼8° (Knödlseder et al., 2005). We thus modelled it with this last, simpler hypothesis. A recent study of the positronium annihilation emission with SPI shows that its all-sky distribution is consistent with that of the 511 keV electron-positron annihilation line (Weidenspointner et al., 2005). In the absence of any other information, the three-photon positronium continuum is assumed to have the same spatial distribution as the annihilation line.

Extraction of sky model component fluxes

Component intensities of the sky model are extracted channel by channel with the time-model-fit algorithm described in §4.2. Count rates are then converted into photon spectra for each component. The total point-source contribution to the Galactic emission is obtained by summing all the point-source spectra in the central radian, excluding those of Sco X-1 and Cen A, which are located at high Galactic latitude (b ≥ 20°). Figure 5 shows the composite spectrum of the three major components. The CR diffuse emission spectrum is more complex. Below 50 keV it clearly exhibits a soft component. In the 300-500 keV domain, there is clearly some cross-talk between the positron annihilation and the CR diffuse continuum components. This can be easily understood, as we try to fit simultaneously two sky model distributions that have a common part in the inner Galaxy, where the exposure time is the highest. The CR diffuse continuum fit above 50 keV, excluding the 300-500 keV points (Figure 5), gives a photon power-law index of ∼ −1.8 (+0.5, −0.4) and a 100 keV flux of (6 ± 2) × 10⁻⁵ photons cm⁻² s⁻¹ keV⁻¹, for a reduced χ² of 0.3 (8 dof). In addition, a fit using only data whose pointing direction is outside the central radian, and thus supposed to contain insignificant electron-positron annihilation emission, gives a power-law index of 1.7 (+0.5, −0.4) above 50 keV. These two indices are fully compatible with the hard component expected from a 'cosmic-ray interaction' model (Skibo et al., 1993), which can be approximated by a power law with an index of ∼ −1.65 over the 50 keV-10 MeV range. As we poorly constrain this parameter, we decided to fix the power-law index to 1.65. Finally, in the 20-1800 keV energy range, and excluding the 300-500 keV band, the CR diffuse continuum spectrum is modelled by two components, giving a reduced χ² of 0.5 (8 dof). The first one is a power law with a photon index of 1.7 (+0.5, −0.4) and a cutoff energy of 20 keV, with a 50 keV flux of (3.1 ± 0.7) × 10⁻⁴ photons cm⁻² s⁻¹ keV⁻¹. The second one is a power law with a photon index fixed to 1.65 and a 100 keV flux of (4.8 ± 2.2) × 10⁻⁵ photons cm⁻² s⁻¹ keV⁻¹, which represents the CR diffuse emission above 50 keV and up to several MeV (including the OSSE and COMPTEL results). From this study, the contribution of point sources to the total Galactic emission can be derived: 85% in the 25-50 keV energy range, 90% in the 50-110 keV range and 85% in the 110-140 keV range. This fraction then decreases as the positronium emission begins to be significant. In addition, the 511 keV flux extracted in a 10 keV wide channel is ∼0.9 × 10⁻³ photons cm⁻² s⁻¹. This value is compatible with that obtained in a completely different way by Knödlseder et al. (2005). From these data, we also extract a positronium fraction (defined as in Brown & Leventhal, 1987) of ∼0.9 as a rough estimate.
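A weighted power-law fit of this kind can be reproduced with a few lines of scipy; the flux points below are synthetic stand-ins generated around the quoted best-fit values, not the measured SPI data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(e_kev, flux_100, index):
    """Photon flux dN/dE = flux_100 * (E / 100 keV)**(-index)."""
    return flux_100 * (e_kev / 100.0) ** (-index)

rng = np.random.default_rng(0)
e = np.array([60., 80., 110., 150., 200., 280.])                 # keV, illustrative bins
f = power_law(e, 6e-5, 1.8) * (1 + 0.1 * rng.standard_normal(e.size))
df = 0.2 * f                                                     # assumed 20% errors

popt, pcov = curve_fit(power_law, e, f, p0=[6e-5, 1.7], sigma=df, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"F(100 keV) = {popt[0]:.2e} +/- {perr[0]:.1e} ph cm-2 s-1 keV-1, "
      f"index = {popt[1]:.2f} +/- {perr[1]:.2f}")
```

Excluding the 300-500 keV bins from the fit, as done in the text, simply amounts to masking those entries out of e, f and df before calling curve_fit.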
In fact, with this approach the CR diffuse component in the 300-500 keV energy range is clearly overestimated, leading to an underestimation of the positronium flux. A more complex and adequate method has therefore been used for this spectral region (see next section).

Complete photon model fitting (above 300 keV)

The photon spectra extraction method applied above (§6.2) has the advantage of being independent of any spectral model and relatively easy to implement. Its "simplicity" relies on approximations that are well suited only to energies below ∼300 keV. Indeed, the SPI response is averaged over all the exposures for a given component, and the energy redistribution matrix is not properly taken into account. The latter can lead to significant errors, particularly in the case of photon spectra with a positive slope, such as the positronium continuum (i.e. above 300 keV). For these reasons, a complete model-fitting software has been developed, in which the emission photon spectral models of each sky component are convolved with the full SPI response (Sturner et al., 2003) for each pointing. This algorithm is detailed in Appendix B. On the other hand, this kind of algorithm allows one to fix the shape of the spectra as a function of energy, via an analytic emission model. As a consequence, the problem of "cross-talk" previously encountered is drastically reduced. Each photon spectral component follows an appropriate emission model whose parameters can be adjusted simultaneously to the whole set of data, along with the background intensity. Given the size of the dataset, such an algorithm can lead to prohibitive computing time and memory space if the number of free parameters is too high. Fortunately, the benefit of this method is most significant at high energy, where the number of sources is low. Moreover, we limit the number of free parameters by fixing some of them. First, as the energy band used is limited to 300 keV-1 MeV, the slope of the power law that models the CR diffuse emission is fixed to 1.65, while the flux is left free. Second, the point-source emission consists of 4 discrete sources (Table 4) whose spectra are represented by a power law with an exponential cutoff. Because the source statistics are low, the indices and normalizations of these power laws have been determined beforehand, below 150 keV, by the first method (§6.2), while the cutoff energy is fitted. Results may slightly depend on the source spectral shape assumed above 300 keV. The annihilation spectrum (positronium continuum plus line) is then determined by the complete model fitting. A line flux of (0.93 ± 0.15) × 10⁻³ photons cm⁻² s⁻¹ and a positronium fraction of 0.97 (+0.09, −0.07) are found. The 100 keV flux of the diffuse emission is (5 ± 3) × 10⁻⁵ photons cm⁻² s⁻¹ keV⁻¹, in perfect agreement with the previous determinations. The reduced χ² of the fit is 0.99 (630337 dof), and the reduced χ² for each energy band is distributed between 0.95 and 1.2. The result is illustrated in Figure 6.

Discussion and conclusions

Despite the SPI instrument's moderate angular resolution, and benefiting from the large number of pointings, a first catalog of hard X-ray sources detected by SPI/INTEGRAL in the Galactic plane between −50° ≤ l ≤ 50°, −25° ≤ b ≤ 25° has been derived. This catalog contains 63 sources and gives their fluxes up to 300 keV. It has been built by means of a model-fitting procedure applied simultaneously to all pointings of the whole dataset of 5.7 × 10⁶ seconds, and represents the minimal averaged sky model needed to describe the data.
As a consequence of this time-averaged description, transient sources could escape detection. Moreover, due to the modest SPI angular resolution, our sky model in crowded regions is simplified, with one point source representing the combined emission of all sources near 1E 1740.7-2942. On the other hand, this catalog is perfectly suited to estimate the point-source contribution to the Galactic ridge emission. The diffuse emission components (continuum, 511 keV line and positronium) have been studied by fixing their spatial distributions, as a first-order model, and taking our catalog into account. The main conclusions are:
• Point sources contribute at least 85-90% of the total Galactic emission in the 25 to 140 keV energy range. This independently confirms the ISGRI result that point sources largely dominate the total Galactic emission (Lebrun et al., 2004), at least up to 250 keV. The interstellar emission is around 15% of the total emission up to 150 keV. The low-energy part (< 250 keV) of the total spectrum is source dominated, while the high-energy part (> 250 keV) is diffuse dominated.
• The CR diffuse emission (50-250 keV) can be fitted with a power law joining smoothly the high-energy continuum as measured by OSSE (Kinzer et al., 1999) and COMPTEL (Strong et al., 1994) above 1 MeV.
• No additional spectral component is needed above the power-law spectrum to model the diffuse continuum above 50 keV. The soft diffuse component reported by Kinzer et al. (1999) in this region is mostly due to point sources not taken into account by that instrument.
• Below 50 keV, a real diffuse soft component can still exist, but it may also be related to very steep/weak X-ray sources not included in the present analysis.
• There is no evidence of 511 keV point-source emission, in agreement with the conclusion drawn by Knödlseder et al. (2005).
These results constitute a first step, as they are based on SPI data taken during the first year of operation, and will obviously be refined in the future. An alternative approach is proposed by Strong et al. (2005). The catalogs are available electronically as ApJ supplementary electronic materials.

Strong, A. W., et al. 2004b, Proc. 5th INTEGRAL Workshop, ESA SP-552, p. 507; astro-ph/0405023
Strong, A. W., et al. 2005, A&A, in press; astro-ph/0509290
Sturner, S. J., Schrader, C. R., Weidenspointner, G., et al. 2003, A&A, 411, L81
Sugizaki, M., Mitsuda, K., Kaneda, H., et al. 2001, ApJS, 134, 77
Terrier, R., et al. 2004

Note. - Before subtracting diffuse and annihilation radiation. The numbers of excesses above 2σ are respectively 105, 42, 17 and 16 for the 25-50 keV, 50-150 keV, 150-300 keV and 300-600 keV bands. The 25-50 keV result is the combination of 4 images (l > 5°, l < −5°, −5° ≤ l ≤ 5° and the sum, see §5.3). In parentheses is indicated the number of excesses associated with ISGRI-detected sources (Revnivtsev et al., 2004; Bird et al., 2004).
Note. - The catalog contains only the sources detected in the 50-150 keV band. The counts for the diffuse continuum and the sources were extracted simultaneously to obtain these data. (**) Source tentatively identified.
Note. - The catalog contains only the sources detected above 5σ in the 150-300 keV band. The counts for the diffuse continuum, three-photon positronium continuum, 511 keV line and sources were extracted simultaneously to obtain these data. There are 15 energy bins logarithmically spaced, plus a 10 keV wide bin centered at 511 keV.
Figure caption: Spectra below 150 keV are obtained using the 63-source catalog (Table 4), the 150-300 keV spectra using the 20-source catalog (Table 5), and those above 300 keV using the 4-source catalog (Table 6). The CR diffuse continuum spectrum (filled squares) and its model (dashed line), consisting of two components, a power law of index 1.65 (solid line) and a power law plus exponential cutoff (dotted line), are shown. The summed point-source emission (diamonds) and its fit (long-dashed line), and the approximation of the annihilation radiation spectrum (stars) and its fit (dash-dot-dot line), are also represented.

The relation between the expected data, the instrument aperture response, the background and the $N_s$ sources which illuminate the detectors is
$$E(d,p,e) = \sum_{j=1}^{N_s} M(d,p,e,\theta_j,\phi_j)\, t(d,p)\, I(\theta_j,\phi_j,e) + B(d,p,e),$$
where $E(d,p,e)$ and $B(d,p,e)$ are respectively the expected data and the background in energy bin e, pointing p and detector d. $I(\theta_j,\phi_j,e)$ is the intensity of source number j, or sky image pixel j, in the direction $(\theta_j,\phi_j)$ in energy bin e; $M(d,p,e,\theta_j,\phi_j)$ is the instrument response of the data element (d,p,e) to the sky image pixel j in the direction $(\theta_j,\phi_j)$; $t(d,p)$ is the exposure duration for detector d and pointing p. The background term is rewritten in the form
$$B(d,p,e) = t(d,p)\, U(d,e),$$
where $U(d,e)$ is the uniformity map that fixes the relative count ratios between the 19 Ge detectors (§4.1). If the background amplitude varies on the pointing timescale, an amplitude coefficient $A(p)$ is introduced to describe the variable background $B_v$,
$$B_v(d,p,e) = A(p)\, t(d,p)\, U(d,e).$$
For a given energy bin e, and to simplify the notation, the sky image pixel direction $(\theta_j,\phi_j)$ is represented by the sky pixel number j. The equation then reduces to
$$E(d,p) = \sum_{j} M(d,p,j)\, t(d,p)\, I(j) + A(p)\, t(d,p)\, U(d).$$
For the maximum-likelihood algorithm, the Cash statistic (Cash, 1979) is used and the following log-likelihood is maximized with respect to the parameters I and A (which are constrained to be positive):
$$L(I_1,\dots,I_N;\, A_1,\dots,A_{N_p}) = \sum_{d,p}\left[\, N(d,p)\,\log E(d,p) - E(d,p)\,\right],$$
where $N(d,p)$ are the measured data (for energy bin e) and $N_p$ is the number of pointings.

A.2. Background uniformity map estimation

The background uniformity estimation is obtained in two steps. First, a uniformity map U(d) is assumed and the likelihood is optimized with respect to I and A to give the best estimates of I and A. Second, knowing the current best estimates of the source intensities and of the background amplitudes, the background map is re-estimated by minimizing, for each detector, the residuals between the measured counts and the model expectation, which yields the updated uniformity map. This process is repeated several times, until the equivalent χ² between the data and the model expectation stops decreasing.

A.3. Variable source

The expected counts E from a given source k are
$$E_k(d,p) = M(d,p,k)\, t(d,p)\, I(k).$$
For a variable source this equation is expanded into several equations, corresponding to the intervals over which the source is considered constant:
$$E_k(d,p) = \begin{cases} M(d,p,k)\, t(d,p)\, I_1(k) & p = 1,\dots,p_1 \\ M(d,p,k)\, t(d,p)\, I_2(k) & p = p_1+1,\dots,p_2 \\ \qquad\vdots & \\ M(d,p,k)\, t(d,p)\, I_L(k) & p = p_{L-1}+1,\dots,P \end{cases}$$
Finally, instead of a single intensity for source k, there are L intensities $I_1(k),\dots,I_L(k)$ to be determined.
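The Cash-statistic optimization above can be sketched numerically as follows: maximizing L is equivalent to minimizing C = 2 Σ (E − N ln E). The parameter packing and the use of scipy's L-BFGS-B minimizer with positivity bounds are illustrative choices, not the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def cash(params, n_counts, response, exposure, uniformity, n_src):
    """C = 2*sum(E - N*ln E) for source intensities I and per-pointing amplitudes A."""
    n_det, n_ptg = n_counts.shape
    I = params[:n_src]                          # source intensities, >= 0
    A = params[n_src:]                          # background amplitude per pointing, >= 0
    sky = np.einsum("dpj,j->dp", response, I) * exposure
    bkg = exposure * uniformity[:, None] * A[None, :]
    E = np.clip(sky + bkg, 1e-12, None)
    return 2.0 * np.sum(E - n_counts * np.log(E))

# x0 = np.concatenate([np.ones(n_src), np.ones(n_ptg)])
# res = minimize(cash, x0,
#                args=(n_counts, response, exposure, uniformity, n_src),
#                method="L-BFGS-B", bounds=[(0.0, None)] * x0.size)
```

Each pointing contributes 19 detector rates, which is why no more than 19 source intensities can be allowed to vary on the single-pointing timescale.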
B. Photon model-fitting

The function to be minimized is
$$f(\theta) = \sum_{d=1}^{N_D}\sum_{p=1}^{N_P}\sum_{e=1}^{N_E} \frac{\left[N(d,p,e) - E(d,p,e)\right]^2}{\sigma^2(d,p,e)},$$
where $N_E$ is the number of detector energy bins (count space), $N_P$ the number of pointings and $N_D$ the number of detectors; $N(d,p,e)$ and $E(d,p,e)$ are respectively the measured and expected counts for detector d, pointing p and detector energy bin e. The following equation relates the response matrix, the source photon spectra, the background and the expected counts:
$$E(d,p,e) = \sum_{j=1}^{N_s}\sum_{e_{ph}=1}^{N_{E_{ph}}} R(d,p,j,e,e_{ph})\, F(j,e_{ph}) + B(d,p,e),$$
where $R(d,p,j,e,e_{ph})$ is the response (aperture response and detector redistribution matrix) of the data element (d,p,e) to one incident photon from the sky direction $(\theta_j,\phi_j)$ in the photon energy bin $e_{ph}$; $N_s$ and $N_{E_{ph}}$ are respectively the number of sources and the number of energy bins in photon space; and $F(j,e_{ph})$ is the incident photon spectrum in energy bin $e_{ph}$ (photon space) for source j. For each of the $N_s$ sources, the incident photon spectrum is described by a few parameters gathered in the vector $\theta_j$:
$$F_{j,e_{ph}}(\theta_j) \equiv F(j,e_{ph}), \qquad j = 1,\dots,N_s;\; e_{ph} = 1,\dots,N_{E_{ph}}.$$
The background depends on the assumed photon spectra (§B.1) and thus on the set of parameters θ:
$$B_{d,p,e}(\theta) \equiv B(d,p,e), \qquad \theta \equiv (\theta_1, \theta_2, \dots, \theta_{N_s}).$$
The set of parameters θ is adjusted to minimize the function f(θ) (≡ χ²) through a non-linear minimization method.

B.1. The background

Given the parameters which describe the photon spectra, and thus the photon spectrum of each source $F(j,e_{ph})$, the background is determined in each energy bin e. Given the best current background pattern U(d), the background is modelled as
$$B(d,p) = t(d,p)\, U(d)\, A(p).$$
The current best background amplitude A(p) is determined exposure by exposure by minimizing, with respect to A(p), the χ² between the measured counts and the model for that pointing.

B.2. Background uniformity pattern

Having the best estimate of the source parameters, the background uniformity pattern U(d) can be optimized as in §A.2.
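The minimization described in this appendix (convolving an analytic photon model with the full response and minimizing f(θ) in count space) can be sketched as follows. The array layout, the photon_model callable and the use of scipy's Nelder-Mead minimizer are assumptions for illustration, not the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def forward_fold(response, photon_model, params, e_ph):
    """Predicted counts per (detector, pointing, count-space bin).

    response     : (n_det, n_ptg, n_bin, n_eph) aperture + redistribution response
    photon_model : callable returning photons/cm^2/s/keV on the photon-energy grid e_ph
    """
    spec = photon_model(e_ph, *params)                       # (n_eph,)
    return np.einsum("dpbe,e->dpb", response, spec)

def chi2(params, counts, errors, response, photon_model, e_ph, background):
    model = forward_fold(response, photon_model, params, e_ph) + background
    return np.sum(((counts - model) / errors) ** 2)

# res = minimize(chi2, x0=initial_params,
#                args=(counts, errors, response, photon_model, e_ph, background),
#                method="Nelder-Mead")
```

Because the spectral shape is imposed analytically, the only quantities adjusted per component are a handful of parameters (normalisation, cutoff energy, positronium fraction), which is what keeps the cross-talk between overlapping spatial templates under control.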
Financial Performance Adequacy of Pension Fund Managers in Nigeria

Purpose: Traditionally, the Nigerian pension fund system was based on a defined benefit scheme for the public and private sectors and coincided with serious challenges in the payment of retirement benefits to retirees. These challenges led to the introduction of a defined contribution scheme in terms of the Pension Reforms Act. Since the management of pension fund assets is the sole responsibility of pension fund managers, there is a need to investigate the adequacy of pension fund managers' financial performance since the change in pension fund regime. The pertinent research question in the study was: To what extent do pension cost incurred, revenue, the inflation rate and total contribution affect benefits paid and cash inflow? The extent to which federal government bonds, securities, total contribution and the inflation rate affect investment income was also examined.

Methodology: Autoregressive distributed lag (ARDL) cointegration and multiple regression were used in the analysis of the data.

Findings: The results of the study revealed that in both the short-term and long-term analysis, other costs incurred by pension fund management lead to lower benefits paid to retirees. Furthermore, higher administrative costs lead to higher benefits paid, given that increases in administrative costs promote higher inflow contributions, and investing in federal government bonds and, in particular, Treasury bills promotes higher investment income. Thus, securities increase investment income, and the higher the inflation rate, the higher the investment income.

Originality/Value: The policy implication signifies a need to reduce the pension costs incurred on pension fund management.

Introduction

The global population is aging and fiscal stress is placing pension fund systems under increasing pressure, raising concern in developed countries such as the United Kingdom (UK) and the United States of America (USA) (Hsin & Mitchell, 1995). Developed countries have recognised that the situation necessitates the introduction of a measure of some kind to reduce high pension fund operating costs without reducing the quality of retirement services. The main interest of pensioners and beneficiaries is whether the management of their contributions to future retirement savings is producing optimal returns, thereby adding value to their investment (Alda, 2018). Fortunately for developed countries, it was found that their pension fund industries contribute to economic growth and play a significant societal role, which made global pension fund investments grow significantly over the past two decades (Alda, 2018). Furthermore, developed countries' pension funds are investment products that serve the purpose of enabling their members to save for their retirement. Global pension fund designs can be assessed based on their target returns, asset allocation, cash flow, fund manager selection and the cost involved (Wanger, 2021). In the Organisation for Economic Cooperation and Development (OECD) countries, pension fund assets increased to EUR 15.6 trillion in 2011 (Ballester, 2014). This growth suggests that the pension fund industry is a major investor in the economies of OECD countries. Moreover, Erzurumlu and Ucardag (2021) claim that the total asset size of OECD pension funds increased by 10 per cent annually between the financial crisis of 2008 and the end of 2019. This growth amounted to US$32.2 trillion.
Several studies have been conducted on pension fund performance in developed countries. In the Euro zone, Otero-Gonzalez et al. (2021) focused on active management, value investing and pension fund performance. Li and Cowton (2022) examined a defined benefit pension de-risking strategy as a determinant of pension buy-ins in the UK. Clark (2022) revealed the problematic nature of pension regulation in performance governance, at the expense of innovation, in the UK. Alda et al. (2017) examined the performance of individual pension fund managers in the UK. Ballester (2014) analysed investor responses to various measures of pension plan performance in Spain. Studies on pension fund performance in emerging economies such as Turkey have examined private pension fund flow, performance and cost relationships under frequent regulatory changes (Erzurumlu & Ucardag, 2021). Chu (2008) reviewed the performance persistence of pension fund managers in Hong Kong. Mittelstaedt and Olsen (2003) examined whether the risk-adjusted returns of the pension fund system in Chile were consistent with the Chilean economy. Several studies have been conducted regarding pension funds in Nigeria. Ifenna and Arinze (2020), for example, investigated the relationship between pension fund administrators and the financial transparency of retirement savings. Prior to Ifenna and Arinze's study (2020), pension administration, in both the public service and the private sector, experienced several challenges. The public service operated a defined benefit scheme (DBS), and the payment of retirement benefits was budgeted for annually. The complications of a DBS led to the pension fund reforms of 2004. The Pension Reforms Act (PRA) of 2004, as amended in 2014, is the most recent legislation of the Federal Government of Nigeria that is aimed at reducing the difficulties encountered by retirees in Nigeria (National Pension Commission [NPC], 2019).

The PRA covers pension fund members from both the public and private sector. The PRA established a defined contribution scheme (DCS) that is regulated and supervised by the National Pension Commission (NPC). The Commission has the power to formulate, direct and oversee the overall policy on pension matters in Nigeria. The pension fund administrators are represented by firms that manage the pension funds (NPC, 2019). A DBS promises a specific income based on rules set out by the scheme, while a DCS depends on factors such as the member's contribution and the fund's investment performance. The retirement savings account of a DCS is managed by pension fund managers.
Against this background, this study investigated the adequacy of the financial performance of firms classified as pension fund managers in Nigeria. The rationale for this investigation was to determine whether the PRA has succeeded in dealing with the various challenges brought about by the old DBS. As a result, two research questions were formulated. The first research question is as follows: To what extent do pension cost, revenue, the inflation rate and total contribution affect benefits paid and cash inflow? The second research question is as follows: To what extent do federal government bonds, securities, revenue, total contribution and the inflation rate affect investment income? Nigeria was selected from among other sub-Saharan countries because of its increasing population of retirees, registered pension fund contributors from public and private organisations and pension fund managers, on the one hand, and its less than satisfactory increases in pension fund assets, on the other.

This article is structured in five sections. Section one contains the background to the study. Section two contains a review of relevant literature and a discussion of hypothesis development for the study. Section three presents the methodology that was adopted for the study. Section four contains the empirical results of the data analysis and a discussion of the results. Section five presents policy implications of the study findings.

Literature Review and Hypothesis Development

Unfunded pension schemes are subject to demographic, wage and longevity risks, since expenditure is financed by the contributions of the working population. For this reason, Alonso-Garca (2017) and Wanger (2021) urge policymakers to ensure that pension contributions by workers during their working life correspond with pension rates at retirement and are managed well, to the extent that there is adequate liquidity. These authors posit that this financial performance requirement prompted most developing countries, particularly Nigeria, to transition from a DBS to a DCS. Hsin and Mitchell (1995) found that administrative cost in pension fund management rises more slowly as the size of an individual pension plan grows, because of investments. As a pension fund expands, the cost incurred on the pension plan is less. The authors argue that a larger pension plan allows for substantial cost saving. Moreover, Caswell reports a similar pattern as pension assets and the number of contributors increase. Cooper et al. (1984) used the Cobb-Douglas cost function to determine pension system operating expenses and obtained evidence that the number of contributions, net pension assets and the annual contribution value appear to exhibit economies of scale. Wang and Zhu (2022) argue that, owing to the long accumulation period of a DCS, the inflation rate cannot be ignored during the investment period. They note that inflation can be affected by fluctuations in the financial market and hence propose an optimal asset allocation problem for DCSs, with stochastic wages under inflation risk.
In addition, Xiaohua (2022) found that in almost all provinces of China, the revenue of public pension funds that are defined benefit schemes does not cover expenditure, which points to challenges in the financial performance of the China Pension Insurance System. Against this background, the following hypothesis is posited:

Hypothesis 1: Pension cost, revenue and the inflation rate are unlikely to have a statistically significant relationship with benefits paid.

Wanger (2021) explains the rationality assumption of neoclassical economics as applied to DCS pension participants. According to Wanger, the degrees of rationality are often limited by demographic and market realities, and the levels of education and information at the disposal of the contributors often make a mockery of the rationality assumption. He further posits that the efficient market hypothesis (EMH), based on its postulation, supports fairness in pension payments, with a relationship to economic growth in the markets in which pension funds have been investing. The EMH posits that information is a major limitation in financial decision-making, particularly in the case of financially illiterate investors. For any pension system, and particularly for a DCS, the EMH points to its ability to combine goals with constraints (Wanger, 2021). Wanger (2021) notes that pension design can be examined using target returns, asset allocations, cash flows, fund manager selection and incurred expenses, which translate into forgone opportunities and the design risk composition. Tonks's study (2005) on the performance persistence of pension fund managers in the UK showed strong evidence of significant performance of fund managers in the short term and weaker evidence of persistent performance in the long term. Consequently, based on the above-mentioned literature, the following hypothesis is formulated:

Hypothesis 2: Pension cost, revenue and the inflation rate are unlikely to have a statistically significant relationship with inflow contribution.

Alonso-Garca (2017) and Wanger (2021) claim that there are challenges that hinder the sustainability of pay-as-you-go public pension systems. These challenges include an inadequate income for pensioners in their retirement phase; the absence of a fair level of payment in respect of the contributions paid by participants of pension systems; and pension systems that are not financially sustainable. Boado-Penas et al.
(2020) assert that countries are better off under a mixed pension system combining a DBS and a DCS. Moreover, Wanger (2021) argues that a DBS guarantees retirement income adequately but is risky and expensive to manage, whereas a DCS comes with the risk of inadequate retirement income, since the portfolio value at the retirement stage may not be sufficient to provide an adequate retirement income unless DCS investors apply optimal allocation strategies and raise the contribution rate (Forsyth & Vetzal, 2019). Bulow (1982) observes that a corporate pension liability, which is an employment benefit, can be analysed based on the valuation of an ordinary corporate bond, which depends on the terms of the contract, the dates and amounts of interest and principal payments, call prices, the seniority of the debt and property alienation to security holders. Based on the above-mentioned literature, the following hypothesis is formulated:

Hypothesis 3: Federal government bonds, securities, contribution, revenue and the inflation rate are unlikely to have a statistically significant relationship with investment income.

Financial Performance Adequacy of Pension Fund Managers

The measurement of organisational performance is tied to organisational purpose. Thus, the purpose of pension fund managers is to maximise pension fund value, subject to liability-related and operational risk constraints (Ambachtsheer et al., 1998). There are three drivers of fund performance, namely, fund size, the proportion of assets that is passively managed and the quality of the fund's organisation design (Ambachtsheer et al., 1998). Tonks (2005) found that the average performance of pension funds, relative to external benchmarks, is rather poor. Regarding the performance persistence of pension fund managers, Blake et al. (1999) argue that, in the UK, pension funds that have the same single fund manager over their length of operation are likely to introduce survivorship, although pension funds may continue to hire the same fund management house if its operational performance satisfies the pension fund trustees. The authors express the view that survivorship bias is likely to affect performance evaluation.

Furthermore, Wanger (2021) identifies parameters for evaluating a defined contribution pension design, which include automatic enrolment (fighting procrastination); regular dynamic asset allocation adjustment until retirement; a higher replacement ratio; workplace financial education; the median number of asset-allocation changes; different saving rates for males and females; and life cycle (target-date) funds. Wanger (2021) also identifies some of the main defects of a DCS, which include its voluntary outlook, inherent default behaviour, endorsement effect (herding) and inertia. Davis (2005) illustrates the relevance of the DCS using optimal investment, by choosing a trade-off between low risk and higher return. The author states that pension funds derive major benefits from international investment, but 60 per cent of most countries' pension assets are in their home markets.
Overview of the Pension Industry in Nigeria

The NPC regulates and supervises the Nigerian pension industry in a transparent and consultative manner, specifically through regulatory and supervisory activities that cover surveillance; compliance and enforcement; investment monitoring; and the maintenance of a databank on pension matters. All regulatory and supervisory activities are targeted at achieving a sound and sustainable pension industry (NPC, 2020). A breakdown of the pension fund operators that are pension managers is provided in Table 1 (source: NPC annual report 2020).

Membership of Pension Schemes

The total membership of the pension schemes increased from 8 469 257 members over the review period.

The Dependent Variables

The dependent variables in the study were benefits paid (Model 1), inflow contribution (Model 2) and investment income (Model 3).

The Independent Variables

The independent variables for Models 1 and 2 were other pension costs, administrative pension costs, revenue, inflation rate and total contribution. The independent variables for Model 3 were federal government bond (Treasury bill), other securities, revenue and total contribution, as well as control variables, such as the inflation rate, gross domestic product (GDP) and the number of members, that is, contributors to the pension funds.

The Results of the Data Analysis

The results of the augmented Dickey-Fuller test (unit root test) are presented in this sub-section. This test was applied to indicate whether the variables were stationary or not.

Results of the Augmented Dickey-Fuller (ADF) Test (Unit Root Test)

Tables 3 and 4 indicate the results of the ADF test in respect of the variables in Models 1 and 2, and in Model 3, respectively. The unit root test results in respect of Models 1 and 2, as shown in Table 3, indicate that the variables benefits paid, inflation rate, GDP, inflow contribution, other pension costs and administrative pension costs are stationary at level, that is, they are I(0) series. The other variables, namely revenue and total contribution, are stationary at first difference, I(1); that is, their order of integration is one.

The unit root test results in respect of Model 3, as shown in Table 4, indicate that the variables investment income, federal government bond, revenue, total contribution and inflation rate are stationary at first difference, I(1), which suggests a long-term effect. However, other securities and GDP are stationary at level, I(0). Based on this result, ARDL regression is very important. Table 5 shows that the F-statistics for Models 1, 2 and 3 are greater than the upper bounds at a five per cent level of significance; therefore, there is a long-term statistically significant relationship in respect of Model 3, whereas there are short- and long-term statistically significant relationships in respect of Models 1 and 2.

Results of the ARDL Regression Analysis for Models 1 and 2

This sub-section presents the results of the ARDL regression analysis of Models 1 and 2. The results are indicated in Tables 6 and 7.
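The stationarity classification reported in Tables 3 and 4 can be reproduced with the augmented Dickey-Fuller test in statsmodels; the null hypothesis is that the series has a unit root. The data file and column names below are hypothetical placeholders, not the actual NPC dataset.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    """ADF test at level (I(0) check) and at first difference (I(1) check)."""
    for label, s in ((f"{name} level", series.dropna()),
                     (f"{name} first difference", series.diff().dropna())):
        stat, pvalue, *_ = adfuller(s, autolag="AIC")
        print(f"{label}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# df = pd.read_csv("npc_series.csv")            # hypothetical time series file
# for col in ["benefits_paid", "inflow_contribution", "revenue", "total_contribution"]:
#     adf_report(df[col], col)
```

A p-value below 0.05 at level indicates an I(0) series; failing at level but rejecting the unit root after first differencing indicates an I(1) series, the mixed pattern that motivates the ARDL bounds-testing approach used here.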
Table 6 provides the results of the ARDL regression analysis of the financial performance adequacy of pension fund managers, using benefits paid as the dependent variable and the inflation rate, other pension costs, administrative pension costs, revenue and total contribution as the independent variables. Table 6 indicates that if other pension costs incurred are higher, then benefits paid will be lower in both the short and long term, because other pension costs have a negative statistically significant relationship with benefits paid. Consequently, this may lower the performance of pension fund managers, which may have an impact on what is deemed adequate financial performance for future sustainability. However, administrative pension costs have a positive statistically significant relationship with benefits paid in both the short and long term, which shows persistence in the performance of pension fund managers. The result also implies that if higher benefits are paid, there may be a higher cost of pension administration. The result supports Hsin and Mitchell's (1995) view that reducing the cost of public pension funds without cutting the quality of the retirement services provided by these plans is an important issue of public concern globally. Moreover, revenue generated increases the benefits paid to pensioners or retirees; the total contribution from members also increases the benefits paid.

Table 7 illustrates the ARDL regression analysis of the financial performance adequacy of pension fund managers, using inflow contribution as the dependent variable and the inflation rate, other pension costs, administrative pension costs, revenue and total contribution as the independent variables. The results indicate that other pension costs have a negative statistically significant relationship with inflow contribution. This finding suggests that the lower the other pension costs incurred, the lower the inflow from contributions, in both the short and long term. However, when administrative costs increase, the inflow from contributions also increases.

This result is consistent with Alda's (2016) finding, in respect of conventional and socially responsible pension funds in the UK, that pension contributory inflow is a determinant of the performance ability of pension fund managers. The evidence implies that when the inflow of contributions decreases, smaller amounts are incurred on other expenses related to pension costs, while higher administrative costs increase the inflow from contributors. In addition, revenue generated increases the inflow from contributions. However, total contribution reduces the inflow owing to the other pension costs and administrative pension costs incurred; these costs reduce the inflow to pension funds.
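An ARDL specification of the kind estimated in Tables 6 and 7 can be sketched with statsmodels (version 0.13 or later). The data frame, column names and the ARDL(1,1) lag choice are assumptions made for illustration; the paper does not report the exact lag orders used.

```python
import pandas as pd
from statsmodels.tsa.ardl import ARDL

def fit_benefits_ardl(df: pd.DataFrame):
    """ARDL of benefits paid on pension costs, revenue, total contribution and inflation."""
    y = df["benefits_paid"]
    X = df[["other_pension_costs", "admin_pension_costs", "revenue",
            "total_contribution", "inflation_rate"]]
    model = ARDL(y, lags=1, exog=X, order=1, trend="c")   # ARDL(1, 1, ..., 1) as an example
    res = model.fit()
    print(res.summary())
    return res
```

The short-run coefficients come directly from the fitted lag terms, while the long-run effects are obtained from the cointegrating (error-correction) form, which appears to be what the 'Coint. equ.' column in Table 6 refers to.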
Results of the Regression Analysis for Model 3

The results in respect of the relationship between federal government bonds, other securities, revenue, total contribution, the inflation rate and investment income, as indicated in Hypothesis 3, are presented below. Table 8 indicates the results of the regression analysis in relation to the last hypothesis, where the effect of total assets, namely federal government bond, other securities, revenue and total contribution, and the control variables, namely the inflation rate, the number of members and GDP, on investment income was determined. The analysis revealed that federal government bond, other securities, revenue, total contribution and all the control variables have a positive, significant relationship with investment income. This result suggests that if pension fund managers invest more of the pension funds in federal government bonds such as Treasury bills and in other securities in the capital markets, there is an increase in investment income, together with increases in the inflation rate, GDP and the number of member contributors registered with pension fund managers, as well as in revenue and the total contribution to the pension fund. The result provides evidence of what is presently happening in the country. The federal government is investing heavily in Treasury bills and securities in the capital markets, using the assets of pension funds. The pension contributions from members also increase as the investment income increases. More employees and employers are registering with pension fund managers, leading to an increase in contributions and investment income. Moreover, the revenue from the federal government is higher as a result of higher investment income, given the payment of the returns on investments to the various contributors. Further, the inflation rate is higher with increases in GDP, resulting in a higher value of investment income.

Conclusion

This article examined the financial performance adequacy of pension fund managers in the context of a developing country, using data collected from the annual reports of the NPC and the NBS in Nigeria. From the results indicated in Table 6 it was concluded that there is a statistically significant relationship between benefits paid and all pension costs and revenue. This result provides evidence that, in both the short and long term, higher other pension costs will lead to lower benefits paid, because pension costs have a negative statistically significant relationship with benefits paid. Therefore, insofar as the pension costs exceed the inflow allowed, the benefits paid to pensioners may be lower; this may hinder the performance of pension fund managers. However, administrative costs incurred on pension funds have a positive statistically significant relationship with benefits paid in both the short and long term.

The results indicated in Table 7 provide evidence that other pension costs decrease if the inflow from contributions decreases. However, administrative pension costs increase when the inflow from contributions rises. This result implies that pension fund managers incur more costs on administration, which may affect their financial performance adequacy for future sustainability.
From the results indicated in Table 8 it was concluded that federal government bonds, other securities, the total contribution to pension funds, the inflation rate, GDP and the number of members have a positive statistically significant relationship with investment income. This result implies that pension fund managers diversified their portfolios by investing in federal bonds such as Treasury bills and in other securities locally. This result is consistent with Davis's (2005) finding that 60 per cent of countries around the world invest their pension assets in local markets. Defau and De Moor (2021) note that pension funds have started to invest more in alternative assets such as Treasury bills and securities, that is, they show increasing interest in portfolio diversification. Furthermore, Tonks (2005) found that there is persistence in abnormal returns by UK fund managers in the short term and a reduction in returns in the long term.

Policy Implication

From the evidence provided in this study it is clear that there is a need for regulatory bodies, such as the NPC and the Securities and Exchange Commission, to regulate pension fund assets. The DCS that is currently used in Nigeria is likely to face market turbulence, and pension fund contributions may be inadequate owing to the high rate of job losses in Nigeria. Therefore, the regulatory bodies need to implement greater enforcement in the form of rules, guidelines and regulations concerning the investment of pension fund assets. Pension fund managers should receive training in order to improve their performance in respect of the investment of pension fund assets.

Greater attention should be paid to the risk associated with the investment of pension fund assets to avoid investment losses. The regulatory bodies should introduce more prudential guidelines regarding the pension costs that are incurred in pension fund management, so that a balance can be achieved between costs incurred, inflow contribution and benefits paid to pensioners or retirees. Pension fund managers must be monitored by regulatory bodies to ensure that they invest in real assets that could hedge against inflation, instead of nominal assets that could be eroded by unexpected increases in the inflation rate.

Future research could focus on the determinants of the selection of pension fund managers by pension fund contributors, to identify the factors that play a role when pension fund contributors decide which firms would be able to manage their pension funds best. Another area of further research is the effect of the regulatory bodies of pension funds on the performance of pension fund managers. These two research areas have a nexus with the financial performance adequacy of pension fund managers in the Nigerian context.
Retirement savings account (RSA) schemes dominated the total pension scheme membership at 8 891 236, representing 99.35 per cent of members. The membership of approved existing schemes (AESs) and closed pension fund administrators (CPFAs) accounted for the balance of 0.65 per cent of members, at 40 951 and 17 349 members, respectively (NPC, 2019). The RSA registrations increased from 8 410 184 as at December 2018 to 8 891 236 as at 31 December 2019, representing a growth rate of 5.72 per cent (481 052). This growth was attributed to several factors. These factors include an increase in the level of compliance by the private sector as a result of the various steps taken by the NPC to improve compliance and coverage (such as the engagement of recovery agents), as well as the marketing strategies of the pension fund managers. The enforcement of the obligation that the Public Procurement Act of 2007 places on bidders for federal government contracts to provide evidence of compliance with the PRA also contributed to the increase in membership (NPC, 2019). Following the launch of the Micro Pension Plan (MPP) in March 2019, registration for the MPP commenced. The RSA registration count for MPP participants stood at 39 686 as at 31 December 2019. It was expected that the number of RSA registrations would continue to grow, to improve scheme membership, through the sustained implementation of the MPP in 2020 (NPC, 2019).

Table 6: ARDL regression analysis of the financial performance adequacy of pension fund managers; dependent variable: benefits paid (Model 1). Columns: variable, short term, long term, cointegrating equation. ** and *** indicate significance at the 5% and 1% levels, respectively; figures in parentheses represent the standard errors of the variables.

Table 7: ARDL regression analysis of the financial performance adequacy of pension fund managers; dependent variable: inflow contribution (Model 2). ** and *** indicate significance at the 5% and 1% levels, respectively; figures in parentheses represent the standard errors of the variables.

Table 8: Regression analysis for Model 3; dependent variable: investment income. * and *** indicate significance at the 5% and 1% levels, respectively. The numbers with significance levels are coefficient values, the middle numbers are the standard errors, and the numbers in parentheses refer to T-statistics.
Relation of Helicobacter Pylori and Hyperemesis Gravidarum in a Sample of Pregnant Women in the Maternity Teaching Hospital of Erbil City

Background and objectives: Nausea and vomiting in early pregnancy have a great impact on the general health status of pregnant women, especially when they present as hyperemesis gravidarum. Helicobacter pylori infection is prevalent in the Iraqi community. The aim of the study is to find out whether seropositivity for immunoglobulin G antibodies to Helicobacter pylori is related to hyperemesis gravidarum.

Methods: A study was carried out in the Maternity Teaching Hospital in Erbil city, Kurdistan region, Iraq, from 1 May 2019 to 30 April 2020 on a sample of 80 pregnant women: 40 pregnant women with hyperemesis gravidarum and 40 pregnant women as controls. Diagnosis of hyperemesis gravidarum was made on the basis of the clinical presentation and investigation findings. Serum electrolyte and immunoglobulin G antibody tests were done for the studied women.

Results: In the current study, 60% of pregnant women with hyperemesis gravidarum had positive immunoglobulin G for Helicobacter pylori, as compared with 35% of the control group. The serum levels of sodium, potassium and chloride were significantly lower in 40%, 50% and 25% of pregnant women with hyperemesis gravidarum, respectively, compared with 10%, 17.5% and 7.5% in the control group. Low educational level and low socioeconomic status of pregnant women were significantly associated with hyperemesis gravidarum. Hyperemesis gravidarum was more common in pregnant women with a normal body mass index.

Conclusions: Helicobacter pylori infection is more common in pregnant women with hyperemesis gravidarum. The development of hyperemesis gravidarum in pregnancy leads to obvious electrolyte imbalance.

Introduction

Nausea and vomiting predominantly affect pregnancies. [1][2][3] Hyperemesis gravidarum (HEG) is a disease characterized by continuous severe nausea and vomiting in pregnant women that lead to ketosis, with prevalence rates of 0.3-2%. 4 It leads to weight loss, nutrient deficiency, dehydration, ketonuria, and electrolyte and acid-base imbalance. The common complications of untreated HEG are Wernicke's encephalopathy, coagulopathy, depression, longer hospital stay and poor pregnancy outcomes like preterm labour, small for gestational age, fetal maldevelopment and fetal congenital anomalies. 6,7 The exact etiology of hyperemesis gravidarum is mostly unknown; however, some explanations exist, such as hormonal changes, changes in the gastrointestinal tract and genetic factors, 8 in addition to other risk factors such as increasing placental mass. Helicobacter pylori infection has also been suggested to have a role in the development of many pregnancy disorders like maternal anemia, maternal thrombocytopenia, intrauterine fetal growth restriction and miscarriage. 12 Many authors have explored the relationship between H. pylori infection and increased risk and severity of HEG in pregnancy. [13][14][15] Iraqi studies on the relationship between HEG and H. pylori infection are scarce, although the prevalence of H. pylori infection in the Iraqi general population ranges between 26.6% and 55.8% in different age groups. 16,17 For these reasons, we conducted this study aiming to find out whether seropositivity for IgG antibodies to H. pylori is related to hyperemesis gravidarum.
Patients and methods The design of present study was a case control study carried out in Maternity Teaching Hospital in Erbil city, Kurdistan region-Iraq through one-year period from 1 st of May 2019 to 30 th of April 2020. The inclusion criteria included all pregnant women admitted in Maternity Teaching Hospital in Erbil city between five to fifteen weeks of pregnancy complaining from symptoms of hyperemesis gravidarum, while the control group was pregnant women with same gestational age but without manifestations of HEG. The exclusion criteria were pregnant women with history of thyroid disorders, multiple pregnancy, gestational trophoblastic disorders, hepatobiliary disorders, gastric or any intestinal diseases and women refused to participate in the study. An ethical approval was taken from the Ethical Committee of Kurdistan Higher Council of Medical Specialties and respecting the confidentiality of patients' data. The data were collected directly from pregnant women of both groups and filled in prepared questionnaire. Diagnosis of hyperemesis gravidarum was done depending on clinical presentations severe vomiting (≥3 times per day), not responding to traditional treatment, weight loss (≥5% of body weight). The socioeconomic status was classified according to family income per month; high (>1500 $ per month), middle (500-1400 $ per month) and low (<500 $ per month). The body mass index was classified into normal (BMI<25 Kg/m 2 ), overweight (25-29.9 Kg/m 2 ) and obese (BMI≥30 Kg/m 2 ). The gravidity history of pregnant women was categorized into; primigravidity, 2-4 gravida and ≥5 gravida, while the parity history was categorized into; nulliparity, 1-2 para and 3-4 para. The past medical history included the previous history of medical diseases like hypertension, diabetes mellitus and ischemic heart diseases. A sample of 5 ml blood was drawn from each woman for serum electrolytes and immunoglobulin G antibody tests for H.pylori. The investigations were all done in the laboratory of Maternity Teaching Hospital except for H. pylori IgG antibody, which was done at private laboratory in Erbil city by ELISA. The data collected were analyzed statistically by Statistical Package of Social Sciences software version 22. The Chi square and AMJ, Vol. 8 Discussion The burden of HEG does not include the physical impact on pregnant women only, but it also has prevalent social, psychological and economic implications on women and the family, in addition to national health burden. 18 In present study, 60% of HEG pregnant women had positive IgG of H. pylori as compared to35% of controls. This finding is consistent with results of Ahmed`s study 19 23 Similarly, Hussein study also found that the positivity of H. pylori stool antigen test was related to pregnant women with hyperemesis gravidarum. 24 However, recent study in Egypt reported higher H. pylori IgG serology titer among pregnant women with hyperemesis gravidarum than controls. 25 Although plenty of studies suggesting the relationship between H. pylori infection and hyperemesis gravidarum in pregnancy, many authors failed to found a significant relationship between them. [26][27][28] This inconsistency of relationship might be attributed to differences in definition of HEG, study inclusion criteria and differences in study population. Many mechanisms were suggested for effects of H. pylori to HEG like distorted gastric emptying; decreased gastrointestinal motility and hypersensitivity to gastric or duodenal distention. 
29 The present study found that serum levels of sodium, potassium and chloride were significantly lower among pregnant women with hyperemesis gravidarum. These findings are in agreement with the results of many studies. 30,31 In the current study, there was a highly significant association between primary and secondary educational level of pregnant women and HEG. This finding coincides with the results of the study by Loh et al. 32 Our study found a significant association between low socioeconomic status of pregnant women and HEG. Consistently, the study by Karaca et al. 33 reported a significant relationship between hyperemesis gravidarum and H. pylori in pregnant women, with an important role of low socioeconomic status in H. pylori infection. In our study, HEG was significantly more common among pregnant women with a normal BMI. This finding is inconsistent with the results of the study by Kosus et al. 34 The inconsistency might be due to differences in study designs and inclusion criteria between the two studies, in addition to differences in obesity prevalence and dietary patterns between communities. Conclusions Our study found that H. pylori infection is more common in women with hyperemesis gravidarum. The development of hyperemesis gravidarum in pregnancy leads to obvious electrolyte imbalance. Low educational level and low socioeconomic status of pregnant women are risk factors for hyperemesis gravidarum. Conflicts of interest The author reports no conflicts of interest.
2023-05-23T15:02:19.020Z
2023-05-18T00:00:00.000
{ "year": 2023, "sha1": "354c2b42da6b9cbcf7eeadc0c3ae25ad6cb0ce2c", "oa_license": "CCBYNCSA", "oa_url": "https://amj.khcms.edu.krd/index.php/main/article/download/186/176", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d70b91e28f411027a21a891eba4f0fb907ff3336", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
245581609
pes2o/s2orc
v3-fos-license
Complement-Mediated Selective Tumor Cell Lysis Enabled by Bi-Functional RNA Aptamers Unlike microbes that infect the human body, cancer cells are descended from normal cells and are not easily recognizable as “foreign” by the immune system of the host. However, if the malignant cells can be specifically earmarked for attack by a synthetic “designator”, the powerful effector mechanisms of the immune response can be conscripted to treat cancer. To implement this strategy, we have been developing aptamer-derived molecular adaptors to invoke synthetic immune responses against cancer cells. Here we describe multi-valent aptamers that simultaneously bind target molecules on the surface of cancer cells and an activated complement protein, which would tag the target molecules and their associated cells as “foreign” and trigger multiple effector mechanisms. Increased deposition of the complement proteins on the surface of cancer cells via aptamer binding to membrane targets could induce the formation of the membrane attack complex or cytotoxic degranulation by phagocytes and natural killer cells, thereby causing irreversible destruction of the targeted cells. Specifically, we designed and constructed a bi-functional aptamer linking EGFR and C3b/iC3b, and used it in a cell-based assay to cause lysis of MDA-MB-231 and BT-20 breast cancer cells, with either human or mouse serum as the source of complement factors. Introduction In cancer therapy, the preferred targets are cancer cells rather than individual molecules, and the preferred outcome is irreversible destruction rather than reversible neutralization. Many approaches to cancer treatment, including the binding of inhibitors to active protein sites, can only reversibly neutralize targets at the molecular level. However, at the cellular level, when appropriate markers are available, they may be utilized for targeting and destruction of specific cell types, which is a more powerful strategy for the eradication of cancer cells. Unlike infectious microbes, cancer cells are descendants of normal cells, not easily recognizable as "foreign" by the immune system of the host. If the malignant cells can be specifically earmarked by a synthetic "designator", the powerful effector mechanisms of the immune response may be recruited to treat cancer. This approach is conceptually analogous to targeted drug delivery, but the "drugs" being delivered are patient-or hostderived factors and cells that are able to put on a powerful immune response with higher specificity and fewer side effects. In the human complement system, C3 is a key component of innate immunity, as in all pathways of complement activation the pivotal step is the conversion of C3 to C3b [1]. Complement has two major effector mechanisms for host defense, formation of the membrane attack complex (MAC), and opsonization. The MAC generated from C5-C9 can form membrane-penetrating lesions that lead to cell death by causing a rapid loss of cytoplasmic components [2]. Opsonization is the process by which particles become coated with molecules (opsonins) that allow them to bind to receptors on effector cells such as Sequences used to assemble the Tri-molecular and bi-molecular versions of "Trifecta": The tri-molecular construct (Trifecta-t) was assembled from the following three fragments. In Vitro Selection The procedures of pool construction and selection/amplification were modified from those published before [12,13] and described in detail in the Appendix A. 
Selected DNA pools were cloned in the pSTBlue-1 vector (Novagen, St. Louis, MO, USA) using a Perfectly Blunt Cloning Kit (Novagen). Ligated plasmid was transformed into NovaBlue Singles competent cells (Novagen). Plasmid DNA bearing inserts was purified for sequencing. For each selection at least 50 clones were sequenced and analyzed for characterization. Sequencing was performed in the Life Sciences Instrument Core Facility of the University at Albany. Minimization of aptamers was performed using deletion analysis. Molecular Binding Assays Electrophoretic mobility shift assay (EMSA) and filter binding assay were performed as described in [12]. A typical binding mixture with 32 P-labeled RNA contained about 20 fmol of the RNA probe and different amounts of protein (at least 1 pmol) in 20-µL volumes. In competition assays, the competing unlabeled RNA was in excess of protein by at least 10 folds, and both labeled and unlabeled RNA's were presented to the protein target simultaneously. The binding buffer contained 20 mM HEPES, (pH 7.4), 150 mM NaCl, and 10 mM MgCl 2 . Both BSA and yeast RNA at 1 µg/20 µL were added to the binding reaction to prevent nonspecific binding. The binding mixtures were incubated for 45 min at 37 • C before being subjected to filter-binding or electrophoresis. Filter-binding assays were performed with nitrocellulose filters (MilliporeSigma, Burlington, MA, USA) and a Bio-Dot SF (slot format) Microfiltration System (Bio-Rad, Hercules, CA, USA). Specific conditions for individual experiments, such as gel concentration and buffer type, are given in figure legends. Assembly of Multi-Valent Aptamers Subcloned or PCR-generated double strand DNA templates were transcribed in vitro using the DuraScribe T7 transcription kit (Epicentre/Lucigen, Madison, WI, USA) to generate individual aptamers. To assemble bi-functional aptamers all three or two components were mixed at equimolar ratio in 1x binding buffer (20 mM HEPES, 150 mM NaCl and 10 mM MgCl 2 , pH 7.4). The solution was heated at 95 • C for 2 min and then slowly cooled to room temperature over 1 h [14]. The mixture was electrophoresed in an 8% native polyacrylamide gel (acrylamide:bis-acrylamide = 37.5:1) in buffer (45 mM Tris base, 54 mM boric acid, and 2.5 mM MgCl 2 ). Correctly assembled products were purified from the gel and passed through a Sephadex G-50 spin column. Concentration was measured spectrophotometrically. Aptamer Stability Assays The stability of assembly was measured by incubating the "Trifecta" in 1x binding buffer with 0, 2, 4, 6, or 8 M urea for 30 min at 37 • C, followed by electrophoresis in an 8% native acrylamide gel in 1 /2 x TBE buffer. One oligo of the complex was radiolabeled. After electrophoresis the gel was exposed to phosphor screen and scanned by a Storm phosporimager (GE Healthcare, Chicago, IL, USA). Cell Surface Binding Assays MDA-MB-231 cells were seeded in 8-well chamber slides at~3500 cells/well and grown in 10% FBS for 48 h. Cells were washed 3 times with PBS and fixed 10 min in 4% paraformaldehyde in PBS, then washed again 3 times with PBS. EGFR Detection: Nonspecific binding was blocked by incubating the cells for 30 min in PBS containing 1% BSA. Wells were washed 3 times with 0.5 mL PBS, and cells were incubated with a 1:200 dilution of anti-EGFR monoclonal antibody sc-101 in PBS for 1 h. Cells were then washed extensively and incubated with 1:200 anti-mouse Alexa Fluor 594 for 1 h, washed again, and coverslipped in 1:500 DAPI/glycerol diluted 1:1 with PBS. 
Slides were photographed under a microscope at 44 ms (for DAPI) and 118 ms (for Alexa Fluor 594) exposures. Detection of Anti-EGFR Aptamer: Cells were first blocked with Biotin/Avidin blocking solutions (Molecular Probes) following the manufacturer's directions. To block nonspecific binding of nucleic acids, cells were incubated in 5x Denhardt's solution for 20 min, then washed 3 times in PBS and blocked again with 0.13 µg/µL BSA and 0.17 µg/µL torula yeast RNA in PBS containing 5 mM MgCl 2 for 20 min. Anti-EGFR aptamers or control 2'-F Py RNA (at a final concentration of 2.64 µM) were mixed with biotinylated oligonucleotide and denatured by heating 5 min at 70 • C in 100 µL of binding buffer (PBS containing 10 mM MgCl 2 ) and cooled to 25 • C at 1 • C/sec. Then, 7 µL of SA-PE was added, and the mixtures were incubated for 15 min at 25 • C and pipetted to wells containing 100 µL of binding buffer, 0.13 µg/µL BSA and 0.17 µg/µL yeast RNA, and incubated for 15 min. Finally, cells were washed 4 times in 500 µL binding buffer and coverslipped in 1:500 DAPI/glycerol diluted 1:1 with binding buffer. Slides were photographed under a microscope at 82 ms (DAPI) and 300 ms (SA-PE) exposures. Detection of C3c-containing complement molecules: Nonspecific binding of protein to cells was blocked by incubating cells in 0.1 µg/µL BSA in binding buffer overnight. Nonspecific binding of nucleic acids was blocked by incubating cells in 0.75 µg/100 µL poly DI/DC in binding buffer overnight, followed by incubation in 5x Denhardt's solution for 20 min, then washing 3 times in PBS and blocking again with 0.13 µg/µL BSA and 0.17 µg/µL torula yeast RNA in binding buffer for 20 min. Bi-functional aptamer ("Trifecta") or control 2'-F Py RNA in binding buffer was divided between wells at a final concentration of 0.25 µg of each aptamer per well. Cells were incubated with RNA for 45 min and washed gently 2 times in binding buffer. Then 10% human serum in 100 µL of binding buffer was added to wells and cells were incubated for 45 min. Cells were washed gently twice in binding buffer, then 1:250 anti-C3c was added to all but the "no-primary antibody" control well, and incubated for 45 min. (The primary antibody was previously adsorbed to cells to reduce background.) Cells were washed 4 times with binding buffer and incubated with 1:250 anti-mouse Alexa Fluor 594 in 100 µL of binding buffer for 45 min, then washed 4 times gently and coverslipped in 1:500 DAPI-glycerol diluted 1:1 in binding buffer. Cell Viability Assays Mid-log phase cells were trypsinized and seeded at~3000 cells/well of a 96-well tissue culture plate. Cells were allowed to grow for 12 h, followed by incubation with 1 µM "Trifecta" or its mutated variants in a medium containing 25% human serum. Fresh RNA-containing medium was replenished every 24 h for three days. Cells were then washed with 200 µL 1x PBS two times, and fixed with 10% formalin solution for five min. Fixed cells were washed once with 200 µL water and stained with 100 µL 0.2% crystal violet solution for half an hour. Stained cells were washed three times with 200 µL water. Bound dye was eluted for 30 min with 100 µL 0.1% SDS solution in water. All incubations were carried out at room temperature. The concentration of eluted dye was determined by reading absorption at 540 nm with a multiplate reader (BioTek SynergyH1, Agilent, Santa Clara, CA, USA). Numbers of viable cells were determined using a standard curve. 
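The last step of the crystal violet protocol above, converting the eluted-dye absorbance at 540 nm into numbers of viable cells via a standard curve, can be sketched as follows. This is only an illustrative Python sketch: the calibration points, the sample readings, and the assumption of a linear OD540 versus cell-number relationship are placeholders, not values taken from the paper.

```python
# Minimal sketch of the standard-curve step described above: crystal violet
# absorbance at 540 nm is converted to viable cell numbers by inverting a
# calibration line. All numerical values here are hypothetical placeholders.
import numpy as np

cells_std = np.array([0, 1000, 2000, 4000, 8000])   # known cells per calibration well
od_std = np.array([0.05, 0.12, 0.20, 0.38, 0.72])   # measured OD540 of those wells

slope, intercept = np.polyfit(cells_std, od_std, deg=1)   # OD = slope*cells + intercept

od_samples = np.array([0.31, 0.22, 0.18])           # e.g. treated wells
viable_cells = (od_samples - intercept) / slope
print(np.round(viable_cells))
```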
For experiments with mouse serum, cells were treated as described above, except that 25% mouse serum was used in the medium. Cells were photographed using a 20x objective lens on a Nikon light microscope (Model TS100, Nikon Instruments, Melville, NY, USA) with attached digital camera and SPOT basic software (Spot Imaging, Sterling Heights, MI, USA). Generation and Refinement of the Utility Aptamers Previously, we had isolated an RNA aptamer for C3 and used it to commandeer the C3-based opsonization-phagocytosis pathway [12]. However, the natural RNA molecules were labile in the extracellular environment. For this study, we performed three different in vitro selection experiments to isolate six distinct aptamers for C3 and its derivatives in the form of RNase resistant 2'-fluoro pyrimidine (2'-F Py) modified RNA. In these experiments, the initial pool was either completely randomized or derived from the sequence of the previously isolated AptC3-1 [12], and the target was either C3 or iC3b. The results of these experiments are described in detail in the Appendix A, and the sequences of these aptamers are presented in Figure 1. Interestingly, none of the newly isolated aptamers share sequence homology with the natural RNA aptamer AptC3-1. Figure 1. Six distinct aptamers for C3 and its derivatives. Sequences for six 2'-F Py RNA aptamers are presented under their names. For clarity, only the variable region (in uppercase) and the relevant constant regions (in lowercase) are shown. The full-length sequences are given in Materials and Methods. Among multiple isolates of the same aptamer, some harbor point mutations. The sequences shown are the predominant form used in binding assays.
Underlined sequence of each aptamer indicates a shorter version that retained binding activity and is portable as a modular domain in the construction of bi-functional composites. When the minimized version was synthesized by in vitro transcription, a GGG tri-nucleotide sequence was added at the 5' end to form a transcription start site, and a CCC tri-nucleotide is often added at the 3' end to stabilize the stem. These aptamers would be used as utility aptamers in the bi-functional molecular adaptors. To investigate the binding affinity and specificity of the six aptamers, electrophoretic mobility shift assays (EMSA) and filter binding assays with radioactively labeled full-length RNA aptamers were performed. The dissociation constants of these aptamers are in the range of 10-60 nM. As an example, the Kd of the aptamer AptC3-III (which was chosen later for cell viability assays) was 17 nM for C3, and 14.5 nM for iC3b (Figure 2A). Affinities of the other aptamers are summarized in Table A1 of the Appendix A.
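Dissociation constants of the kind quoted above are commonly extracted from such titrations by fitting the bound fraction to a one-site binding isotherm. The sketch below assumes that simple model and uses made-up titration points chosen so that the fit lands near the 17 nM reported for AptC3-III; it is not the authors' analysis code or data.

```python
# One-site binding fit of the type used to estimate K_d from filter-binding or
# EMSA titrations (protein in excess over trace-labelled RNA). The titration
# points are illustrative, not the paper's raw data.
import numpy as np
from scipy.optimize import curve_fit

def one_site(P, Kd, Bmax):
    """Fraction of RNA bound at free protein concentration P (nM)."""
    return Bmax * P / (Kd + P)

protein_nM = np.array([1, 3, 10, 30, 100, 300], dtype=float)
frac_bound = np.array([0.05, 0.13, 0.33, 0.60, 0.82, 0.92])  # hypothetical

popt, pcov = curve_fit(one_site, protein_nM, frac_bound, p0=[20.0, 1.0])
print(f"K_d = {popt[0]:.1f} nM, Bmax = {popt[1]:.2f}")
```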
Importantly, although they were selected against human targets, these aptamers were able to bind mouse orthologs equally well. Mouse and human C3, C3b and iC3b have 77% identity and 88% positive residues (with matching charges) based on BLASTp alignments. In EMSA, when the purified human complement proteins were replaced by mouse serum, all aptamers formed a retarded complex with similar mobility to the complexes generated by purified human proteins. In Figure 2D, results for three different aptamers are given as examples. This human-mouse portability would facilitate the future use of these aptamers in animal models. Adaptation of a Targeting Aptamer Epidermal growth factor receptor (EGFR) is a tyrosine protein kinase and a cell surface glycoprotein implicated in epithelial tumorigenesis [15]. Binding of epidermal growth factor to its receptor triggers EGFR autophosphorylation and drives a complex intracellular signal transduction pathway, which modulates a set of cancer-related phenotypes. EGFR has been used as a tumor biomarker and a drug target for monoclonal antibodies (e.g., cetuximab and panitumumab) and low-molecular weight tyrosine kinase inhibitors (e.g., gefitinib and erlotinib). An anti-EGFR aptamer, E07, has been isolated in the form of 2'-F Py RNA [16], and we modified this aptamer for use as the targeting aptamer in the bi-functional constructs to connect a target cell to the complement system. MDA-MB-231 cell line was used as our first cellular target. This cell line demonstrates an invasive phenotype and high metastatic potential. It expresses EGFR on its surface, and resembles the Claudin-low subtype of triple-negative breast cancer (TNBC) [17,18]. The majority of TNBCs (>50%) are EGFR positive, yet individual tumor cells frequently display or develop resistance to EGFR inhibitors [19,20], and clinical trials of EGFR inhibitors in TNBC have been disappointing [21]. Our method employs a mechanism different from solely inhibiting EGFR activity, which might function in resistant tumors that retain membrane EGFR. The E07 aptamer is 93-nt in length. Before incorporating E07 into a bi-functional construct, we created a minimized version, miniE07, and investigated its binding to the MDA-MB-231 cell surface. Using an anti-EGFR antibody, we confirmed the expression of EGFR on MDA-MB-231 breast cancer cells ( Figure 3A). To detect the EGFR aptamer, we added a 24-nt extension to the 50-nt miniE07 to allow for association of a biotinlabeled complementary DNA oligonucleotide with binding of SA-PE to biotin as the signal in fluorescence microscopy. We confirmed binding of this 74-nt version to human EGFR protein by EMSA and filter binding assay using recombinant EGFR-Fc, a 190.2 kDa disulfide-bonded homodimer containing residues 25-645 of human EGFR and residues 100-330 of Fc ( Figure 3B). A mutant version of miniE07 (see Materials and Methods for sequence) was synthesized for the control experiment. As shown in Figure 3C, our refined EGFR aptamer was able to bind to the MDA-MB-231 cell surface. on MDA-MB-231 breast cancer cells ( Figure 3A). To detect the EGFR aptamer, we added a 24-nt extension to the 50-nt miniE07 to allow for association of a biotin-labeled complementary DNA oligonucleotide with binding of SA-PE to biotin as the signal in fluorescence microscopy. 
We confirmed binding of this 74-nt version to human EGFR protein by EMSA and filter binding assay using recombinant EGFR-Fc, a 190.2 kDa disulfide-bonded homodimer containing residues 25-645 of human EGFR and residues 100-330 of Fc (Figure 3B). A mutant version of miniE07 (see Materials and Methods for sequence) was synthesized for the control experiment. As shown in Figure 3C, our refined EGFR aptamer was able to bind to the MDA-MB-231 cell surface. In total, 100 nM of the protein was used in an EMSA (5% ntive polyacrylamide gel, acrylamide:bis-acylamide was 70:1 with 1 /2x TGB buffer with 2.5 mM MgCl 2 ). The label on EGFR aptamer was α 32 P-ATP. (C) Binding of E07s-e on the surface of MDA-MB-321 cells. The EGFR aptamer was detected using a biotinylated oligonucleotide. SA-PE: Streptavidin-Phycoerythrin. Micrographs are at 200× magnification (merged images using Texas red and DAPI filters). Scale bars in both panels indicate 10 µm. Protocols for the two cell surface binding assays are recapitulated in sketches accompanying the micrographs. All experiments were repeated at least three times. Construction of Bi-Functional Aptamers Construction of multivalent aptamers requires correct folding of each individual aptamer in the composite. Simultaneous binding of two or more targets to a multivalent construct may be prevented by steric hindrance. These two issues necessitate the testing of multiple molecular designs. To facilitate quick and easy swap of individual aptamers to generate alternative configurations, we used a non-covalent three-way junction (3WJ) to articulate the aptamers. The 3WJ domain of phi29 pRNA [14] has been used as a scaffold to connect different functional elements. Utilizing this system, each aptamer was synthesized individually with a tail that forms one strand of the 3WJ and assembled in vitro. We made more than a dozen constructs to pair the EGFR aptamer with each of the six aptamers for C3 and its derivatives. These constructs were screened for simultaneous binding to the target molecule (EGFR) and a utility molecule (iC3b). We chose iC3b to represent the utility molecules because it comprises the majority of C3 derived opsonins in vivo. Among different configurations tested, one outperformed the rest in triple complex formation and was nicknamed "Trifecta". It contains AptC3-III, which has the highest affinity and the shortest portable sequence among the six. Secondary structures of the bivalent "Trifecta" in two slightly different forms are depicted in Figure 4A. In both constructs, miniE07 and miniAptC3-III were placed, respectively, as the extension of H1 and H3 helices of the 3WJ [14]. We first used a tri-molecular version to examine molecular binding in EMSA. In this construct the a strand of the 3WJ was attached to the 3' end of miniE07, the c strand of the 3WJ was attached to the 3' end of the minimized AptC3-III aptamer, and a DNA oligo was used for the b strand of the 3WJ. This b-3WJ oligo enabled us to label the trimolecular construct and establish the binding ability of each aptamer domain in EMSA to demonstrate triple complex formation ( Figure 4B). Subsequently, for cell-based assays, we made the bimolecular construct by covalently connecting the b-3WJ to AptC3-III. Three G's were added at the 5' end of one strand to enable efficient transcription by the polymerase, and three C's were added at the 3' end of the other strand to pair with them. 
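As a practical illustration of the equimolar assembly described above, the pipetting arithmetic can be laid out as below. The stock concentrations, target concentration and final volume are hypothetical, and the strand names are simply labels for the a-, b- and c-strands of the 3WJ mentioned in the text; the annealing conditions (95 °C for 2 min, then slow cooling to room temperature over 1 h) are those given in the Methods.

```python
# Sketch of the volume calculation for mixing the 3WJ strands at equimolar
# ratio before annealing. All concentrations and volumes are placeholders.
stocks_uM = {"miniE07_a_strand": 42.0,     # hypothetical stock concentrations (uM)
             "AptC3III_c_strand": 55.0,
             "b3WJ_oligo": 100.0}
final_uM = 2.0     # target concentration of each strand in the assembly mix
final_uL = 50.0    # total assembly volume (1x binding buffer)

volumes = {name: final_uM * final_uL / conc for name, conc in stocks_uM.items()}
buffer_uL = final_uL - sum(volumes.values())

for name, v in volumes.items():
    print(f"{name}: {v:.2f} uL")
print(f"1x binding buffer to {final_uL:.0f} uL: {buffer_uL:.2f} uL")
```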
These extra-nucleotides did not result in significant difference in terms of binding activity between the two constructs ( Figure 4B). The stability of the non-covalently articulated constructs was tested against both strand dissociation and strand degradation. The stability of assembly was measured by incubating either "Trifecta" in 0, 2, 4, 6, and 8 M urea for 30 min at 37 • C followed by electrophoresis in a native gel ( Figure 4C). Because these aptamers will be used in the extracellular environment, we also confirmed their stability in the presence of RNase. As shown in Figure 4D, we did not observe significant degradation even after 72 h incubation in 50% FBS. Genes 2021, 12, x FOR PEER REVIEW 11 of 19 Deposition of C3 Family Proteins on the Target Cell Surface Many of the cell surface proteins that can be used as molecular targets are receptors that undergo ligand-induced internalization [22,23], prompting a question regarding the feasibility of our general strategy. EGFR is a representative in this category, and the choice of E07 as the target aptamer in our bi-valent constructs helped us investigate whether the opsonin-aptamer-target complex could remain on the cell surface for sufficient time to trigger downstream effects. E07 is known to be internalized into EGFR-expressing cells [16]. When A431 cells were incubated with E07 at 37 • C for 30 min, about 23% of E07 was internalized [16]. In contrast, 70 % of EGF was internalized in 15 min [24], indicating that most of the aptamer could be retained on the cell surface long enough to recruit the effector mechanism. However, before testing "Trifecta" on living cells we used immunochemical methods with fixed cells to visualize C3 and its derivatives recruited to the surface of MDA-MB-231 cells (because most fluorescent dyes are toxic to living cells). Specifically, after the cells were fixed, nonspecific binding of protein and nucleic acids were blocked with BSA, poly DI/DC, and yeast RNA. Bi-functional aptamer in binding buffer was added at a final concentration of 2.64 µM (see Materials and Methods for more details). For control wells, an equal amount of unselected 2'-F RNA pool was added to the same final concentration. Other controls included no RNA, no added serum, and no primary antibody. Cells were incubated with RNA for 45 min and washed. Then 10% human serum in 100 µL of binding buffer was added to wells and cells were incubated for 45 min. Cells were washed, and 1:250 anti-C3c was added to all but the "no-primary antibody" control well, and incubated for 45 min. The anti-C3c antibody recognizes C3 and all breakdown fragments containing C3c, including C3b and iC3b. Any of these breakdown fragments could act as opsonins [1]. With the help of a secondary antibody, we observed strong signals for C3 family proteins in the presence of bi-functional aptamer constructs ( Figure 5A) compared to the controls ( Figure 5B,C). This result indicates that simultaneous and independent binding of the target molecule and the opsonin to the aptameric adaptor can occur on the surface of target cells. Reduced Viability of Target Cells C3b/iC3b deposited on tumor cells may cause MAC formation or promote adhesion of effector cells such as macrophages and NK cells through complement receptors, whereby cytotoxicity may ensue. 
After demonstrating the aptamer-dependent surface Reduced Viability of Target Cells C3b/iC3b deposited on tumor cells may cause MAC formation or promote adhesion of effector cells such as macrophages and NK cells through complement receptors, whereby cytotoxicity may ensue. After demonstrating the aptamer-dependent surface deposition of opsonins, we set up an assay with serum to see whether target cell viability could be reduced. In this system only one effector mechanism, MAC formation, could be enacted, as all factors required for MAC formation are present in serum whereas cell-mediated effects such as CR3-dependent cellular cytotoxicity (CR3-DCC) requires additional signals or cells. A major concern is the presence on tumor cells of inhibitory membrane-bound complement regulatory proteins (mCRPs) such as CD46, CD55, and CD59, which enable cancer cells to evade complement attack [25,26]. MDA-MB-231 cells are reported to have high expression of both CD55 and CD59 [27,28], therefore serving as an informative target for evaluating our approach. Based on the bi-strand "Trifecta" (Trifecta-b), we constructed three derivatives as controls: a double-aptamer mutant (mEGFR-mC3) in which both C3 and EGFR aptamers were inactivated, and two single-aptamer mutants (EGFR-mC3 and mEGFR-C3). In cell viability assays the cells were incubated with 1 µM "Trifecta" or other constructs in medium containing 25% human serum. Fresh medium containing Trifecta-b or one of its derivatives was replenished every 24 h for three days. Then cell viability was assayed using crystal violet assays. When MDA-MB-231 cells were treated with these four constructs, we observed a 30-40% Trifecta-dependent reduction in viability ( Figure 6A). Although the EGFR aptamer reportedly inhibits proliferation and induces apoptosis of cancer cells [16], these mechanisms require more than 10 days to reveal their effects. To corroborate these results, we performed the same assay with two more cell lines, BT-20 and MCF-10A. BT-20 is another breast cancer cell line known to express EGFR at a high level. When BT-20 was used in place of MDA-MB-231, we observed a similar level of cell lysis ( Figure 6B). In contrast, MCF-10A, a non-tumorigenic mammary epithelial cell line, has a very low level of EGFR expression. We did not observe any loss of viability of MCF-10A cells in the presence of "Trifecta" ( Figure 6C). To further demonstrate the requirement of complement in these assays we performed another experiment. Some critical factors in the complement activation pathway are known to be sensitive to high temperature. Incubation of the serum at 56 • C for 30 min has been routinely used for complement inactivation [29]. When we used heat-treated serum in the cell viability assays (indicated by an asterisk in Figure 6A-C), Trifecta-dependent cell lysis was abolished, and cell viability was the same with all four constructs. Interestingly, similar results were obtained when mouse serum was used in place of human serum, indicating that the aptamer for C3b/iC3b interacts equally well with mouse complement. This is consistent with the results of our binding assays ( Figure 1D). Microscopically, disintegrating cells were observed within 24 h of incubation ( Figure 6D). To corroborate these results, we performed the same assay with two more cell lines, BT-20 and MCF-10A. BT-20 is another breast cancer cell line known to express EGFR at a high level. When BT-20 was used in place of MDA-MB-231, we observed a similar level of cell lysis ( Figure 6B). 
In contrast, MCF-10A, a non-tumorigenic mammary epithelial cell line, has a very low level of EGFR expression. We did not observe any loss of viability of MCF-10A cells in the presence of "Trifecta" (Figure 6C). To further demonstrate the requirement of complement in these assays we performed another experiment. Some critical factors in the complement activation pathway are known to be sensitive to high temperature. Incubation of the serum at 56 °C for 30 min has been routinely used for complement inactivation [29]. When we used heat-treated serum in the cell viability assays (indicated by an asterisk in Figure 6A-C), Trifecta-dependent cell lysis was abolished, and cell viability was the same with all four constructs. Interestingly, similar results were obtained when mouse serum was used in place of human serum, indicating that the aptamer for C3b/iC3b interacts equally well with mouse complement. This is consistent with the results of our binding assays ( Figure 1D). Microscopically, disintegrating cells were observed within 24 h of incubation ( Figure 6D). Discussion The immune system contains two types of components: the "designators" and the "effectors." The former tag the pathogenic targets, and the latter destroy or eliminate them. In this manner, the immune system functions like our body's built-in physician to "diagnose" (i.e., to tag) and "treat" (i.e., to attack) diseases. A "designator" is an adaptor that makes a specific connection between the target and an effector mechanism. Therefore, designators are many and special, such as the opsonins, and the effector mechanisms are few and general, such as the membrane attack complex (MAC) and natural killer (NK) cells. Because the dynamic relationship between pathogens and immuno-effector mechanisms is controlled by the designators, developing synthetic designators to modify or create specific pathogen-effector interactions is a promising strategy to harness the power of the immune system for treating recalcitrant diseases such as cancer. The data presented here support the approach of eliciting a synthetic immune response using aptameric adaptors, and address major concerns by providing evidence that neither EGFR internalization nor mCRPs are sufficient to neutralize the complement attack in this aptamer-based system. However, for several reasons the observed cytotoxic efficacy of the bi-functional aptameric construct only delineates the lower bound of its potency. First, these results were obtained from a single molecular configuration. Different spatial arrangement and different relative valency of the two aptamers may yield a more potent construct. Second, only one effector mechanism, the formation of MAC, could be enacted in this preliminary study because no effector cells were provided to carry out other cytotoxic mechanisms. Third, many of the plasma complement factors are precipitated out during blood clotting and therefore are not present in serum, and conversion of inactive C3 to C3b is not as efficient in vitro as in vivo [30]. Mechanisms other than MAC formation may be elicited to enhance the effect observed here. Previously, using a natural RNA aptamer construct we established a system to induce phagocytosis of a molecular target by the macrophage-like THP-1 cells. A similar set-up may be used to explore complement-dependent cell-mediated cytotoxicity (CDCC). However, the interaction of macrophages and tumor cells is very complex. 
Tumorinfiltrating macrophages may be induced to become tumor-associated macrophages (TAM) to assist tumor progression [31,32]. Therefore, implementation of macrophage-focused strategies in cancer treatment requires more information and consideration in the future. Regarding the approach to combining aptamers and complement proteins to elicit MAC formation, it is informative to compare our work with another attempt. Bruno [33] used a biotin-conjugated DNA aptamer against MUC1 and a streptavidin-C1q fusion protein to trigger the classical complement pathway, and achieved a moderate killing effect on breast cancer cells. Our approach has two critical advantages. First, in addition to the MUC1 aptamer, Bruno's strategy requires delivery of an amount of exogenous tagged C1q significantly exceeding endogenous C1q, which is difficult to achieve in a living organism. In contrast, our approach does not involve exposure to exogenous proteins. Second, we utilize the aptamer to commandeer endogenous C3 and its derivatives rather than C1q; C3 is the point of convergence for all three complement pathways while C1q functions only as the starting point of the classical pathway. This may at least partially account for the different efficiency of the two methods in similar cell-based assays. It is interesting to notice the feasibility of using our aptamers and their derivatives in vivo in rodents or humans for eventual therapeutic applications. We already used mouse serum as the source of effectors, which makes future testing of our construct feasible in rodent models. The ability of aptamers to penetrate the tumor tissue has been well documented [34,35], and uptake of aptamers can be monitored by PCR or a fluorescent marker [36]. We should be able to use EGFR+ and EGFR-cancer cells grown as xenografts in nude mice to directly extend the in vitro data, and expand the work into more clinically relevant patient-derived xenografts (PDX) [37] in severe combined immunodeficient (SCID) mice, which lack B-and T-cell function but retain innate immunity including the complement system [38]. Nude mouse models were used to demonstrate that a combined regimen of mAb and β-glucan induced regression of xenografted human neuroblastoma cells [39], indicating that sufficient iC3b is present to activate its receptor and trigger tumor cell killing. Some EGFR+ TNBC cells display or develop resistance to EGFR inhibitors [19,20], and clinical trials of small molecule EGFR inhibitors in TNBC have been disappointing [21]. However, our strategy could be more effective in the induction of tumor regression, as it causes direct cell damage rather than solely targeting EGFR kinase activity, and should work in resistant tumors that retain membrane EGFR. region, we performed 9 cycles of selection. As before, the target was purified C3 protein. This experiment yielded 3 enriched sequences later proved to be aptamers. They were named AptC3-I, AptC3-II, and AptC3-III (Table A1). AptC3-V (9/90) AptC3-VI (5/90) AptC3-III * (3/90) ++ + +++ ++++ ++ + * These aptamers compete with each other in binding assays. ** Four "+" signs indicate a K d of~15 nM; one "+" sign indicates a K d of~50 nM. C3-derived opsonins are a group of proteolytic products of C3. C3 was used in previous selections to avoid isolating inhibitory aptamers for C3b/iC3b, and the natural RNA aptamer AptC3-1 was able to recognize C3b and iC3b with a slightly weaker affinity [12]. 
To utilize this information, we synthesized two "doped" pools based on the sequence of the previously isolated AptC3-1: a 38-nt segment was doped at 30%, and on either its 5 (pool I) or 3 side (pool II) a 20-nt unbiased random segment was added to create a 58-nt variable region. These two pools were mixed in a 1:1 ratio and used in a second in vitro selection experiment with iC3b as the target. We also performed one negative selection against C3 to promote selection of aptamers for its proteolytic products rather than intact C3. After 12 cycles, we isolated one new aptamer named AptC3-IV. AptC3-II were also present in this selected pool (Table A1). To isolate additional aptamers we performed a third experiment for which the starting material was the RNA pools that had undergone one cycle of selection and amplification in the two experiments described above. A 1:1 mixture of these two pools was selected and amplified for 15 cycles with iC3b as the target. The first, second and eleventh generation were treated with Hybridase and a set of oligos to remove the 4 previously isolated aptamers. Negative selection against C3 was performed in each cycle. In this experiment we isolated three aptamer sequences. Two of them were novel, and were designated AptC3-V and AptC3-VI. The third had already been isolated in the first experiment as AptC3-III. As shown in Table A1, we isolated six aptamers in total for C3 and its derivatives in the form of 2 -F Py RNA.
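The sequence diversity produced by the 30%-doped 38-nt segment described above can be estimated with a simple binomial calculation. The sketch assumes that "doped at 30%" means each position retains the parental base with probability 0.7; the exact doping scheme is not spelled out in this excerpt, so the numbers are only indicative.

```python
# Expected mutational load of a 38-nt segment doped at 30%, under the assumption
# that each position keeps the parental base with probability 0.7.
import numpy as np
from scipy.stats import binom

n_positions, p_mut = 38, 0.30
mutations = binom(n_positions, p_mut)

print(f"expected mutations per molecule: {mutations.mean():.1f}")
print(f"P(unmutated parental sequence) : {(1 - p_mut) ** n_positions:.2e}")
print(f"P(5 or fewer mutations)        : {mutations.cdf(5):.3f}")
```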
2021-12-31T16:21:19.713Z
2021-12-29T00:00:00.000
{ "year": 2021, "sha1": "237fd8dd4f88ed097c0d262feb213e0ea57e91fe", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/13/1/86/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "836474e2b99c88f59a5ee1283f97d24dc4cbda8a", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
119280399
pes2o/s2orc
v3-fos-license
Ultraslow scaled Brownian motion We define and study in detail \emph{utraslow scaled Brownian motion (USBM)\/} characterised by a time dependent diffusion coefficient of the form $D(t)\simeq 1/t$. For unconfined motion the mean squared displacement (MSD) of USBM exhibits an ultraslow, logarithmic growth as function of time, in contrast to the conventional scaled Brownian motion. In an harmonic potential the MSD of USBM does not saturate but asymptotically decays inverse-proportionally to time, reflecting the highly non-stationary character of the process. We show that the process is weakly non-ergodic in the sense that the time averaged MSD does not converge to the regular MSD even at long times, and for unconfined motion combines a linear lag time dependence with a logarithmic term. The weakly non-ergodic behaviour is quantified in terms of the ergodicity breaking parameter. The USBM process is also shown to be ageing: observables of the system depend on the time gap between initiation of the test particle and start of the measurement of its motion. Our analytical results are shown to agree excellently with extensive computer simulations. Introduction In the wake of the development of modern particle tracking techniques strong deviations of the time dependence of the mean squared displacement (MSD) from the linear law x 2 (t) t derived by Einstein [1] and Smoluchowski [2] have been observed in a variety of complex fluidic environments [3,4,5,6,7]. Typically, anomalous diffusion of the power-law form is observed, where, depending on the value of the anomalous diffusion exponent α, we distinguish subdiffision with 0 < α < 1 and superdiffusion with α > 1 [8,9]. Accordingly, subdiffusion was observed in the cytoplasm of living cells [10,11], in artificially crowded liquids [12,13], and in structured or functionalised environments [14]. Also superdiffusive motion was found in living cells [15,16]. Recently, interest in ultraslow diffusion processes with the logarithmic form x 2 (t) log γ (t) (2) of the MSD with different values for the exponent γ has been revived [6]. Ultraslow diffusion may be generated by periodically iterated maps [17] and observed for random walks on bundled structures [18]. A prototype model for ultraslow diffusion is provided by Sinai diffusion in quenched landscapes with random force field, for which γ = 4 [19,20,21,22]. In the context of Sinai diffusion ultraslow continuous time random walks with super heavy-tailed waiting times with γ > 0 [22,23,24,25] were discussed. Ultraslow scaling of the MSD of the form (2) were obtained in aperiodic environments (variable γ) [26] and vacancy induced motion (γ = 1) [27]. Moreover, it occurs in heterogeneous diffusion processes with exponentially varying diffusivity (γ = 2) [28], or interacting many-body systems in low dimensional disordered environments with γ = 1/2 [29], the dynamics of the latter being governed by an ultraslow, ageing counting processes [30]. The logarithmic time dependence (2) with γ = 1 of the MSD is also observed for the self diffusion of particles in free cooling granular gases with constant, subunity restitution coefficient in the homogeneous cooling state [31]. Granular gases are rarefied granular systems, in which particles move along ballistic trajectories between instantaneous collisions [31]. They are common in Space, for instance, in protoplanetary discs, interstellar clouds and planetary rings [32]. 
At terrestrial conditions granular gases may be obtained by placing granular matter into containers with vibrating [33] or rotating [34] walls. If no net external forces (gravitation, etc.) are acting on the granular system, the motion of granular particles gradually slows down due to dissipative collisions between them [31]. This microgravity condition can be achieved, inter alia, with parabolic airplane flights or satellites [35,36,37] or by the use of diamagnetic levitation [38]. We note that in very dense two-dimensional lattice gas systems, ultraslow diffusion emerges, as well [39]. Figure 1 shows the crossover from the ballistic to the ultraslow form (2) of the MSD of a granular gas with constant restitution coefficient ε = 0.8 in the homogeneous cooling state. Haff's law demonstrates that the kinetic temperature of such a free granular gas with constant restitution coefficient decays inverse-proportionally with time, T (t) 1/t [40]. For the effective self diffusion of the gas particles-mediated by particle-particle collisions-this property translates into the time dependent diffusion coefficient D(t) 1/t [41,42,43]. We note that a diffusivity of the form D(t) = D 0 + D 1 /t with a component decaying inverse-proportionally with time was used in the modelling of the motion of molecules in porous environments [44] as well as of water diffusion in brain tissue measured by magnetic resonance imaging [45]. Here we study in detail the process of ultraslow scaled Brownian motion (USBM) with time dependent diffusion coefficient D(t) 1/t. Starting from the Langevin equation for USBM and a summary of the simulations procedure we present analytical and numerical results for the MSD and the time averaged MSD for the cases of Figure 1. Time dependence of the ensemble averaged MSD x 2 (t) obtained from event driven molecular dynamics simulations of three-dimensional force-free granular gases [43]. At short times the particles follow ballistic trajectories, while for longer times the ensemble averaged MSD has a logarithmic time dependence. The inset focuses on the logarithmic long time behaviour. unconfined (Section 2) and confined (Section 3) motion. We analyse in detail the disparity between the ensemble and time averaged MSD and quantify the statistical scatter of the amplitude of the time averaged MSD of individual realisations of the USBM process. Moreover we study the ageing properties of USBM, that is, the explicit dependence of the physical observables on the time difference between the initiation of the system and the start of the observation. In Section 4 we present our Conclusions. In the Appendix we present details of the calculation of higher order moments and the ergodicity breaking parameter. Overdamped Langevin equation for ultraslow scaled Brownian motion Anomalous diffusion processes with power-law form (1) of the MSD are often modelled in terms of scaled Brownian motion (SBM) characterised by an explicitly time dependent diffusivity of the power-law form D(t) t α−1 with 0 < α < 2, see, for instance, references [46,47,48,49,50,51] as well as the study by Saxton [52] and further references Figure 2. Schematic of the motion of a Brownian particle in a bath with decreasing temperature T (t) t 2α−2 for 0 ≤ α < 1. The diffusion coefficient of the Brownian particle decays with time as D(t) t α−1 . USBM corresponds to the case α = 0, while standard SBM is strictly limited to 0 < α < 2 [54]. therein. 
In SBM this form of D(t) is combined with the regular Langevin equation [53] in which ζ(t) represents white Gaussian noise with the normalised covariance and zero mean ζ(t) = 0. While for a system connected to a thermal reservoir a description in terms of a time dependent temperature underlying SBM is unphysical [54], time dependent diffusion coefficients appear naturally in systems that are open or dissipate energy into other degrees of freedom such as the granular gases discussed above, see the schematic in figure 2. In fact, granular gases with a viscoelastic, relative particle speed-dependent restitution coefficient correspond to SBM with α = 1/6 [31,43]. Diffusion in media with explicitly time dependent temperature can, for instance, also be observed in snow melt dynamics [55,56]. A diffusion equation with a time dependent diffusivity proportional to t 2 was originally introduced by Batchelor [57] to describe the anomalous Richardson relative diffusion [58] in turbulent atmospheric systems. SBM with diffusivity D(t) t α−1 was studied extensively during the last few years [59,60,61,54,62]. In particular, the weakly non-ergodic disparity between ensemble and time averages in SBM as well as its ageing behaviour were analysed [60,61,54,62], see also below. Processes with both time and position dependent diffusion coefficients were also reported [63]. SBM is a Markovian process with stationary increments ζ(t), however, it is rendered non-stationary by the time dependence of the coefficient D(t). SBM is therefore fundamentally different [6,54] from seemingly similar processes such as fractional Brownian motion or fractional Langevin equation motion [64]. Following the motivation from our studies of granular gases with constant restitution coefficient [43] we here consider USBM with the time dependent diffusion coefficient The time scale τ 0 defines the characteristic time beyond which the long time scaling D(t) ∼ D 0 τ 0 /t sets in. We here introduce τ 0 to avoid a divergence of D(t) at t = 0. The case (5) is explicitly excluded in the allowed range for the scaling exponent α in SBM and, as we will see, constitutes a new class of stochastic processes. In the following we solve the overdamped Langevin equation (3) with the time dependent diffusion coefficient (5) analytically and perform extensive computer simulations of the corresponding finitedifference analogue of the Langevin equation. In this procedure, at each time step the increment of the particle position takes on the value where W i+1 − W i is the increment of the standard Wiener process and D(i) is the value of the time dependent diffusivity (5) at the time instant i. We simulated N = 10 3 independent particles (runs) with the parameters τ 0 = 1 and D 0 = 1/2 in all graphs presented below. Ensemble and time averaged mean squared displacements From direct integration of the Langevin equation (3) with the time dependent diffusivity (5) we find the ultraslow, logarithmic growth of the ensemble averaged MSD. USBM therefore reproduces the asymptotic behaviour of the MSD for granular gases in the homogeneous cooling state and with constant restitution coefficient [43], as shown in figure 1. In addition to the ensemble averaged MSD x 2 (t) of the particle motion, it is often useful to compute the time averaged MSD Here, the lag time ∆ defines the width of the averaging window slid over the time series x(t) of the particle position of overall length t (the measurement time). 
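A minimal sketch of the finite-difference simulation described above is given below. It takes equation (5) to be D(t) = D0/(1 + t/τ0), which is an assumption on our part but reproduces the stated behaviour D(0) = D0 and D(t) ~ D0 τ0/t, and it uses the parameters τ0 = 1, D0 = 1/2 and 10^3 trajectories quoted in the text (one spatial dimension, fixed time step). Direct integration of this diffusivity gives the ensemble MSD 2 D0 τ0 ln(1 + t/τ0), consistent with the logarithmic growth of equation (7); the snippet is an illustration, not the authors' implementation.

```python
# USBM simulation: at each step the displacement is sqrt(2 D(t_i) dt) times a
# unit Gaussian, i.e. sqrt(2 D(i)) times a Wiener increment, as in equation (6).
import numpy as np

rng = np.random.default_rng(0)
D0, tau0 = 0.5, 1.0                 # parameters used in the paper's simulations
dt, n_steps, n_traj = 0.01, 10_000, 1000

t = np.arange(n_steps) * dt
D = D0 / (1.0 + t / tau0)                                   # time dependent diffusivity
increments = np.sqrt(2.0 * D * dt) * rng.standard_normal((n_traj, n_steps))
x = np.cumsum(increments, axis=1)                           # trajectories, x(0) = 0

msd_sim = np.mean(x**2, axis=0)                             # ensemble averaged MSD
msd_theory = 2.0 * D0 * tau0 * np.log(1.0 + t / tau0)       # from integrating D(t)
print(msd_sim[-1], msd_theory[-1])                          # both grow ~ log(t)
```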
Time averages of the form (8) are often used in experiments and large scale simulations studies based on single particle tracking approaches, in which typically few but long trajectories are available [10,11,65]. The careful analysis of the time averaged MSD (8) provides additional important information on the studied process as compared to the ensemble averaged MSD x 2 (t) , see, for instance, the analyses in references [11,65]. Often one takes the additional average over N individual particle traces δ 2 i (∆), For ergodic processes ‡ such as Brownian motion, fractional Brownian motion, and fractional Langevin equation motion the time averaged MSD converges to the ensemble averaged MSD in the limit of sufficiently long times, lim t→∞ δ 2 (∆) = x 2 (∆) [6]. This property is due to the stationarity of the increments of these processes [66]. The ergodic behaviour lim t→∞ δ 2 (∆) = x 2 (∆) of these processes holds for unconfined motion when the system is in fact out-of-equilibrium, an advantage of the particular definition (8). Moreover, ergodic systems fulfil the equivalence even at finite t [6]. Systems in which we observe the disparity δ 2 (∆) = x 2 (∆) and therefore also lim t→∞ δ 2 (∆) = x 2 (∆) are called weakly non-ergodic [4,5,6,7,67,68]. § To calculate the time averaged MSD (9) for USBM we do not need to consider the mixed position autocorrelations in the definition of the time averaged MSD, as the expression in the angular brackets simplify as follows, This is due to the property for stochastic processes whose increments are independent random variables. We thus find the exact form for the time averaged MSD of USBM, ‡ We consider processes ergodic in the Boltzmann-Khinchin sense when the long time average of a physical observable converges to the associated time average. § Note that also transiently non-ergodic behaviour may become relevant as it may mask intrinsic relaxation times when time averages are measured [12,69]. In contrast, this is not valid in the case of granular gases, where particles move ballistically in between instantaneous collisions [43], or for processes driven by long-range correlated increments such as fractional Brownian motion or fractional Langevin equation motion [6,64,70]. where we introduced the auxiliary function The time averaged MSD (13) thus crosses over from the limiting behaviour at short lag times τ 0 ∆ t combining a linear with a logarithmic ∆ dependence, to the purely logarithmic law We see that as the lag time ∆ approaches the measurement time t, the time average MSD approaches the MSD (7), δ 2 (t) → x 2 (t) . The results of our simulations of the USBM process for both ensemble and time averaged MSDs agree very well with the above analytical results, as demonstrated in figure 3. In that plot the thin grey curves depict the simulations results for the time averaged MSD for individual trajectories. The amplitude spread between different trajectories is fairly small for ∆ t and increases when the lag time ∆ approaches the trace length t due to worsening statistics. Stochasticity of the time averaged mean squared displacement and ergodicity breaking parameter Even ergodic processes such as Brownian motion exhibit a certain degree of stochasticity of time averaged observables for shorter measurement times. 
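The estimator behind equations (8) and (9) is straightforward to apply to discrete trajectories such as those produced by the simulation sketch above; the snippet below regenerates a single USBM trace so that it is self-contained. Parameter values and the trajectory are illustrative, and the same assumed form D(t) = D0/(1 + t/τ0) is used.

```python
# Time averaged MSD (equation (8)) on a discretely sampled trajectory: for each
# lag Delta the squared displacement is averaged over a window slid along the trace.
import numpy as np

def time_averaged_msd(x, lags):
    """x: positions sampled at equal time steps; lags: lag times in steps."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(1)
D0, tau0, dt, n_steps = 0.5, 1.0, 0.01, 100_000
t = np.arange(n_steps) * dt
x = np.cumsum(np.sqrt(2 * D0 / (1 + t / tau0) * dt) * rng.standard_normal(n_steps))

lags = np.array([10, 100, 1000, 10_000])      # lag times Delta in units of dt
print(time_averaged_msd(x, lags))
# For Delta << t the result grows essentially linearly in Delta (up to a
# logarithmic factor), unlike the logarithmic ensemble MSD: the weakly
# non-ergodic behaviour discussed in the text.
```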
The amplitude fluctuations at a given lag time ∆ of the time averaged MSD as compared to the trajectory average (9) are quantified in terms of the ergodicity breaking parameter [6,70,71,72] EB(∆) = [⟨(δ²(∆))²⟩ − ⟨δ²(∆)⟩²]/⟨δ²(∆)⟩² = ⟨ξ²⟩ − ⟨ξ⟩², where in the second equality we introduced the relative deviation [71] ξ = δ²(∆)/⟨δ²(∆)⟩. The necessary condition for ergodicity of a stochastic process is that the ergodicity breaking parameter vanishes in the limit of infinitely long trajectories. Brownian motion provides the basal level for the approach to ergodicity according to EB_BM(∆) ≃ 4∆/(3t) [70]. (19) Fractional Brownian motion and fractional Langevin equation motion are ergodic [64,70]. Weakly non-ergodic processes, which are characterised by the disparity ⟨δ²(∆)⟩ ≠ ⟨x²(∆)⟩ [4,5,6,7,68,71], include continuous time random walks with scale-free distributions of waiting times [4,5,6,68,71] and heterogeneous diffusion processes [74,82]. In the limit of long traces, the value of their ergodicity breaking parameter remains finite, which is indicative of the intrinsic randomness of time averages of these processes. In contrast, the ergodicity breaking parameter for SBM vanishes in the limit of long trajectories [61]. The ergodicity breaking parameter for USBM is derived in the Appendix. The final expression in the relevant limit τ₀ ≪ ∆ ≪ t is given by equation (20), in which the constant C = π²/6 − 1 ≈ 0.645 appears. Thus, the time averaged MSD for USBM becomes increasingly reproducible as the length of the time traces is extended, albeit the approach to zero is logarithmically slow. We demonstrate the functional form of the ergodicity breaking parameter as function of the lag time ∆ for two different measurement times and the approach of EB to its asymptotic behaviour (20) in figure 4. The ergodicity breaking parameter quantifies the statistical spread of the time averaged MSD. An important indicator for different types of stochastic processes is also the complete distribution φ(ξ) [6,68,71,73]. As shown in figure 5 this distribution has an asymmetric bell-shaped curve approximately centred around the ergodic value ξ = 1. The tail at larger ξ values appears somewhat longer compared to the tail at shorter ξ. For longer lag times at fixed overall length t of the time series the width of the distribution φ(ξ) grows. This is consistent with the fact that at larger values of ∆/t the time averages become more random. In figure 5 we also show a fit to a model function which appears to capture the functional behaviour reasonably well. We note that the shape of φ(ξ) appears narrower compared to the one of heterogeneous diffusion processes with power-law space dependence of the diffusivity [74], which was fitted by a three-parameter Gamma distribution [74,75]. In comparison, the distribution φ(ξ) for standard SBM is quite narrow, although it widens as the exponent α approaches zero and particularly as the lag time ∆ grows [54].
Ageing ultraslow scaled Brownian motion
For processes with stationary increments such as Brownian motion or fractional Brownian motion, if we initiate the system at t = 0 but start recording it only at some later time tₐ, the physical observables will not explicitly depend on the ageing time tₐ. However, for several anomalous processes pronounced ageing effects are found. These include continuous time random walk processes with scale-free distributions of waiting times [77,78], correlated continuous time random walks [79], non-linear maps generating subdiffusion [80], systems with annealed and quenched disorder [81], heterogeneous diffusion processes [82], or standard SBM [62].
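Before turning to ageing in detail, the ergodicity breaking parameter and the scatter distribution φ(ξ) introduced above can be estimated numerically from the per-trajectory time averaged MSDs of the previous sketch; again, this is an illustration and not the code behind figures 4 and 5.

```python
import numpy as np

def eb_parameter(tamsd_individual):
    """Ergodicity breaking parameter per lag from per-trajectory time
    averaged MSDs (array of shape n_traj x n_lags):
        EB = <xi^2> - <xi>^2,  with xi = tamsd / <tamsd>."""
    xi = tamsd_individual / tamsd_individual.mean(axis=0)
    return xi.var(axis=0)

def xi_histogram(tamsd_individual, lag_index, bins=30):
    """Histogram estimate of the scatter distribution phi(xi) at one lag."""
    xi = tamsd_individual[:, lag_index] / tamsd_individual[:, lag_index].mean()
    return np.histogram(xi, bins=bins, density=True)

# For ordinary Brownian motion EB decays as ~4*lag/(3*t) for lag << t, so a
# much slower decay, or a finite long-time limit, flags (weakly) non-ergodic
# behaviour of the kind discussed above.
```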
In contrast to subdiffusive continuous time random walk processes, in which ageing emerges due to the divergence of a characteristic waiting time [77], in ultraslow SBM the non-stationarity of the system stems from the explicit time dependence of the diffusion coefficient. When the recording of the particle position starts at a finite time tₐ, this ageing time explicitly appears in the particle's MSD. For the aged MSD [77,6] in analogy to equation (7) we find that ⟨x²ₐ(t, tₐ)⟩ = 2D₀τ₀ ln[(τ₀ + tₐ + t)/(τ₀ + tₐ)]. (23) In the limit of strong ageing, tₐ ≫ t, this expression yields the linear scaling of the MSD with time t, the ageing time tₐ rescaling the effective particle diffusivity. The transition between this ageing-dominated linear scaling for the MSD and the anomalous logarithmic time dependence in the weak ageing limit t ≫ tₐ is clearly seen in figure 6. For the aged time averaged MSD [6,77] we obtain an exact result expressed in terms of the auxiliary function defined in equation (14). In the limit τ₀ ≪ ∆ ≪ t and ∆ ≪ tₐ the aged time averaged MSD factorises into a term containing all the information on the ageing and measurement times tₐ and t, and another capturing the physically relevant dependence on the lag time ∆ and the measurement time t. This factorisation is analogous to that of heterogeneous diffusion processes [82], scale-free subdiffusive continuous time random walks [77], and standard SBM [62]. However, in contrast to these processes the aged time averaged MSD for short lag times does not factorise into the product of the non-aged time averaged MSD (16) and a factor containing the ageing time. For strong ageing tₐ ≫ t we obtain the linear scaling δ²ₐ(∆, tₐ) ≃ 2D₀τ₀∆/tₐ. (26) In this limit the system becomes apparently ergodic and we observe the equality δ²ₐ(∆, tₐ) = ⟨x²ₐ(∆, tₐ)⟩, as can be seen from comparison with equations (23) and (26). Figure 7 shows the convergence of the time averaged MSD to the limiting behaviour (26). Such a behaviour was previously observed for aged subdiffusive SBM [62], heterogeneous diffusion processes [82], and continuous time random walk processes [77]. In the case of USBM this phenomenon has a clear physical explanation: at the beginning of the experiment the diffusion coefficient D(t) significantly decreases during the measurement time t ≫ τ₀ from D(0) = D₀ to D(t) ∼ D₀τ₀/t, and the system is strongly non-stationary. In contrast, after a long ageing period tₐ ≫ t the diffusion coefficient remains practically unchanged during the measurement time, D(tₐ + t) ≈ D(tₐ) ≈ D₀τ₀/tₐ. Figure 7 explicitly shows how the amplitude of the time averaged MSD is reduced due to ageing in the system. How do the fluctuations of individual time averaged MSD traces change in the presence of ageing? The derivation of the ergodicity breaking parameter for the aged process is provided in the Appendix. The final result in the limit ∆ ≪ t and ∆ ≪ tₐ assumes the form of equation (27). In the strong ageing limit tₐ ≫ t the ergodicity breaking parameter EBₐ is independent of the ageing time tₐ, and it asymptotically converges to the result (19) for Brownian diffusion. Our theoretical results agree well with the simulations, as witnessed by figure 8. For weak ageing tₐ ≪ ∆, t the result (13) of the non-aged USBM process is recovered.
Confined ultraslow scaled Brownian motion
The motion of particles in external confinement is an important physical concept for applications of stochastic processes, and it is also relevant from an experimental point of view.
Namely, particles moving in cells may repeatedly hit the cell wall, or tracer particles may experience a restoring force in particle tracking experiments with optical tweezers. Here we consider the generic case of confinement in a harmonic potential. USBM in the presence of such a linear restoring force is governed by the overdamped Langevin equation with the additional Hookean force term −kx, dx(t)/dt = −kx(t) + √(2D(t)) ζ(t). (28)
Ensemble and time averaged mean squared displacements
The ensemble averaged MSD follows directly from this stochastic equation, ⟨x²(t)⟩ = 2e^{−2kt} ∫₀^{t} e^{2kt′} D(t′) dt′, and we obtain the closed-form result (29). Here we defined the auxiliary function E(x) of equation (30), for which we used the definition of the exponential integral. The asymptotic behaviour of the MSD for long times t ≫ 1/k has the time dependence ⟨x²(t)⟩ ≃ D(t)/k = D₀τ₀/(kt). (32) Reflecting the temporal decay of the temperature encoded in the time dependent diffusion coefficient (5), we observe the 1/t scaling of the MSD in confinement. This underlines the highly non-stationary and athermal character of this process [60,54,62]. The time averaged MSD for confined USBM is obtained from the relation (33). The covariance of the position for ultraslow SBM in confinement can no longer be simplified according to equation (12) but has the time dependence given in equation (34). Introducing relations (29) and (34) into equation (33) we obtain the result (35). For long times and strong external confinement, {t, tₐ, ∆} ≫ {1/k, τ₀}, this expression simplifies to the asymptotic form (36). The time averaged MSD has a pronounced plateau (37) for ∆ ≪ t, tₐ; that is, in this regime the time averaged MSD is independent of the lag time, compare the discussion in references [54,62]. Simulations based on the Langevin equation with the Hookean forcing are in excellent agreement with these analytical results, as shown in figure 9.
Figure 9. Confined USBM: equation (29) together with equation (35) (blue line), the asymptotic behaviour (36) (red line), and the leading term (37) (horizontal dashed line); the symbols correspond to the simulations of equation (28).
Ageing ultraslow scaled Brownian motion in confinement
3.2.1. Ensemble averaged mean squared displacement. For confined ageing USBM, in which we measure the MSD starting from the ageing time tₐ until time t, the result for the MSD becomes equation (38), where E(x) is defined in equation (30). Expression (38) reduces to equation (29) for vanishing ageing, tₐ = 0. However, even in the presence of weak ageing, tₐ ≪ 1/k, at long times t ≫ 1/k the behaviour of the MSD reads as in equation (39), contrasting the behaviour in equation (32).
Figure 10. Ensemble averaged MSD ⟨x²ₐ(t, tₐ)⟩ for confined ageing USBM at different ageing times: tₐ = 0 (no ageing, black line), tₐ = 0.1 (weak ageing, red line), and tₐ = 10⁵ (strong ageing, blue line). Note that for better visibility the curve for tₐ = 10⁵ was multiplied by a factor of 10³.
The ensemble averaged MSD for ageing USBM at different ageing times is depicted in figure 10. At short times t < 1/k the weakly aged MSD follows the non-aged behaviour. Eventually it attains the plateau given by the first term in equation (39), instead of decaying towards zero as in the non-aged case. In the analysis of experimental data the exact moment of the system's initiation may often not be known, for instance, when measuring biological cells. The apparent plateau revealed here for confined ageing USBM dynamics may thus erroneously be mistaken as a signature of a stationary process. Expanding the exponential integral in equation (38), in the strong ageing limit tₐ ≫ {τ₀, 1/k} we find equation (40). For t ≪ 1/k we recover the unconfined result (23).
In the opposite limit t ≫ 1/k the behaviour of equation (40) crosses over to equation (41). In this case we recover a transition between two plateaus, as was observed for subdiffusive SBM [62]. Namely, for short measurement times t ≪ tₐ we find one plateau value (equation (42)), while at long measurement times t ≫ tₐ this turns into a second plateau. This behaviour, which appears unique to USBM and SBM, is depicted in figure 10.
Figure 11. Ensemble and time averaged MSDs ⟨x²ₐ(t, tₐ)⟩ and δ²ₐ(∆) for confined ageing USBM. The symbols depict simulations of equation (28). The blue line corresponds to the theoretical result (44), and the red line shows the asymptotic behaviour (45). The horizontal dashed line shows the leading term (46).
Time averaged mean squared displacement. The time averaged MSD for ageing confined USBM is derived analogously to the non-aged case, yielding the result (44). In the limit of strong confinement 1/k ≪ {tₐ, t, ∆} this expression can be significantly simplified, equation (45). For ∆ ≪ t, tₐ we again find an apparent plateau, equation (46). In the case of strong ageing tₐ ≫ t we find, by comparison to equation (42), that the time averaged MSD becomes equal to the ensemble averaged MSD in this strong ageing regime, and ergodicity is apparently restored as in the unconfined case. The behaviour of the ensemble and time averaged MSDs for confined ageing USBM is depicted in figure 11.
Figure 12. Ergodicity breaking parameter EB as function of t/∆ in the non-aged (tₐ = 0) and aged (tₐ = 3000) cases.
The ergodicity breaking parameter EB for confined USBM is depicted in figure 12 for both absence and presence of ageing. It is a decreasing function of the ratio t/∆ for large t/∆, while at small values of t/∆ it remains practically unchanged.
Conclusions
We proposed and studied ultraslow scaled Brownian motion, a new anomalous stochastic process with a time dependent diffusion coefficient of the form D(t) ≃ 1/t. Formally USBM corresponds to the lower bound α = 0 of scaled Brownian motion with diffusivity D(t) ≃ t^(α−1) (0 < α) [59,60,54,61,62], yet its dynamical behaviour is significantly different. We showed that USBM yields a logarithmic time dependence of the MSD rather than the power-law scaling of SBM. USBM's time averaged MSD was shown to acquire a combination of power-law and logarithmic lag time dependence. USBM is weakly non-ergodic and ageing. The ergodicity breaking parameter quantifying the random character of time averages of the MSD has a weak logarithmic dependence on the ratio ∆/t of lag time ∆ and length t of the recorded trajectories, tending to zero in the limit of infinitely long traces and/or short lag times. In the case of strong ageing the system tends to usual Brownian motion and the behaviour of the system becomes apparently ergodic. Under external confinement the behaviour of the USBM dynamics exhibits an apparent plateau for the time averaged MSD, while the ensemble averaged MSD decays proportionally to 1/t at longer times, reflecting the highly non-stationary character of USBM. Ageing produces an apparent plateau for the ensemble averaged MSD and a crossover between two plateaus for the time averaged MSD. USBM adds to the rich variety of ultraslow processes with logarithmic growth of the ensemble averaged MSD yet displays several unique features in comparison to other ultraslow processes. Potential applications of USBM are foremost in the description of random particle motion in intrinsically non-equilibrium systems such as free cooling granular gases or systems coupled to explicitly time dependent thermal reservoirs.
On a more general level we hope that the discussion of ultraslow processes will lead to a rethinking of claims in diffusion studies that certain particles appear immobile. Namely, one often observes a population splitting into a (growing) fraction of immobile particles and another fraction of particles performing anomalous diffusion of the form (1) [83]. Ageing continuous time random walks [77] or heterogeneous diffusion processes [28,82] give rise to such a behaviour. However, given the tools provided here on ultraslow diffusion it might be worthwhile checking whether the observed "immobile" particles may in fact perform logarithmically slow diffusion.
Appendix
In the case of ageing the ergodicity breaking parameter EBₐ(∆) is obtained from the corresponding limit of the fluctuations of the aged time averaged MSD. Expanding the integrand for t₁ > tₐ ≫ 1 and evaluating the integral for tₐ ≫ ∆ and t ≫ ∆, we obtain the expression (A.12), and the ergodicity breaking parameter EBₐ(∆) is then given by equation (27).
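The check suggested above, namely asking whether apparently immobile particles in fact move logarithmically slowly, can be phrased as a simple model comparison. The sketch below fits a logarithmic and a power-law MSD model to (here synthetic) data and compares residuals; the model forms, parameter values, and the data are illustrative assumptions only, not a prescription from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_log(t, A, tau):        # ultraslow, logarithmic MSD: A*ln(1 + t/tau)
    return A * np.log1p(t / tau)

def msd_power(t, K, beta):     # anomalous power-law MSD: K*t**beta
    return K * t ** beta

# Synthetic "observed" MSD with logarithmic growth plus multiplicative noise:
rng = np.random.default_rng(2)
t = np.logspace(0, 4, 50)
msd_obs = np.log1p(t) * (1.0 + 0.05 * rng.standard_normal(t.size))

p_log, _ = curve_fit(msd_log, t, msd_obs, p0=(1.0, 1.0))
p_pow, _ = curve_fit(msd_power, t, msd_obs, p0=(1.0, 0.5))
rss_log = np.sum((msd_obs - msd_log(t, *p_log)) ** 2)
rss_pow = np.sum((msd_obs - msd_power(t, *p_pow)) ** 2)
print("log-model RSS:", rss_log, " power-law RSS:", rss_pow)
# A clearly smaller residual for the logarithmic model hints at ultraslow
# rather than truly immobile (or power-law subdiffusive) dynamics.
```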
2015-03-27T16:14:43.000Z
2015-03-27T00:00:00.000
{ "year": 2015, "sha1": "11f454215112e3f6edfc316e071ccb3dced46c96", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/17/6/063038", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "11f454215112e3f6edfc316e071ccb3dced46c96", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
45936
pes2o/s2orc
v3-fos-license
Time-to-Credit Gender Inequities of First-Year PhD Students in the Biological Sciences A national sample of female PhD students logged significantly more hours conducting research than their male counterparts. However, males were 15% more likely to be listed as authors on journal articles per 100 hours of research time, reflecting inequality on an essential metric of scholarly productivity that directly impacts competitiveness for academic positions. INTRODUCTION Training the next generation of scientists is critical to the continued advancement of human knowledge and economic development (U.S. Department of Labor, 2007;Wendler et al., 2010). An important and historically challenging component of growing the scientific workforce is ensuring equitable gender representation to secure a sufficient number of individuals and diversity of perspectives to meet projected workforce demands (National Science Foundation [NSF], 2000). Despite advances made over the past few decades, inequality in wages, promotion, evaluation, and recognition between women and men continues as a general trend in the United States. These trends are mirrored in many fields, both overtly and subtly (Roos and Gatta, 2009). Scholars have repeatedly documented gender bias against women in academic science across key status markers, including the evaluation of research (Barres, 2006;Budden et al., 2008;Knobloch-Westerwick et al., 2013) and the distribution of scientific awards and honors (Lincoln et al., 2012). Tenured female faculty are often expected to take on more mentorship and service, which is generally uncompensated and undervalued (Hirshfield, 2014). These disparities can in part be attributed to stereotypes and bias that are influential at the individual interaction and organizational David F. Feldon, † * James Peugh, ‡ Michelle A. Maher, § Josipa Roksa, ∥ and Colby Tofel-Grehl ¶ levels (Ridgeway and Correll, 2004;Weyer, 2007). Biases give rise to status beliefs regarding gender, wherein men are often viewed with greater confidence in their choices, abilities, and potential (Wagner and Berger, 1997;Ridgeway, 2001;Foschi, 2009). Even when objective criteria indicate equivalent performance, men are typically judged as being more competent or performing better on various tasks (Foschi, 2000). When men and women work equal hours, men are more readily perceived as being more dedicated to their work and more productive than women, receiving more positive performance evaluations (Heilman, 2001;Reid, 2015). Hiring trends also reflect gender discrepancies, even in fields with equitable levels of PhD attainment across gender. In the biological sciences, women have accounted for more than 50% of all PhD recipients each year since 2008 (NSF, 2015), but according to current estimates, only 29-36% of tenure-line assistant professorships in the discipline are held by women (Nelson and Brammer, 2007;Sheltzer and Smith, 2014). Sheltzer and Smith suggest that the lower rate of women securing university faculty positions in the biological sciences is attributable to the disproportionate success of men in attaining postdoctoral research positions at top U.S. laboratories-especially those run by male principal investigators. 
In explaining their findings, the authors speculate that women's propensity to underrate their own skills (e.g., Correll, 2001;Pallier, 2003;Steinmayr and Spinath, 2009) or male biases to undervalue women's work contributions (Bowen et al., 2000) may lead to decreased application and hiring rates for postdoctoral positions at elite laboratories. Irrespective of possible explanations, equitable rates of degree completion do not translate into equitable attainment of employment as university faculty or postdoctoral researchers, suggesting the continued existence of gender inequalities. In the current study, we examine the potential contribution of graduate training experiences to these trends and further inform understanding of potential underlying causes. A number of previous investigations into doctoral education in the sciences also report patterns of gender disparity. Despite the centrality of doctoral mentoring as a key component of scientific training (Paglis et al., 2006;Barnes and Austin, 2008), women report receiving less faculty guidance than their male peers in designing research (Nolan et al., 2007), writing grant proposals (Fox, 2001), and collaborating on publications (Seagram et al., 1998). A track record of scholarly productivity is essential for securing academic employment (Ehrenberg et al., 2009), and, increasingly, it is expected that graduate students will have a strong track record of publishing before completing their degree programs (Nettles and Millett, 2006). Further, longitudinal studies indicate that the number of publications generated during graduate school significantly predicts subsequent productivity after degree completion (Kademani et al., 2005;Paglis et al., 2006), in keeping with the cumulative advantage of early publication for increased scholarly recognition observed among faculty (i.e., the "Matthew effect" [Merton, 1968, p. 56]). Therefore, female graduate students' access to publishing opportunities may have direct impact on their future success in multiple phases of the career pipeline. In this study, we compare the reported hours spent on research activity by participants and the rates of scholarly pro-ductivity across gender for a national cohort of 336 first-year PhD students in laboratory-based biological research programs (i.e., microbiology, cellular and molecular biology, developmental biology, genetics) from 53 research institutions to assess the extent to which gender inequities may manifest at the earliest stages of research training. Thus, our research questions are as follows: 1. Do men and women report different amounts of time spent on supervised research? 2. Are there differential reported influences associated with research time spent for men and women? 3. Is there a differential publication yield for men and women per time spent on supervised research? METHODS In contrast to many previous studies of gender differences in academic science (e.g., Seagram et al., 1998), the current study focuses on a single discipline with a constrained range of research practices (i.e., laboratory-based biological sciences, excluding field-based research) to avoid conflation of trends across distinct disciplinary subpopulations. 
Further, our analyses use multilevel modeling of individuals nested within institutions to appropriately account for normative cultural practices that may vary by university (i.e., nontrivial intraclass correlations), such as the programmatic use or exclusion of formal lab rotations, and to avoid inflated type I (false-positive) error rates (Musca et al., 2011) that can occur when such nesting is not taken into account (e.g., Kaminski and Geisler, 2012). Time spent on research tasks was reported biweekly as participants completed the first years of their academic programs. At the conclusion of the academic year, participants reported the number of journal articles, conference papers, and published abstracts for which they received authorship credit during that time. They also completed survey items to provide weighted attributions for the factors affecting time spent on research and levels of confidence they had in their abilities to perform specific research skills (i.e., self-efficacy). Participant Recruitment and Characteristics Participants were recruited for the study in two ways. First, program directors and department chairs for the 100 largest biological sciences doctoral programs in the United States were contacted by email to describe the study and request cooperation for informing incoming PhD students about the research project. Specifically, students entering "bench biology" programs, such as microbiology, cellular and molecular biology, genetics, and developmental biology, were targeted. Those who agreed either forwarded recruitment information on behalf of the study or provided students' email addresses to project personnel for recruitment materials to be disseminated. In instances in which incoming cohorts were six students or more, campus visits were arranged for a member of the research team to present information to eligible students and answer questions during program orientation or an introductory seminar meeting. Second, emails describing the student and eligibility criteria were forwarded to several listservs, including those of the American Society for Cell Biology and the CIRTL (Center for the Integration of Research, Teaching, and Learning) Network for broader dissemination. Those individuals who responded to the recruitment emails or presentations were screened to ensure that they met the criteria for participation (i.e., beginning the first year of a PhD program in microbiology, cellular biology, molecular biology, developmental biology, or genetics in Fall 2014) and fully understood the expected scope of participation over the course of the funded project (4 years with possible renewal). It was further explained that all data collected would remain confidential, that all data would be scored blindly, and that no information disseminated regarding the study would individually identify them in any way. Participants signed consent forms per the requirements specified by the institutional review board for human subjects research. Participants who remained active in the study received a $400 annual incentive, paid in semiannual increments. Participants were informed that if they failed to provide two or more consecutive annual data items (i.e., annual surveys) or more than 50% of the biweekly surveys in a single academic year, they would be withdrawn from the study. In addition, any participants who took a leave of absence from their academic program greater than one semester would be withdrawn. 
All data points were checked and followed up by research assistants for timely completion and meaningful responses. Three participants were withdrawn during the time these data were collected (two due to low response rate; one due to taking leave from the degree program). Three participants left the study when they withdrew from their academic programs. Overall sample size was N = 336 participants sampled from C = 53 institutions, with an average of 6.34 (336/53; SD = 5.69) participants per institution. Participant characteristics (i.e., distribution by gender, race/ethnicity, and prior research experience) are presented in Table 1. A large majority of participants (84.2%) reported rotating through multiple laboratories as part of their first years of doctoral training. The distribution of participants by specific program area (cell biology, developmental biology, etc.) can be found in Supplemental Table S1. Although not pertinent to the current analyses, participants also provided additional data on hours spent fulfilling teaching responsibilities, presented in Supplemental Table S2. The distribution of participants within institution by gender is presented in Table 2. The distribution of institutions across Carnegie research classifications is available in Supplemental Table S3. Data Collection Upon submitting informed consent paperwork, participants completed biweekly online surveys that focused on information specific to the preceding 2-week period. They also received additional surveys that were completed once per year. These instruments are described under the following headings: biweekly surveys, annual survey 1, annual survey 2, and annual survey 3. To address the first research question (reported time differing by gender), we drew data from the biweekly surveys and annual survey 1 (i.e., research self-efficacy). To address the second research question (gender-differential influences on time spent), we drew data from annual survey 2. To address the third research question (gender-differential publication yield), we drew data'from annual survey 3. Biweekly Surveys. Biweekly surveys asked participants to report the number of hours spent teaching, engaging in supervised research, and writing for publication in collaboration with a faculty member or other senior researcher during the preceding 2-week period. Specifically, participants were provided with the prompt "Over the last two weeks, approximately how many hours have you spent engaged in supervised research activities (e.g., working in a lab)?" and a drop-down menu with integers from 0 to 150. Although some methodological research on the collection of time data from work contexts indicates that time diaries are a more precise measure than surveys (Robinson and Bostrom, 1994), the level of intrusion into participants' daily work processes rendered that approach impractical for the current study. The same research also raised tentative concerns that women's responses might reflect an upward bias in reported hours relative to men, as measured by the discrepancy between survey-based and diary-based work hours. However, subsequent studies do not find discrepancies to be associated with gender or any other demographic variable (Jacobs, 1998). Further, a critical reading of Robinson and Bostrom's (1994) study raises questions about the conclusions and their applicability to the current sample. 
First, their data were drawn from workforce studies conducted in 1965, 1975, and 1985, and the authors note that data from each subsequent decade reflected increasing reporting discrepancies across all participants (i.e., male and female) due to increases in general cognitive "busyness" in work environments. Given that the proportion of women in the workforce increased substantially from 1965 (35% of the U.S. workforce) to 1985 (44% of the U.S. workforce) (U.S. Bureau of Labor Statistics, 2016), the reported bias likely reflects the collinearity of the increasing relative proportion of women in the workforce sample and the increasing hours bias across genders over time. That is, without additional data (e.g., comparisons of time discrepancy between genders within time periods), there is no way to determine that the increase in observed discrepancy between sources of reported time is due to gender rather than uniform increases in cognitive busyness across genders, accompanied by coincidental but unrelated increases in the proportion of women in the workforce over time. Because Robinson and Bostrom theorize that the increase of busyness in the workplace accounts for increasing bias over time and do not have an articulated theoretical position that could establish a causal relationship between respondent gender and bias, we conclude that the noted correspondence between gender and bias is an artifact of the approach taken to the statistical analysis of their data rather than a durable trend that would skew the data collected for the current study. Additional limitations on the applicability of Robinson and Bostrom's (1994) work to the current study include several aspects of the sample characteristics. First, the Robinson and Bostrom data were drawn from all sectors of the workforce, which differs from the graduate school environment substantially in terms of the population age, level of education, number of hours, and work setting of university research laboratories. Additionally, respondents reporting 30 hours of work or less per week had negligible discrepancies between diary and survey methods of data collection. Given that unadjusted mean weekly times reported by most students in our sample were ∼20 hours (male = 20.99, SD = 9.90; female = 19.50, SD = 10.35), upward bias is even less likely on the basis of the 1994 analysis. Annual Survey 1. During the Spring semester of 2015, participants received the Research Experience Self-Rating Survey (Kardash, 2000), which asked them to self-rate their abilities to perform each of 10 research-related tasks ("To what extent do you feel you can…?") on a Likert scale of 1-5 ("not at all," "less capable," "capable," "more capable," "a great deal"): "Understand contemporary concepts in your field," "Make use of the primary science literature in your field (e.g., journal articles)," "Identify a specific question for investigation based on the research in your field," "Formulate a research hypothesis based on a specific question," "Design an experiment or theoretical test of the hypothesis," "Understand the importance of 'controls' in research," "Observe and collect data," "Statistically analyze data," "Interpret data by relating results to the original hypothesis," and "Reformulate your original research hypothesis (as appropriate)." Additional items included in this survey asked participants to report the number of months spent participating in research activities before entering their PhD programs. 
Specific categories included formal research in high school, undergraduate research, research during a previous graduate degree program, and research conducted in industry. Annual Survey 2. Participants also received a survey asking them to respond to the prompt "What kinds of things affect your time spent on research on a weekly basis? Please categorize by percentage." Ten possible responses were provided, and the assigned percentages were required to sum to 100%. The response options were "Required hours," "Changes in workload based on project demands," "Comfort in lab," "Personal judgment/discretion," "Opportunity to contribute more to the research effort," "I'm not a good fit," "I'm not taken seriously," "Course work," "Familial responsibilities," and "Non-research obligations." Annual Survey 3. At the conclusion of the Spring semester, participants received another survey that asked them to identify any journal articles, conference papers, or published abstracts for which they had received authorship credit during the academic year. Responses were validated through independent researcher verification of citation information provided in the surveys against conference proceedings and journal tables of contents. Respondents were contacted regarding any observed discrepancies, and finalized information was subsequently used for analysis. Data Analysis Data analyses are reported in the following three sections. Analysis of Time Spent on Research (RQ1). The first goal of a longitudinal analysis is to quantify how the response variable changes over time, and polynomial trend components are the best way to capture when and how response variable changes occur (Raudenbush and Bryk, 2002;Singer and Willett, 2003). Hierarchical linear modeling (HLM) for longitudinal data was performed in two steps. In the first step, the most parsimonious longitudinal polynomial trend (linear, quadratic, cubic, etc.) that best modeled average changes in the response variable across participants during the study was selected. This was accomplished by adding a lower-order trend component to the model (e.g., adding a fixed linear slope to the model), followed immediately by a test of whether that trend component showed significant variation across participants (i.e., adding a linear slope random effect to the analysis model). The next polynomial trend component was then added to the model and tested in similar manner. Before the examination of changes in time spent on supervised research activity over time, corrective measures (i.e., "Type = Complex" in Mplus) were taken to guard against type I inferential errors that could result from ignoring the nesting of participants within universities, and missing data were handled via the default (Maximum Likelihood Regression [MLR]) parameter estimation algorithm in Mplus (version 7.4). Specifically, the Mplus command determines the proportion of variance in the response variable that can be attributed to institutions due to clustering of individual participants within universities and applies a multiplier to inflate estimated SEs to prevent erroneous statistical significance attributable to the influences of clustering rather than the targeted independent variables. However, the notable variation observed in hours spent on research over time coupled with the computational SE increases generated by "Type = Complex," could have artificially inflated p values, resulting in type II inferential errors. 
Specifically, large observed variance statistics for the number of hours spent on supervised research activities, combined with average sample sizes and design effect-corrected SEs will result in wide 95% confidence intervals (CIs). Therefore, meaningful significance for interpretational purposes was made based on effect sizes (see Table 3). Three guidelines were observed during this process. First, if a trend component fixed effect was nonsignificant, but the random effect for that trend component was significant, both were retained in the analysis model. Second, if a trend component fixed effect was significant, but the random effect for that trend component was not significant, only the fixed effect was retained in the analysis model. Third, this process continued until both the fixed and random effects of a given trend component were nonsignificant. Following this strategy, polynomial functions of time (i.e., linear time, quadratic time [time 2 ], cubic time [time 3 ], etc.) were added to the level 1 linear analysis model as fixed effects (i.e., γ) to best capture and model average change in hours spent on research across participants over time. This process continued for a possible (T = 13 − 1 = 12) 12 fixed effects and (T = 13 − 2 = 11) 11 random effects possibly needed to adequately model changes in hours spent on research over time. Missing data for both males and females ranged between 1.2 and 17.4% across the 13 time points and were handled via the default longitudinal HLM parameter estimation algorithm (MLR). Participants, on average, completed 12.44 biweekly time allocation surveys out of a possible 13 used for analysis. Participants as individuals accounted for 31.2% of variance in reported time spent on supervised research activities, and universities within which participants were nested accounted for 22.6% of variance. Data from biweekly periods 7 and 8 were excluded from analysis, because they coincided with the winter holidays, which introduced confounds related to the physical accessibility of university facilities, the personal preferences of supervising faculty, and atypical family obligations. Next, a multivariate analysis of variance (MANOVA) was conducted to test for gender differences in the Kardash (2000) survey items (annual survey 2) assessing self-efficacy for specific research skills described earlier. Missing data ranged from 1.5 to 16% across analysis variables and were handled via the default maximum-likelihood parameter estimation algorithm. In the analysis, 1000 bootstrap samples were requested to generate empirical rather than observed SEs. Based on the outcomes of the MANOVA, two factors (i.e., self-efficacy scores for "Formulate a research hypothesis based on a specific question" and "Design an experiment or theoretical test of the hypothesis") were added to the polynomial models for males and females as predictors of all significant trajectory components, and indicators for gender, ethnicity, and previous research experience were added to the model as control covariates (i.e., specifying the level-2 model). The final linear model used for both males and females is presented in the Supplemental Material. Analysis of Reported Influences on Research Time (RQ2). The next set of analyses examined potential explanatory factors that could account for observed differences in time spent on research by gender. 
Survey items asking participants to indicate perceived influences on the amount of time they spent in supervised research as percentages were analyzed using a MANOVA approach in Mplus that controlled for nesting of participants within institutions (i.e., "Type = Complex"). Because participants needed to make their cumulative responses sum to 100%, individual items were not independent, but the multivariate structure of the analysis permitted items to intercorrelate freely. Publications (RQ3). The final analysis examined gender differentials in authorship during the first year of graduate study. Missing data were observed in four of the data analysis variables: the categorical self-efficacy in designing experiments and formulating research hypotheses both had 4.5% missing data, and the binary indicators for published articles and published abstracts showed 9.8 and 10.1% missing data, respectively. Missing data were handled via multiple imputation for categorical variables in Mplus (version 7.4) and M = 100 imputed data sets were used for all analyses. Before analyses, the variable "total hours spent on research" from T9 to T15 was both rescaled (i.e., a 1-unit increase reflected an additional 100 hours spent on research) and grand-mean centered to facilitate interpretation. Further, specific analysis commands in Mplus (i.e., "Type = Complex") were used so that the nesting of participants within universities could be ignored without fear of type I inferential errors. Main effects for gender, total hours spent on research, and designing experiments and formulating research hypotheses self-efficacy scores as main effects, gender by total hours spent on research, gender by self-efficacy "designing experiments," and gender by self-efficacy "formulating research hypotheses" interactions, were all entered into the model as predictor variables (independent variables). Finally, a multivariate binary logistic regression analysis was conducted in which both of the binary indicator response variables for article publication and abstract publication (dependent variables) were entered into the model and allowed to correlate, because it was possible that a participant could have published both an abstract and an article, making both correlated rather than independent. Reported Time Spent on Supervised Research Differs by Gender The data on time that doctoral students invested in research activities within laboratories were captured by having participants complete biweekly online surveys in which they were asked to report the number of hours spent teaching, engaging in supervised research, and writing for publication in collaboration with a faculty member or other senior researcher during the preceding 2-week period. In separate annual surveys, participants also reported the amount of their prior research experiences and their levels of confidence in performing each of 10 criterial research tasks (Kardash, 2000). Gender differences in response patterns were evident only for confidence in designing experiments and formulating research hypotheses, respectively, with men reporting significantly greater levels of confidence than women. Accordingly, these values were entered into the level 2 (individual) model equations describing the relationship between gender and time spent on supervised research, described earlier, as predictors of significant intercept, linear slope, quadratic change, and cubic change variance while controlling for ethnicity and previous research experience. 
Across 13 time points, changes in time spent were modeled independently for men and women. All polynomial fixed-effects (level 1) coefficients were significant for women, as was variation on all but the quartic change term around each of the growth trajectory fixed effects (p < 0.05). In contrast, the model of men's hours included a significant fixed-effects coefficient only for intercept (p < 0.001) in the polynomial model. However, these nonsignificant fixed effects were retained in the model due to significant intercept variances (p < 0.05). In short, trajectories of male and female time spent on research differed to the extent that different polynomial models were necessary to describe them at the group level. To better illustrate these findings, Figure 1 shows the models of males and females, respectively. Estimated effect sizes by time point for T9-T15 ranged between Cohen's d = 0.63 and d = 1.72, representing consistently large effects (Cohen, 1988). Interactions with self-efficacy are reflected in Figure 1 as separate trend lines by gender for participants more than 1 SD above mean self-efficacy for experimental design, more than 1 SD below, and within 1 SD of the mean. Supplemental Figure S1 shows each model separately and includes SE estimates around each time point. Tabular representations of level 1 and level 2 models for males and females, respectively, are presented in Supplemental Tables S4-S7.
Reported Influences on Research Time Differ by Gender
To gain insight into the factors that participants perceived to influence the amount of time they spent on research, we administered a survey during the Spring semester, asking participants to respond to the prompt "What kinds of things affect your time spent on research on a weekly basis? Please categorize by percentage." Ten possible responses were provided, and the assigned percentages were required to sum to 100%. A MANOVA detected a significant difference between male and female responses on only one response item: women showed a significantly higher score (mean = 27.85) than men (mean = 21.49) for the response option "demands required for the task determining the amount of time spent on research" (∆X̄ = 6.36, p < 0.001; d = 0.28), representing a small but significant effect (see Figure 2).
Men More Likely to Receive Authorship Credit per Hours Worked
At the conclusion of the academic year, participants received another survey that asked them to identify any journal articles or published abstracts for which they had received authorship credit during the academic year. Of 303 responding participants, 68 reported authorship on a published journal article (22.4%), and 40 reported authorship on a published abstract (13.2%). Logistic regression analysis evaluated the likelihood of authorship (dependent variable) by gender, total hours spent on research, self-efficacy for experimental design and framing hypotheses respectively, and gender interactions with research hours and the two self-efficacy variables (independent variables). No significant results were observed for predicting abstract publication. However, a significant gender by total hours spent on research interaction effect (b = 0.144; p < 0.05; 95% CI: [0.027, 0.262]) was found for journal articles, indicating that per 100 hours spent on research, males were 15% more likely than females (odds ratio = exp[0.144] = 1.15) to receive authorship credit for a journal article (see Figure 3).
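To make the size of this interaction concrete, the following sketch converts the reported coefficient into odds and illustrative predicted probabilities. Only b = 0.144 (per 100 grand-mean-centered research hours) is taken from the analysis above; the intercept and main-effect values in the function are hypothetical placeholders used solely to demonstrate the calculation, not estimates from the study.

```python
import numpy as np

b_interaction = 0.144                       # reported gender x hours coefficient
print(np.exp(b_interaction))                # odds ratio per 100 h: ~1.15

def p_article(hours100_centered, male, b0=-1.4, b_male=0.0, b_hours=0.05):
    """Hypothetical logit for journal-article authorship (male = 1, female = 0).
    Intercept and main effects are placeholders, not values from the data."""
    logit = (b0 + b_male * male + b_hours * hours100_centered
             + b_interaction * male * hours100_centered)
    return 1.0 / (1.0 + np.exp(-logit))

for h in (0.0, 2.0, 4.0):                   # 0, 200, 400 hours above the grand mean
    print(h, round(p_article(h, 0), 3), round(p_article(h, 1), 3))
```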
Neither the inclusion of variables reflecting confidence in designing experiments and formulating research hypotheses nor their interactions with gender in the logistic regression model yielded significant coefficients (p > 0.05), indicating that participant confidence in research skills did not influence the likelihood of authorship for either men or women.
Figure 3. Participants provided survey responses in which they indicated having received authorship credit on journal articles, conference papers, and/or published abstracts. Logistic regression analyses for authorship on each type of publication, respectively, included gender, research time spent in the second semester, self-efficacy for experimental design and hypothesis framing skills, and gender interactions with each as predictors. The only significant predictor of journal article authorship was the gender by research time interaction (b = 0.144; p = 0.016; exp[0.144] = 1.15), indicating that males were 15% more likely to receive authorship credit than females per 100 hours of reported research time.
DISCUSSION
Our results show that, after controlling for variance at the institutional level, men spend significantly less time engaging in supervised research, are less likely to attribute their time allocation to the demands of assigned tasks, and are 15% more likely to author published journal articles than their female counterparts per 100 hours of research time. Collectively, these findings suggest that gender inequality manifests in the form of differential time-to-credit payoff as early as the first year of doctoral training. The men in our sample were better able to procure or were provided with better opportunities to capitalize on publishing prospects as a function of time spent on research than their female counterparts, despite the reverse trend for time spent on research. These results provide convergent evidence for the conclusions of Smith et al. (2013), who found that female graduate students perceive a greater investment of effort to be necessary for success in their academic programs compared with their male counterparts. Although perceived effort and time invested are not identical constructs, it is possible that experiences of discrepant time-to-publication ratios may contribute to such beliefs. The finding of significance for journal articles is notable, because these publications are typically the most highly valued as indicators of scholarly productivity for professional evaluation in academe (McGrail et al., 2006;Ehrenberg et al., 2009). Given the importance of scholarly productivity in the evaluation of candidates for academic positions and the cumulative advantage that early publications provide over time (Merton, 1968, 1988;Kademani et al., 2005), these findings may account, at least in part, for the inequitable hiring rates for postdoctoral research positions reported in prior research (Sheltzer and Smith, 2014). Such cumulative advantage has been documented with graduate student populations across STEM (science, technology, engineering, mathematics) disciplines, in which both skills (Feldon et al., 2016) and faculty recognition of students' ability (Gopaul, 2016) increase geometrically from small initial advantages. The failure of confidence in research skills (i.e., self-efficacy) to explain any significant variance in either the amount of time spent on research by women or the likelihood of publishing by women or men is also of interest.
These patterns in the first year of doctoral study indicate that, in contrast to suggestions in previous studies (e.g., Correll, 2001;Sheltzer and Smith, 2014), there is no evidence that lower self-efficacy prompts women to self-select out of professional opportunities in the first years of their doctoral studies. While it is possible that this pattern changes over the course of PhD attainment, caution should clearly be used in applying this explanation to underrepresentation of women in professional academic science. In contrast, our finding that confidence in research skills affected only men's time investment in research has two possible implications. First, the relevance of confidence in experimental design skills to time spent on research may point to a greater relevance of those skills in the tasks assigned to male graduate students within the laboratory environment. If men are more likely than women to engage in methodological decision-making tasks, it could explain the observed difference in publication rates. It would also better position men to discuss their contributions to laboratory research when applying for postdoctoral positions, increasing their competitiveness for those positions, above and beyond possible differential rates of publication. Second, the significant gender difference on this specific aspect of research and the lack of observed differences on confidence related to other aspects suggest that the ability to engage successfully in laboratory experimental design efforts may be differentially important in the training of graduate students for the purposes of setting career trajectories. Future research may inform the extent to which the nature of assigned research tasks differ and expand the scope of the current findings. With peer-reviewed publications serving as the proverbial "coin of the realm" (Wilcox, 1998, p. 216) for assessing research prowess, the ability of early-career researchers to convert time spent into publications leads to an increased likelihood of career success (Merton, 1968(Merton, , 1988Kademani et al., 2005). Because the results of this study reflect gender inequality with long-term ramifications in a scientific field that awards more doctorates to women than men, attention to degree completion rates reflects a necessary, but not sufficient, metric by which to evaluate gender equity in graduate training for the biological sciences. To best improve the equitable access of men and women to professional academic success, understanding the ways in which research training tasks differently enculturate men and women is essential. Increasing professional awareness of gender disparities is an important first step toward eliminating the effects of gender bias in the field. It may be further valuable for faculty who supervise graduate students to increase vigilance in their management of publications coming from their laboratories to ensure that both opportunities for authorship and recognition of invested effort toward publishable findings are allocated appropriately and equitably.
2018-04-03T02:00:46.908Z
2017-03-20T00:00:00.000
{ "year": 2017, "sha1": "931a7881769f648180aeb3dfdb1872846855b30a", "oa_license": "CCBYNCSA", "oa_url": "https://www.lifescied.org/doi/pdf/10.1187/cbe.16-08-0237", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "38f6fb0cb1df945c65cdf87744df89375c10bc33", "s2fieldsofstudy": [ "Biology", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
221655737
pes2o/s2orc
v3-fos-license
Exploring the Hierarchy in Relation Labels for Scene Graph Generation By assigning each relationship a single label, current approaches formulate the relationship detection as a classification problem. Under this formulation, predicate categories are treated as completely different classes. However, different from the object labels where different classes have explicit boundaries, predicates usually have overlaps in their semantic meanings. For example, sit\_on and stand\_on have common meanings in vertical relationships but different details of how these two objects are vertically placed. In order to leverage the inherent structures of the predicate categories, we propose to first build the language hierarchy and then utilize the Hierarchy Guided Feature Learning (HGFL) strategy to learn better region features of both the coarse-grained level and the fine-grained level. Besides, we also propose the Hierarchy Guided Module (HGM) to utilize the coarse-grained level to guide the learning of fine-grained level features. Experiments show that the proposed simple yet effective method can improve several state-of-the-art baselines by a large margin (up to $33\%$ relative gain) in terms of Recall@50 on the task of Scene Graph Generation in different datasets. Introduction As a basic visual scene understanding task, scene graph generation (Lu et al. 2016;Xu et al. 2017;Li et al. 2017a;Zhang et al. 2017;Chen et al. 2018;Zellers et al. 2018;Li et al. 2018;Yang et al. 2018;Chen et al. 2019c) generates the scene graph including the located objects as nodes and the corresponding relationships between objects as edges from the image. Generally, the common solution for scene graph generation task is detecting the objects first and further inferring their pair-wise relationships, denoted as predicates. In this way, both the object detection and the predicate recognition are formulated as the classification problems, where there is only one single ground truth label assigned to each instance, assuming every category is independent and orthogonal to each other. However, different from object labels, which can be clearly defined, the predicates' boundaries are fuzzy. Sometimes they have overlaps in semantics. For example, "sitting next to" and "standing next to" are two different predicate labels but express similar spatial relations in semantics. Copyright c 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Examples of semantic-overlapping predicate labels . Left: scene image with predicate labels. Right: constructed hierarchy examples. Texts in red and bold are the clustered parent classes, which expresses the similarity connection between predicate labels. Best viewed in color. Therefore, treating these semantic-overlapping predicates as independent classes fails to leverage these kinds of inherent structure of the predicate labels. To explore the inherent connections in the predicate labels, a simple and direct solution is to cluster the predicate labels into parent classes to represent these semantic-overlapping connections. However, the semanticoverlapping connection is not the only property of predicate labels. Still take the "sitting next to" and "standing next to" as examples, as shown in the Figure 1, although they are semantic-overlapping, they correspond to slightly different relative spatial positions in the image level. Therefore, the properties of both similarity and difference exist in the predicate labels. 
How to take both the similarity and difference into consideration? To solve the problem, we propose to build the hierarchy based on comprehensive semantic understanding in two different perspectives, including human understanding and machine understanding. The built hierarchy contains both the clustered parent classes and the cleaned predicate labels. The cleaned predicate labels in the lower hierarchy serve as the fine-grained labels with richer descriptions, and the clustered parent classes in the higher hierarchy serve as the coarse-grained labels which contain the information of semantic-overlapping connections. Based on the built hierarchy, we propose the Hierarchy Guided Feature Learning (HGFL) strategy that learns better region features by simultaneously training two branches supervised by coarse-grained and fine-grained labels respectively. Besides, we also propose the Hierarchy Guided Module (HGM) to better leverage the correlations between the coarse-grained and fine-grained region features. Extensive experiments have shown that our method could remarkably improve the performance in terms of Recall@50 for about 18.9% gain on Scene Graph Generation task by simply applying our HGFL strategy. This strategy only requires marginal computational cost during training and no extra computational cost for inference. Another at least 5% gain in terms of Recall@50 on the Scene Graph Generation task could be achieved by embedding our HGM into the network. To showcase that the proposed method is general and robust, experiments are performed on different datasets and different frameworks. Experiment results show that our method could outperform the state-of-the-art methods on the VG-MSDN (Li et al. 2017b) dataset and the VG-DR-Net (Dai, Zhang, and Lin 2017a) dataset. Besides, we validate that our method is general that could remarkably improve 4 state-of-the-art frameworks. The contributions of this paper are as follows: (1) To better explore the inherent semantic-overlapping connections in predicate labels, we propose to build the predicate label hierarchy based on the semantic meaning of labels. (2) To learn better region features on both coarse-grained and fine-grained levels, we introduce the Hierarchy Guided Feature Learning (HGFL) strategy. (3) To better utilize the correlations between region features of coarse-grained and fine-grained levels, we further propose the Hierarchy Guided Module (HGM) to reason both the region-wise and channel-wise correlations and finally refine the region features with these two correlations. Related Work Scene Graph Generation. To improve the performance of scene graph generation task, recent works mainly focus on several perspectives such as solving the labeling problems in the dataset (Zhang et al. 2019;Zhan et al. 2019;Chen et al. 2019d;Peyre et al. 2019), importing external text knowledge as conditions (Yu et al. 2017;Gu et al. 2019a), exploring more information (motif, correlations) in existing labels (Zellers et al. 2018;Chen et al. 2019a;Wang et al. 2019) and increasing the diversity of feature information (e.g. visual feature, spatial features, linguistic features etc.) (Qi et al. 2019a;Dai, Zhang, and Lin 2017a;Hung, Mallya, and Lazebnik 2019). Specifically, there are several labeling problems such as ambiguous labels or instances, imbalanced classes and incomplete annotations existing in the Visual Genome Dataset (Krishna et al. 2017), which is a very large and the most commonly used dataset for the scene graph generation task. (Zhang et al. 
2019) focus on the ambiguous instances problem which includes the Entity Instance Confusion problem and the Proximal Relationship Ambiguity problem, and propose to tackle the above problems by utilizing the Graphical Contrastive Losses to explicitly force the model to disambiguate related and unrelated instances. (Zhan et al. 2019;Chen et al. 2019d;Peyre et al. 2019) choose to generate the labels based on external knowledge and internal existing labels to ease the problem of incomplete annotations. Besides, (Zellers et al. 2018) analyzes and utilizes the motifs which are regularly appearing substructures in scene graphs to help detect relationships. In the paper, we also follow the idea of exploring more information from existing labels. However, instead of directly analyzing the motifs among labels, we build the predicate label hierarchy and further utilize the hierarchy to guide the feature learning. Hierarchical knowledge. Hierarchical information has been validated useful for many tasks (Chao et al. 2019;Yang et al. 2019;Bugatti, Saito, and Davis 2019;Chen et al. 2019b). But there are only very few works utilizing hierarchical information on the relationship detection related task. Bugatti, Saito, and Davis (2019) utilizes the hierarchical relationship between image class, image superclass, and object bounding boxes to predict the global class of image such as bookstore and backery. Our work is different from their work in three folds. First, their goal, predicting the global image class, is completely different from ours that generates a scene graph which requires prediction of both objects and their pair-wise relationships in the image. Second, the basic hierarchical knowledge is different, they mainly focus on the image class and object bounding box levels while we build the predicate label hierarchy. Third, we leverage the correlation between region features of different granularities, which is not utilized in their work. Yin et al. (2018) also try to resolve ambiguous and noisy object and predicate annotations by building the Intra-Hierarchical trees (IH-tree). However, our approach is different from theirs in the way of clustering labels and using hierarchical information. (1) Clustering approach: in (Yin et al. 2018), labels are clustered based on the same verb, preposition or adjective regardless of their semantic meanings, which may deteriorate the problem of semantic ambiguity. Our method cluster the predicate labels based on the semantic meanings. For example, "on a", "on side of", "on end of" will be all assigned to a parent class "on" in (Yin et al. 2018) but clustered to different parent classes by ours. "on side of", "next to", and "by" have different parent classes in (Yin et al. 2018), but have the same parent class by ours. Our approach is better at handling semantic ambiguity. (2) Using hierarchical information: Yin et al. only use the losses of different granularities for the same feature, which does not distinguish features from different granularities. In comparison, our HGFL learns deep features for each granularity independently so that they are distinguishable, and then our HGM refines features by utilizing the correlations among these distinguished features. Message passing module. In scene graph generation task, relationship is highly dependent on both object and region features as it is represented as a subject-predicate-object phrase triplet, which is generated based on object and region Figure 2: Overview of the Method. 
The whole framework can be divided into four stages. 1) Predicate label hierarchy construction. 2) Basic object and region feature generation. 3) Hierarchical branches with HGFL strategy, in which HGM is settled to leverage the correlations among the coarse-grained and fine-grained region features. The Message Passing(MP) module is used to pass messages between object and region features. The Relation Inference(RI) module generates relationship features by fusing corresponding region and object features. 4) Scene graph generation. Best viewed in color. features. Therefore, message passing modules for objectobject, object-region, region-region are extensively studied (Xu et al. 2017;Qi et al. 2019b). (Vaswani et al. 2017;Kipf and Welling 2016) are two common and classic templates of attention modules. However, all these message passing modules are performed on the same granularity level. Our work is complementary to these works. Specifically, we propose the HGM to pass messages between features on different granularities levels, so that the features on the coarsegrained level can provide abstract guidance for the features on the fine-grained level. And the features on the finegrained level can offer more detailed information for the features on the coarse-grained level in return. Method The Entire Framework An overview of our method shown in Figure 2 could be summarized as: 1) Construct the predicate label hierarchy. 2) Given an image with potential objects and predicates, Faster RCNN (Ren et al. 2015) is first applied to extract the basic object and region features. The region feature, extracted for predicting the categories of predicates, refers to the feature of a certain region containing multiple objects. 3) The hierarchical branches, trained with HGFL strategy, contain two branches. One branch extracts the region features named coarse-grained region features for coarse-grained predicate prediction, while another branch extracts the region features named fine-grained region features for the fine-grained predicate prediction. Between the two branches, our HGM is settled to leverage the correlations among the coarsegrained and fine-grained region features. Besides, following Fnet , object and region features pass messages through the Message Passing (MP) module and then the Relation Inference (RI) module generates relationship features by fusing the corresponding region and object features. To guarantee the two branches extracting the required features, we force them to be supervised by their cor-responding coarse-grained and fine-grained predicate labels respectively. 4) The coarse-grained and fine-grained predicate categories are predicted using corresponding region features, and the object categories are predicted using the object features. Then predicates and objects are formulated into the final scene graph. Predicate label hierarchy construction The overall hierarchy is built in a bottom-up manner by clustering the correlated predicate labels into each independent parent class. The parent classes serve as the coarse-grained predicate labels and the cleaned predicate labels serve as the fine-grained predicate labels. These two hierarchical sets of labels are utilized for designing losses for the coarse-grained and fine-grained region branches. In this paper, we introduce two ways of building the predicate label hierarchy(human understanding and machine understanding). More details are in Section Hierarchy Construction. 
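To make the two-level supervision concrete, the sketch below shows one possible way to represent such a hierarchy as a plain mapping from fine-grained predicate labels to clustered parent classes and to derive the two targets used for training. The specific label strings and cluster assignments are illustrative assumptions, not the paper's actual clustering.

```python
# Minimal sketch of a two-level predicate hierarchy: fine-grained labels map to
# coarse-grained parent classes. The example clusters below are hypothetical.
FINE_TO_COARSE = {
    "sitting next to": "next to",
    "standing next to": "next to",
    "lying next to": "next to",
    "sit on": "on",
    "stand on": "on",
    "walk on": "on",
}

# Index spaces for the two classification branches.
FINE_CLASSES = sorted(FINE_TO_COARSE)                  # fine-grained label set
COARSE_CLASSES = sorted(set(FINE_TO_COARSE.values()))  # clustered parent classes

def make_targets(fine_label: str) -> tuple[int, int]:
    """Return (fine_id, coarse_id) for one annotated predicate."""
    fine_id = FINE_CLASSES.index(fine_label)
    coarse_id = COARSE_CLASSES.index(FINE_TO_COARSE[fine_label])
    return fine_id, coarse_id

if __name__ == "__main__":
    print(make_targets("sit on"))  # -> (1, 1) given the sort order above
```

With such a mapping, the coarse-grained branch can be supervised "for free" from the existing fine-grained annotations, which is consistent with the claim that the strategy adds only marginal training cost.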
Hierarchy Guided Feature Learning(HGFL) Based on the built hierarchy, where the coarse-grained labels contain the semantic-overlapping connections between predicates and the fine-grained labels have the detailed information of predicates, we design the hierarchical branches to learn better region features of corresponding granularities. In the hierarchical branches, given the features extracted from the ROI Align process, 2 branches (coarse-grained and fine-grained region branch) are used for extracting the corresponding coarse-grained region features A and fine-grained region features B. Then HGM is performed between A and B to exploit the correlations and utilize it to refine B. After HGM, MP module is used between each kind of region features and object features for refining features of each other. Then RI module generates relationship features by fusing region features and corresponding object features. For our HGFL strategy, one cross-entropy loss is calculated in the fine-grained region branch using the fine- grained predicate labels, and another cross-entropy loss is applied in the coarse-grained region branch using the coarsegrained predicate labels. In this way, the features of different branches correspond to predicates of different levels. Note that when the network is implemented without the HGM, the coarse-grained branch can be removed during inference. This indicates that the improvement of our strategy is actually a freebie because our model will not introduce any extra computational expenses for deployment. Hierarchy Guided Module (HGM) Hierarchy Guided Module (HGM) is designed to leverage the correlations of the coarse-grained and fine-grained region features. As shown in Figure 3, it contains the Correlation Reasoning Stage and the Feature Refinement Stage. Correlation Reasoning. In this stage, feature transformation is firstly applied before the region-wise and channelwise correlation reasoning. Feature transformation. Denote the region features of the coarse-grained and fine-grained level by A ∈ R N ×C×L + and B ∈ R N ×C×L + respectively, where N, C, and L respectively denote the numbers of regions, channels, and pixels. Two separate 1 × 1 convolutional layers are performed to transform A and B into the same embedding space. The whole transformation process can be formulated as: where the notation σ(·) represents the non-linear activation function like ReLU, f t a (·; W t a ) and f t b (·; W t b ) respectively denote the convolution operations for transforming region features A and B as with learnable weights W t a and W t b . A t and B t in Eq. (1) denote the corresponding transformed region features. The constant C 1 and C 2 are the numbers of channels of the transformed region features. Region-wise correlation reasoning. The region-wise correlation reasoning infers the correlations between the coarse-grained features of a region and the fine-grained features of all other regions in the same image. For example, the fine-grained features from regions with similar predicates (e.g. "close to" and "around") will have higher correlations with the coarse-grained features of a region (e.g. "near"). To reduce the computational cost, a convolution is applied to squeeze B t ∈ R N ×C2×L + into just one channel as follows: where f s (·; W s ) denote the convolution function with learnable weights W s , B sqz ∈ R N ×1×L + denotes the output. 
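As a rough PyTorch-style sketch of the transformation and squeeze steps just described (the use of 1x1 Conv1d layers over flattened region features, the ReLU activation, and the channel sizes are assumptions made for illustration, not the authors' released code):

```python
import torch
import torch.nn as nn

class HGMTransform(nn.Module):
    """Sketch of the HGM input stage: embed coarse (A) and fine (B) region
    features with 1x1 convolutions, then squeeze B^t to a single channel."""

    def __init__(self, c_in: int, c1: int, c2: int):
        super().__init__()
        self.f_ta = nn.Conv1d(c_in, c1, kernel_size=1)  # transform for A
        self.f_tb = nn.Conv1d(c_in, c2, kernel_size=1)  # transform for B
        self.f_s = nn.Conv1d(c2, 1, kernel_size=1)      # squeeze B^t to one channel
        self.act = nn.ReLU()

    def forward(self, A: torch.Tensor, B: torch.Tensor):
        # A, B: (N regions, C channels, L pixels)
        A_t = self.act(self.f_ta(A))   # (N, C1, L)
        B_t = self.act(self.f_tb(B))   # (N, C2, L)
        B_sqz = self.f_s(B_t)          # (N, 1, L), fed to the correlation reasoning below
        return A_t, B_t, B_sqz

# Example shapes: 8 regions, 512 input channels, 64 pixels per region feature.
A = torch.randn(8, 512, 64)
B = torch.randn(8, 512, 64)
A_t, B_t, B_sqz = HGMTransform(512, 256, 256)(A, B)
```

The transformed tensors A_t, B_t and the squeezed B_sqz are the inputs to the region-wise and channel-wise correlation reasoning described next.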
To calculate the region-wise correlations, denoted as $S$, between all channels of each region feature in $A^t$ and the accumulated $B_{sqz}$, $B_{sqz}$ is first reshaped into the matrix $B^M_{sqz} \in \mathbb{R}^{N \times L}_{+}$. Then the following formulation is used:
$$\tilde{S}_i = A^t_i \times (B^M_{sqz})^T, \qquad S_i = \theta(\tilde{S}_i), \qquad i = 1, \ldots, N, \qquad (3)$$
where $\times$ denotes matrix multiplication, $(\cdot)^T$ denotes the transpose operation, and the function $\theta(\cdot)$ denotes the downsampling function implemented by max or average pooling. There are two steps in Eq. (3). First, matrix multiplication is applied to obtain the full region-wise correlation $\tilde{S}_i$ between $A^t_i$ and $B^M_{sqz}$; the size of $\tilde{S}_i$ is $(C_1 \times N)$. Second, to obtain the most related region of $B^t$ for each region of $A^t$, the downsampling operation $\theta(\cdot)$ is applied to reduce the second dimension of $\tilde{S}_i$ from $N$ to 1, so the size of $S_i$ is $(C_1)$. The output $S_i$ ($i = 1, \ldots, N$) corresponds to the correlation between the $i$th region in the coarse-grained features and all the other regions in the fine-grained features. When all regions in the coarse-grained features are considered, the size of the full region-wise correlation $S$ is $(N \times C_1)$. Channel-wise correlation reasoning. In the region-wise correlation reasoning above, all channels of $B^t$ are squeezed before the correlation is calculated, which may lose channel information. To preserve and utilize the channel information, and to focus on the correlations between the coarse-grained and fine-grained features of the same region, we also calculate channel-wise correlations. Denote by $B^t_j \in \mathbb{R}^{C_2 \times L}_{+}$ and $A^t_j \in \mathbb{R}^{C_1 \times L}_{+}$ the $j$th region in $B^t$ and $A^t$ respectively. The channel-wise correlation matrix $C_j$ of the $j$th region is obtained as follows:
$$\tilde{C}_j = B^t_j \times (A^t_j)^T, \qquad C_j = \sigma\big(f_c(\tilde{C}_j; W_c)\big), \qquad (4)$$
where $\sigma(\cdot)$ represents the activation function and $f_c(\cdot; W_c)$ denotes a 1D-convolutional function with learnable weights $W_c$. There are three steps in Eq. (4). First, the correlation matrix $\tilde{C}_j \in \mathbb{R}^{C_2 \times C_1}_{+}$ for each corresponding $j$th region in $A^t$ and $B^t$ is obtained through the matrix multiplication $B^t_j \times (A^t_j)^T$. Second, to globally gather and map the channel information of each region in $B^t$ to the domain of $A^t$, the 1D-convolutional function $f_c(\cdot; W_c)$ is applied to reduce the first dimension of $\tilde{C}_j$ from $C_2$ to 1, so the size of $C_j$ is $(1 \times C_1)$. Third, the activation function $\sigma(\cdot)$ is applied. Note that $C_j$ is for the $j$th region; considering all regions $j = 1, \ldots, N$, the size of the whole channel-wise correlation $C = [C_1 \ldots C_j \ldots C_N]$ is $(N \times C_1)$. Feature refinement. In the feature refinement stage, the calculated region-wise and channel-wise correlations are first back-projected into the region feature space, and stacked residual structures are then used for feature refinement. There are three steps in this stage. First, denoting by $A^t_l \in \mathbb{R}^{N \times C_1}_{+}$ the $l$th pixel in $A^t$, the channel-wise correlation $C$ is projected into the space of $A^t_l$ through the element-wise multiplication $A^t_l * C$, and the projected information is utilized as:
$$A^{out}_l = f_{ac}\big(A^t_l + A^t_l * C;\; W_{ac}\big), \qquad (5)$$
where $*$ denotes element-wise multiplication and $f_{ac}(\cdot; W_{ac})$ denotes a convolutional operation with learnable weights $W_{ac}$. An element-wise sum fuses $A^t_l$ with the projected information $A^t_l * C$, and the convolution $f_{ac}(\cdot; W_{ac})$ then non-linearly processes the fused features into $A^{out}_l$. The size of $A^{out} = [A^{out}_1, \ldots, A^{out}_L]$ is $(N \times C_1 \times L)$ for all pixels $l = 1, \ldots, L$. Second, similar to Eq.
(5), the region-wise correlations S is utilized as follows: where f bs (·; W bs ) denote the convolutional function with learnable weights W bs . The size of A out = [A out 1 . . . A out l . . . A out L ] is (N × C × L) for all pixels l = 1, . . . , L. The role of f bs (·; W bs ) is to transform the fused features for refining the fine-grained region feature B. Third, the A out serves as the correlated information from coarse-grained region features and is added to the finegrained region features B as follows: where the refined feature B out refers to the output of HGM. Hierarchy Construction We introduce two clustering methods to build the predicate label hierarchy, Manual Clustering (MC) based on human understanding and Automatic Clustering (AC) based on machine understanding. The clustered parent classes serve as the coarse-grained level and the cleaned predicate labels serve as the fine-grained level in the hierarchy. Manual Clustering In this process, clusters are made based on the understanding of the text meaning of labels, scenes in the corresponding image and the synsets in Wordnet (Miller 1995). For phrase-format predicate labels which include multiple words, we extract and consider the keyword based on the criterion whether the word can reflect the relation between subject and object or not. Specifically, these labels can be categorised into three types, including verb-prep. (e.g. "walking by"), prep.-prep. (e.g. "in between") and stereotyped expression (e.g. "inside of", "out of"). For the verb-prep., the verb will be chosen as the keyword only if it results in a static status, for example, we pick "topped" and "covered" as the keyword of "topped with" and "covered by". Otherwise, the verb usually depicts the action and can be taken as the attribute of subject, so the prep. which usually contains the relative spatial relations will be taken as the keyword such as "in", "at" are the key role of "riding in", "sitting at" respectively. For the prep.-prep. and stereotyped expressions, the corresponding image scene examples are taken as the important evidence for deciding the keyword because it is hard to tell the keyword from the text directly. Automatic Clustering This method clusters the predicate labels based on the machine understanding through utilizing the pretrained word2vec (Mikolov et al. 2013) model. Specifically, each predicate label is encoded as one 300-dimensional embedding vector with the word2vec pretrained model. Then Kmeans is applied to cluster these embedding vectors into parent classes. Because this method clusters features by comparing the calculated distances between embedding features, xivthe advantage is that the verb-prep. style relationship labels with the same preposition (e.g. sit on, walk on, stand on) are easily clustered into same parent class because they have the same word. While the disadvantage is that the model will mainly focus on the pure text information or language prior instead of corresponding scene information. For example, "park", "park near" and "near" are clustered into one parent class because both "park" and "near" are closely related to "park near", but "park" and "near" are actually not semantically close. Experiments Implementation details Dataset. The Visual Genome (VG) dataset is selected as the base dataset due to its great capacity (108K images) and complexity (35 objects and 21 pairwise relationships within each image). 
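Returning to the Automatic Clustering procedure described above, the sketch below shows one way it could be realized with pretrained 300-dimensional word2vec vectors (gensim) and K-means (scikit-learn). Averaging word vectors for multi-word predicates and the vectors file path are assumptions for illustration; the paper only states that each label is encoded as a 300-dimensional embedding and clustered with K-means.

```python
# Sketch of the automatic clustering step, assuming pretrained word2vec vectors.
import numpy as np
from gensim.models import KeyedVectors
from sklearn.cluster import KMeans

def cluster_predicates(predicates, vec_path="GoogleNews-vectors-negative300.bin",
                       n_clusters=30, seed=0):
    wv = KeyedVectors.load_word2vec_format(vec_path, binary=True)

    def embed(label):
        # Average the vectors of the in-vocabulary words of a (possibly multi-word) label.
        words = [w for w in label.split() if w in wv]
        return np.mean([wv[w] for w in words], axis=0) if words else np.zeros(wv.vector_size)

    X = np.stack([embed(p) for p in predicates])               # (num_predicates, 300)
    parent_ids = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)
    return dict(zip(predicates, parent_ids))                   # fine label -> parent class id

# Example (small n_clusters for a small label set):
# mapping = cluster_predicates(["sit on", "walk on", "next to", "park near"], n_clusters=2)
```

This purely text-driven grouping illustrates both the strength noted above (labels sharing a preposition cluster together) and the weakness (clusters reflect language priors rather than the image scene).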
Before the hierarchy construction, we firstly Table 2: Experiment results of component analysis on the cleaned dataset (VG-H), VG-MSDN and VG-DR-Net. PredDet, PhrDet, SGGen denote Predicate Detection, Phrase Detection, and Scene Graph Detection ,R K denotes the Recall for the top K predictions, which also applies for all other Tables in this paper. perform the fundamental cleaning operations including filtering low-frequency predicate labels, removing or correcting meaningless predicate labels (e.g. "no") and merging replaceable predicate labels (e.g. merge "alongside" and "are alongside" into "alongside"). By applying the above cleaning method, the cleaned predicate labeling set has 275 categories. The clustered parent classes set has 30 categories. We evaluate our results based on the cleaned predicate labels described above if not specified. This dataset is denoted as VG-H. Except for VG-H, we also evaluate our method on other datasets, e.g. VG-MSDN, VG-DR-Net. Comparison among these datasets is shown in Table 1. Technical details of the framework. The whole network is trained end-to-end, VGG16 (Simonyan and Zisserman 2014) is selected as the backbone to extract basic features. As for the normalization within the whole framework, we apply synchronized BatchNorm only onto the HGM and the object bounding box head. During training, the number of object and region proposals are set to be 256 and 512. The number of channels of all object and region features are set to be 512. Both weights of two cross-entropy losses on coarse-grained and fine-grained region branches are set to 1. Evaluation. The model is evaluated on three tasks, all assuming ground-truth object bounding boxes are not provided, Predicate Detection (PredDet), Visual Phrase Detection (PhrDet) and Scene Graph Generation (SGGen). Predicate Detection aims to detect the predicate category based on the RPN proposal. Visual Phrase Detection is to detect the subject-predicate-object phrases requiring the IoU (intersection over union) value between region bounding box and ground truth region bounding box should be at least 0.5. Scene Graph Generation is also to detect both the objects and their pairwise relationships, but it requires the IoU between object pairs and the ground truth to be higher than 0.5. The Top-K Recall(Recall@K) is chosen as the main evaluation metric. It calculates the fraction of ground-truth relationships hit in the top K predictions. K is set to 50 and 100 in our evaluation. Table 3: Comparison between Manual Clustering (MC) and Automatic Clustering (AC) with different number of groups (clusters) on VG-H dataset. MC and AC have comparable performance, which validates that hierarchy construction is consistently useful for refining features. Component analysis For component analysis, we perform experiments on the proposed HGFL strategy and the HGM on the datasets including the proposed VG-H dataset, VG-MSDN and VG-DR-Net. Fnet ) is taken as the base framework. Hierarchy Guided Feature Learning (HGFL). Experiment results of HGFL strategy on the VG-H dataset, VG-MSDN and VG-DR-Net are shown in Table 2. The baseline shown in the first row refers to training the Fnet (Lin et al. 2017) only with the fine-grained predicate labels, and the result "+HGFL" refers to training the model with both the coarse-grained and fine-grained predicate labels through our HGFL strategy. 
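The Recall@K values reported in the following results can be computed per image roughly as in the sketch below; the exact matching rules (IoU handling, duplicate suppression) used by the authors may differ, so treat this as an illustration of the metric rather than the official evaluation code.

```python
def recall_at_k(predictions, ground_truth, k=50):
    """Fraction of ground-truth triplets hit by the top-k predictions for one image.

    predictions: list of (score, (subj, predicate, obj)) where subj/obj already
                 encode a matched ground-truth box (IoU >= 0.5 matching is assumed
                 to have been done upstream).
    ground_truth: set of (subj, predicate, obj) triplets for the same image.
    """
    if not ground_truth:
        return None
    top_k = sorted(predictions, key=lambda p: p[0], reverse=True)[:k]
    hits = {triplet for _, triplet in top_k if triplet in ground_truth}
    return len(hits) / len(ground_truth)

# Dataset-level Recall@K is then the average of the per-image values,
# skipping images without annotated relationships.
```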
As we can see, on VG-H dataset, the results of Recall@50 and Recall@100 can respectively achieve 18.5% and 15.35% gain on PredDet, 27.53% and 25.98% gain on PhrDet, 32.24% and 29.84% gain on SGGen. Similar gains are achieved in the experiment results on the VG-MSDN and VG-DR-Net datasets. All above results show that HGFL strategy is a simple but very effective method. Hierarchy Guided Module (HGM). HGM is used for reasoning the cross-correlations between coarse-grained and fine-grained region features, which is complementary to HGFL strategy. Since other methods do not have coarsegrained predictions, to fairly compare with them, the output of HGM is the refined fine-grained region features. As shown in Table 2, "+HGFL HGM" denotes adding HGM and training the model with HGFL strategy. Compared with the model trained only with HGFL strategy, on VG-H dataset, the pure gain of HGM on the Recall@50 and Recall@100 can achieve 7.64% and 5.12% on PredDet, 5.19% and 2.33% on SGGen. Besides, on VG-H dataset, the overall gain of both HGFL and HGM on Recall@50 and Recall@100 can achieve 27.55% and 21.26% on Pred-Det, 29.75% and 25.7% on PhrDet, 39.11% and 32.5% on SGGen. Ablation study Hierarchy construction method. Experiment results on Manual Clustering (MC) and Automatic Clustering (AC) are shown in the first two rows of Table 3, the conclusion is that these two methods have comparable performance. Our analysis is as follows: the verb-preposition style relationship account for a large portion of all relationship labels, and the preposition is usually taken as the keyword when clustering, thus the phenomenon that most verb-preposition rela- tionship labels with the same preposition (e.g. "stand next to", "lying next to") will be clustered into the same group (e.g. "next to") is the common property of these two methods. While the difference between these two methods is that the MC is mainly based on human understanding but the AC is mainly based on the machine understanding which comes from the pretrained model's language prior. However, it is very difficult for model to correctly fit the cluster related to human understanding or pure machine understanding, but it is easy for the model to capture the common regularity of clustering most verb-preposition relationship labels with the same preposition into one group. Besides, the comparable gains on both MC and AC also confirm that the method of hierarchy construction and employment is effective. Clustering group number. When building the hierarchy manually, the number of clustering groups(30) is not predefined but the result of manually clustering. Therefore, the number of clustering groups in AC is a factor to be explored. We perform experiments on 30, 50, and 100 respectively. Experiment results are shown in the last three rows of Table 3, which show that the number of clustering groups is not that important. And there is slight performance drop when the number of clusters increases. Message passing module. The experiment results are shown in Table 4. The baseline Fnet model plus our HGFL (HGFL in Table 4) is used as the baseline in this experiment. The investigated message passing methods include (Vaswani et al. 2017) (HGFL MPS in Table 4), directly concatenating both region features in the coarse-grained and fine-grained level (HGFL concat in Table 4), and HGM (HGFL HGM in Table 4). All these message passing methods are applied on the same position of model while training with the HGFL strategy. 
As shown in Table 4, the performance of general message passing method or the concatenation does not provide improvement compared with the baseline HGFL. And HGM provides improvement on all evaluation metrics, Comparison with the state-of-the-art. Our method could be generalized for any framework and any dataset, in which labels have the inherent semanticoverlapping connections. Different datasets. Since the results based on hierarchy from the MC and AC are similar, we directly build hierarchy through AC for other datasets which include VG-MSDN (Li et al. 2017b) and VG-DR-Net (Dai, Zhang, and Lin 2017a). The clustering group number is set to 8 and 6 respectively. The experiment results are shown in Table 5. As we can see in the table, our method outperforms the state-of-the-art approaches on both datasets. Results on VG-MSDN and VG-DR-Net also show that our baseline Fnet is a strong baseline approach. Besides, compared with the IH-Tree (Yin et al. 2018), experimental results show that our method is more effective. For VG dataset, our pure averaged gain of Recall(@50 and @100) among three tasks is 4.27%, while the corresponding gain of IH-Tree (Yin et al. 2018) is 2.01%. Different frameworks. In Table 6, we further provide the results of applying our method onto different frameworks which include ISGG (Xu et al. 2017), MSDN (Li et al. 2017b) and Graph-RCNN (Yang et al. 2018) on the VG-H dataset. The Fnet ) is the original baseline in this paper. All these frameworks are reimplemented. In the table, for each framework, "+ ours" denotes the results obtained by training the baseline with our method (the HGFL strategy and the HGM). Based on the experimental results, we find that our method can further improve the performance on different frameworks. Conclusions In this paper, to explore the semantic-overlapping connections in the predicate labels, we firstly propose to build the language hierarchy, which contains both the coarse-grained and fine-grained levels, based on comprehensive semantic understanding in two different perspectives including human understanding and machine understanding. Then we introduce a Hierarchy Guided Feature Learning strategy to learn better region features of the two levels. Besides, we further propose the Hierarchy Guided Module to better utilize the cross-level correlations. Experiment results show that our method is a general method, which not only outperforms the baseline on the predicate labeling sets but also outperforms state-of-the-art methods on other public datasets.
Dynamic analysis of sugar metabolism in different harvest seasons of pineapple (Ananas comosus L. (Merr.)) In pineapple fruits, sugar accumulation plays an important role in flavor characteristics and varies with the stage of fruit development. In this paper, metabolic changes in the contents of fructose, sucrose, glucose and reducing sugar, related to the activities of soluble acid invertase (AI), neutral invertase (NI), sucrose synthase (SS) and sucrose-phosphate synthase (SPS), were studied in winter and summer pineapple fruits. Sucrose increased significantly in most of the winter fruits at harvest, reaching a peak of 64.87 mg·g-1 FW at 130 days after anthesis, whereas hexose accumulated mainly at 90 days in the summer fruits harvested in July. The ratio of hexose to sucrose was 5.92:0.73 in the winter fruit harvested in February. Interestingly, the activities of SPS and of SS in the synthetic direction in the fruits harvested in February were significantly higher than in those harvested in July, whereas the invertase activities showed the opposite pattern. NI activity followed a trend similar to that of AI, but NI activity was higher than AI activity in both months; NI therefore appears to be one of the vital enzymes in pineapple fruit development. In conclusion, the enzyme activities related to sugar metabolism play key roles in the eating quality of pineapple, which could be improved by cultivation in different seasons; thus, different temperature conditions (harvest seasons) could be chosen to improve the quality of pineapple fruits according to market demand.
INTRODUCTION Fruit taste and quality depend on factors such as sugars, organic acids, firmness, amino acids and aromatic compounds. Sugars synthesized in source tissues are among the most important of these and are transported into sink tissues such as fruit, shoots and other organs (Itai and Tanahashi, 2008). Sucrose serves an integral role as both a source of carbon and energy for non-photosynthetic tissues and is central to plant metabolism.
(2008) found that total sugars and reducing sugars of pineapple treated by methyl jasmonate (MeJA) on chilling injuries were not significantly different from that of the control pineapple.Liu et al. (2009) reported that the flavor in summer pineapple fruit was better than that of the winter fruit.Joomwong (2006) showed that the fruit harvested in winter had the highest content of total soluble solid (TSS) and titratable acid (TA), and the lowest ratio of TSS : TA than any other seasons.However, the correlation between development and sugar metabolism in the 'smooth cayenne' cultivated in the different harvest seasons is yet unknown. This study aims to gain a better insight into the relation between the different harvesting seasons and sugar accumulation, the enzymes related to sucrose metabolism, effect of different harvest seasons on the sugar accumulation and its physiology during pineapple fruit development.The experimental design used in this study was under the same management conditions such as irrigation, fertilization, soil management, disease control and pruning.One hundred and fifty fruit samples in each season had been selected after the full florescence period.The fruits were randomly sampled every 10 days from the 20th day after anthesis, and cut transversely into 3 sections after the size and weight of crowns and fruits were measured.Only the flesh of the middle section was used for the determination of the sugar content and sucrose synthase activity in this study.These sliced fleshes of 30 fruits were pooled together as one of five replications at each harvesting time.After collection, the tissues were immediately frozen in liquid nitrogen and stored at -80°C before being analyzed. Determinations of fructose, glucose and sucrose content Soluble sugars, sucrose, glucose and fructose were determined by high performance liquid chromatograph (HPLC) as previously described by N'tchobo et al. (1999) with slight modifications.Approximately, 1 g frozen fruit fresh tissue was homogenized with 5 ml deionized water, incubated at 80°C in a water bath for 15 min, and centrifuged at 15000 g for 15 min.The pellet fraction was redissolved with 4 ml sterile water and recentrifuged at 15000 g for 15 min, combining the two clear supernatant and adding sterile Zhang et al. water to make it 10 ml.Individual sugars were quantified by injecting a 10 µl aliquot of sample into a HPLC (PE Corp., America) equipped with an analysis column (Series200, 250 × 4.6 nm, 5 µm), a differential refractometer (PE200) and a reporting integrator.The mobile phase was degassed CH3CN:H2O=70:30 (V/V), at a flow rate of 1.0 ml•min -1 and 35°C.Peak height measurements were used to quantify the soluble sugars by comparing them to peak height of a standard solution. Invertase activities were measured as described by Qi et al. (2007).The soluble and insoluble acid and neutral invertases activities were assayed in a final volume of 25 ml, containing 0.2 ml of dialyzed enzymatic extraction, 0.8 ml of reaction solution (pH 4.8 or 7.2, 0.1 mol•L -1 Na2HPO4, 0.1 mol•L -1 sodium citrate, 0.1 mol•L -1 sucrose for acid invertase and neutral invertase, respectively).The activities were measured by the quantity of reducing sugars released in the assay media with dinitrosalicylic acid. 
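For the external-standard HPLC quantification described above (comparing sample peak heights with those of a standard solution), the calculation reduces to a simple proportion. The sketch below shows the arithmetic only; the standard concentration, peak heights and extract volume are placeholder values, not data from this study.

```python
def sugar_concentration(peak_sample, peak_standard, standard_conc,
                        extract_volume_ml, fresh_weight_g):
    """Single-point external standard: concentration scales with peak height.

    Returns sugar content in mg per g fresh weight (FW).
    """
    conc_in_extract = (peak_sample / peak_standard) * standard_conc  # mg/ml in extract
    return conc_in_extract * extract_volume_ml / fresh_weight_g      # mg/g FW

# Placeholder numbers: a 1.0 mg/ml sucrose standard, a 10 ml extract from 1 g tissue.
print(sugar_concentration(peak_sample=5400, peak_standard=2500,
                          standard_conc=1.0, extract_volume_ml=10.0, fresh_weight_g=1.0))
# -> 21.6 mg/g FW
```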
Activity of SPS was assayed according to the method of Miron and Schaffer (1991) using 0.15 ml of reaction medium and 0.2 ml of enzyme sample.The reaction medium is composed of 50 mmol•L -1 Mops-NaOH (pH 7.5), 10 mM MgCl2, 5 mmol•L -1 glucose-6phosphate, 10 mmol•L -1 fructose-6-phosphate and 5 mmol•L -1 uridine diphosphate glucose (UDPG).The mixture was incubated for 30 min at 37°C and the reaction was stopped by adding 0.1 ml 30% (w/v) NaOH and kept in boiling water for 5 min.When cooled to room temperature, the resorcinol solution (12%, v/v) of 0.5 ml and HCl (12 mol•L -1 ) of 0.5 ml were added into the mixture and held in an 80°C water bath for 10 min.Blank controls were obtained by adding the sterile water to the reaction medium containing resorcinol.The procedure for the sucrose synthase assay was identical to that of SPS except the reaction mixtures that contained 10 mm fructose and did not contain fructose 6-phosphate or glucose 6phosphate. Statistical analysis All data were analyzed using different software, DPSv3.01 for the variance analysis and the correlation analysis by SAS9.0 according to different requirements was done.The relationship between sugars and enzyme activities were described with linear correlation analysis. The effect of different harvesting seasons on the fruit development dynamics Pineapple fruits growth measured as changes in weight showed that the development of the summer fruits was a typical S curve (slow -fast -slow), but the winter's was not a typical S curve (fast -slow) (Figure 1) in different seasons.During the 30 to 80 days, the summer fruits increased rapidly, then remained stable.However, the fast growth period of winter fruits were from the 20 to 90 days, then its variance was also small (2.13%) at the mature period. The development of the winter fruits (harvested in February) were 40 days longer than that of the summer fruit (harvested in July), but its weight was opposite. The changes of fruit sugar content in the different harvesting seasons As the fruit developed, the sucrose content was increased firstly, and then declined in the whole development in the summer fruits (Figure 2A).The sucrose content was low during the former 40 days (lower than 5.63 mg•g -1 FW), then accumulated slightly, reached the peak at the 70 days (20.95 mg•g -1 FW), dropped at the maturity (16.67 mg•g -1 FW) at the end.Compared with sucrose content, the ratio of hexose (fructose and glucose) to sucrose was 5.92.The hexose was the predominant sugar and gradually increased with the highest level presented at ripe stage.Hexoses accumulated to higher lever in 90 days after anthesis.The content vales were 54.68 and 43.89 mg•g -1 FW, respectively.The changes of glucose and fructose were basically consistent.It suggested that the summer fruits mainly accumu-lated hexose, and the total sugar content was 115.23 mg•g -1 FW. 
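As a small illustration of the linear correlation analysis mentioned in the statistical methods above (relating sugar contents to enzyme activities across sampling dates), Pearson coefficients could be computed as sketched below; the numbers are invented placeholders, not the study's measurements.

```python
# Illustrative only: correlate a sucrose time series with SPS activity across
# sampling dates using Pearson's r (scipy). Values below are made-up placeholders.
from scipy.stats import pearsonr

sucrose_mg_per_g = [4.0, 6.5, 12.0, 25.0, 40.0, 58.0, 64.9]   # hypothetical winter-fruit series
sps_activity = [0.0, 1.2, 5.5, 14.0, 35.0, 67.9, 60.0]        # hypothetical umol/h/g FW

r, p_value = pearsonr(sucrose_mg_per_g, sps_activity)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```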
In the early stage of development of the winter fruits, the fructose content accumulated rapidly from 5.03 to 15.50 mg•g -1 FW during 20 to 50 days, and gradually increased to the peak in the 130 days (24.31 mg•g -1 FW) (Figure 2B).The glucose content significantly improved from 3.71 to 22.09 mg•g -1 FW at the beginning of 60 days, then reached the maximum.At this period, glucose accumulation was in agreement with fructose accumulation.The accumulations of glucose and fructose were fluctuant till the harvest.However, the content of sucrose slightly increased during the young fruits and reached the peak (64.87 mg•g -1 FW) at the ripe fruits, which was the 60.54% of total sugar.During this process, the main sugar accumulation was sucrose, and the content of total sugar was 107.16 mg•g -1 FW. The SPS activity in different seasons The changes in SPS activity of pineapple fruit, harvested at different development stages and different ripening conditions, are shown in Figure 3.In the winter and summer fruits, peak times and values of SPS activity were obviously different, though their tendencies all increased.In the winter fruits, the SPS activity was not detected in the beginning of 30 days, sharply improved from the 90 to 100 days.Peak activity with the maximum value of 67.90 µmo1•h -1 g -1 FW appeared in the 120 days after anthesis, and then dropped off at the harvest.In contrast, no significant difference could be observed in the activity of SPS in the summer fruits with the exception of the assessment of the 40th day.It started to go up to the maximum of 13.79 µmo1•h -1 g -1 FW, and similarly, was down to the lowest level at maturity.The total activity in winter ripened fruits was 5.83 times than in the summer. The SS activity in different seasons SS activities that can act both in synthesizing and cleaving sucrose were measured in the winter and summer fruits.In young fruits, the activity of synthesis direction of SS showed low activity that was more stable and lower -1 FW until 60 days in both season fruits.However, it rapidly rose up from 4.45 to 98.16 µmo1•h -1 g -1 FW with the range of 60 to 100 days (Figure 4).During mature stages, the SS activity of fruit harvested in winter, which increased continuously and reached the peak plateau from 100 to 120 days, declined slightly at the harvest.In summer fruit, the low value occurred on the whole development stages and fluctuated between 2.72 and 6.78 µmo1•h The AI activity in different seasons The AI activity in the winter fruits generally rose up from 11.38 to 41.23 µmo1•h -1 g -1 FW during 20 to 40 days after anthesis, then decreased rapidly to the level of 4.57 µmol•g -1 h -1 FW at the harvest (Figure 5).The changes were opposite to that of sucrose.In the summer fruits, the activity showed a slight fluctuation in the first growth period, sharply went up from the 40 to 70 days and finally dropped off after the 70 days.Its activity changes ranged from 8.2 to 31.72 µmol•g -1 h -1 FW in the stage of fruit development.Though the downward trend of AI activity was consistent between the different seasons, the total activity of the summer fruits was higher. 
The NI activity in different seasons The NI activity was basically similar with AI activity in the winter fruits (Figure 6).In the winter fruits, the activity remarkably improved from 34.13 to 68.04 µmol•g -1 h -1 FW, then dropped off to 16.21 µmol•g -1 h -1 FW at the harvest.In the summer fruits, the activity was lowered to 14.5 µmol•g -1 h -1 FW before the beginning of development (from the 20 to 30 days), then rose up significantly from the 40 to the 70 days at the peak of 60.41 µmol•g -1 h -1 FW, followed by a little drop (down 13%) at the 80 days, and finally backed up.The activity in the summer ripen fruit was 3.63 times as high as that in the winter fruit. DISCUSSION In this study, a typical single 'S' curve (slow-fast-slow) was revealed during the whole development of the summer fruit, whereas the winter fruit was gentle from fast to slow.Moreover, the development period of the winter fruit was 40 days longer than that of the winter fruit (Paull and Rohrbach, 2002).We thought that those were maybe due to environmental factors, especially temperature, heavy rainfall and strong light in summer leading to the different tendency of sugar accumulation, which suggested that different harvest seasons had serious influence on the growth and size of fruit in the different stages of fruit development.Chen et al. (2007) suggested that the development period of the strawberry harvested in February was shorter than in January by 17 days, which was also consistent with our results.However, Cruz-Castillo et al. (1991) influence on the kiwifruit's double 'S' growth and its fruit development.It indicated that different fruit had their own growth characteristic, though environmental factors such as water, temperature and light also had effect on fruit development (Léchaudel and Joas, 2007). Sucrose, glucose and fructose are the main soluble carbohydrates.The content and the ratio of those elements played a vital role in deciding the flavor and quality of fruits.A large reports suggested that temperature is one of central factors involved in promoting the accumulation of sucrose in potatoes (Kumar et al., 2004), other vegetables (Bhowmik et al., 2001) and strawberry (Chen et al., 2007).However, in the present study, the sucrose content mainly accumulated in the winter fruit, and the sucrose metabolism enzyme activity may be affected by the different climatic conditions and inherent physiological characteristics. The capability of sugar accumulation in the fruit was determined by the fruit sink strength, in which the activity of sucrose metabolism enzymes was an important physiological index.The enzymes that regulated sugar accumulation and metabolism in grape berry included AIV, SS and SPS (Copeland, 1990).Our experiment showed that a low invertase level and the high sucrose content appeared in the fruit early development, then the content of glucose and fructose was increased by the promotion of invertase activity.The change rule was similar with the result of peach (Moriguchi et al., 1990), kiwifruit (Macrae et al., 1992), and strawberry (Xie et al., 2007).The invertase activity was low in the early 70 day after anthesis in the summer fruit, which was beneficial for the accumulation of sucrose.But the AI activity declined in the late development, which was a typical characteristic of the sucrose accumulation fruit and consistent with golden pear (Li et al., 2007), strawberry (Hubbard et al, 1991) and myrica rubra (Xie et al., 2005). 
At maturity, the high activity of SPS and SS was a main reason of the winter fruit sucrose accumulation.In general, SPS activity had a high activity during plant maturation, which correlated with an enhanced accumulation of sugars in the root during its maturation (Pavlinova et al., 2002).Differences of SPS in winter and summer, may lead to the accumulation of sugars.In winter, pineapple had a long growth stage, so can accumulate more sugar than that in summer.Meanwhile, the changes of activity of SS were similar with SPS (Egger and Hampp, 1993). Sugar is an important factor of fruit quality.Its content was determined by interaction between the gene, the natural environment and cultivation measures.The sugar accumulation was decided by the activity change of sucrose synthase and decompositase.Our research suggested that cultivation could change the composition of sugar in pineapple fruit, and this was done by the influence on the enzymes related to sucrose metabolism.The molecular regulation metabolism and how the external factors worked on those enzymes need further research. Field -grown pineapples [Ananas comosus L. (Merr.)cv.smooth cayenne] were cultivated in the pineapple resource bank of South Subtropical Crops Research Institute.Winter and summer fruits were collected during the fruit development seasons, from November 2005 to February 2006 and from May 2007 to July 2007, respectively. Figure 1 . Figure 1.Effects of different harvest seasons on fruit developments of 'smooth cayenne 'pineapple. Figure 2 . Figure 2. Effects of different harvest seasons on sugar contents in the developing pineapple fruits.A: Fruits harvested in July; B: Fruits harvested in February. Figure 3 . Figure 3. Effects of different harvest seasons on SPS activities in the developing pineapple fruits. Figure 4 . Figure 4. Effects of different harvest seasons on SS (synthesis) activities in the developing pineapple fruits. Figure 5 .Figure 6 . Figure 5. Effects of different harvest seasons on AI activities in the developing pineapple fruits.
The Drosophila spitz gene encodes a putative EGF-like growth factor involved in dorsal-ventral axis formation and neurogenesis We describe the molecular characterization of the Drosophila gene spitz (spi), which encodes a putative 26-kD, EGF-like transmembrane protein that is structurally similar to TGF-alpha. Temporal and spatial expression patterns of spi transcripts indicate that spi is expressed throughout the embryo. Examination of mutant embryos reveals that spi is involved in a number of unrelated developmental choices, for example, dorsal-ventral axis formation, glial migration, sensory organ determination, and muscle development. We propose that spi may act as a ligand for cell-specific receptors, possibly rhomboid and/or the Drosophila EGF receptor homolog. . Mutations in the spitz group genes perturb the fate of structures derived from the blastoderm region located dorsally to the mesectoderm. In addition to their role in dorsal-ventral axis formation, some members of the spitz group genes are involved in other developmental pathways, such as sensory organ specification and glial migration. Embryos mutant for rho, spi, S, or pnt lack two of the five lateral chordotonal organs in the peripheral nervous system (PNS) (Jan et al. 1986;Bier et al. 1990). Additionally, in the central nervous system (CNS) of rho, spi, and S mutant embryos, the anterior and posterior commissures are fused, possibly owing to the failure of some glial cells to migrate (Kl~imbt et al. 1991). In this paper we present the molecular and phenotypic description of spi. We show that spi encodes a protein that is structurally similar to a factor that is epidermal growth factor (EGF)-like. On the basis of protein structures, comparison of phenotypes, and spatial and temporal expression patterns, we propose that spi encodes a ligand that functionally interacts with the products of the rho and, possibly, Drosophila EGF receptor (DER) genes. The locations of the Ddc gene and the genes immediately adjacent to spi are indicated above the lines (see also Gilbert et al. 1984;Contamine et al. 1989). The spi locus maps proximal to the ref(2)P locus, which confers resistance to sigma virus, and distal to the lethal complementation group 1(2)E42. The spi locus is linked very tightly to another lethal complementation group, 1(2)E146; the relative map order of these two loci has not been determined. (B) Cloning strategy and localization of deficiency breakpoints. The spi region was cloned B both by chromosomal walking, by use of a subclone from the Ddc phage walk to "jump" into the DNA proximal to the Df(2L)VA17 deficiency, and by P-element rescue. Plasmid p1022 (Gilbert et al. 1984) was used to isolate a "junction" fragment from a library made from Df(2L)VA17/CyO flies. A phage walk with Oregon-R DNA was initiated in both directions, and the approximate alignment of the phage is shown here. Identification of the spi-containing ), kb spi GR883 spi IDB7 phage was confirmed by localization of two P-element insertions, spi cR883 and spi IDB7, in phage R37 , and by P-element "rescue" of spi IDBz and isolation of phage hybridizing to genomic DNA flanking the P-element insertion. The broken line interrupting Df(2L)VA17 indicates that the deficiency is not drawn to scale. well-characterized region of the second chromosome. The location of spi respective to overlapping deficiencies from that region is shown in Figure 1A, and the stocks used in this study are summarized in Table 1 (see Materials and methods). 
We have characterized eight alleles at the spi locus. Nfisslein-Volhard et al. (1984) originally characterized two alleles, and we found spi to be allelic to the previously identified 1(2)0E92 locus (Contamine et al. 1989), which includes three alleles. The chromosome containing the pupal-lethal dpp allele dpp 13 (Lindsley and Zimm 1985) is an inversion chromosome also mutant for spi. Finally, there are two spi mutations that Cloning of the spi region and identification of the spi gene The spi region was cloned using a twofold approach. Initially, a clone from the dopa-decarboxylase (Ddc) region at 37C 1,2 was used to cross the Df(2L)VA17 breakpoints and initiate a 70-kb phage walk in the 37F region (Fig. 1B). An EcoRI restriction fragment from the proximal region of the ref(2)P phage walk (clone 31E; Contamine et al. 1989) hybridizes to phage 5-2 (Fig. 1B, data not shown), delimiting the distal end of the spi region. Subsequently, we obtained the P-element mutation spi IDB7 (Bier et al. 1989), from which we isolated geno-mic DNA flanking the P-element insertion. The genomic DNA was mapped to the phage walk to confirm the identification of the phage containing the spi locus. Southern hybridization analysis revealed that the P-element insertions in both spi ~DB7 and a second P-element allele, spi GRss3, m a p to a 1.3-kb EcoRI restriction fragment in phage R37 ( Fig. 2A, B). The rearrangement allele spi dpp13 shows alterations in a 2.9-kb BamHI-EcoRI restriction fragment ~ 1 kb distal to this fragment (data not shown), indicating further that this region contains sequences essential for spi expression. To verify that the P-element insertions in this region are responsible for the spi mutations, we attempted to revert the P-element insertions by dysgenesis, scoring for the loss of rosy + (ry+) (for spi GRss3) or white + (w+) (for spi ~D~z) eye color. No r y -revertants of spi ~Rs~ were obtained, but w -revertants of spi ~DBz were isolated. A. R3 7 spidpp13 Ss~i i IDB7 Genomic DNA encompassing the spi region is indicated by the top line. Arrows above the line mark the boundaries of phage R37. The 2.9-kb BamHI-EcoRI restriction fragment is altered in spi dppta (data not shown). The open box below the line indicates the 1.3-kb EcoRI genomic fragment used to isolate the various spi cDNAs. This 1.3-kb fragment is the insertion site of the two P-element-induced spi mutations (spi cR88a and spiIDBT). The solid boxes numbered 1-6 represent the exons from cDNAs encoding the putative spi protein; the hatched region within exon 6 represents the coding sequence (see also Fig. 3). All of the cDNAs examined that encode spi are composed of two or more exons. The location of exon 1 in the genomic region has not been determined precisely, but it maps in R37 in the region proximal to the most proximal EcoRI site and distal to the SpeI site, as shown. Exon 6, containing the spi-coding region, is identical in every complete spi cDNA. Because they were all selected by hybridization to the 1.3-kb EcoRI fragment, all of the cDNAs contain an exon from within that fragment. MNR31, MNR62, cl-7, and MNR73 all contain exon 5 from the 1.3-kb EcoRI fragment and, thus, differ only in the 5'-most exon. MNR22 begins at the same site as MNR73 but reads through the splice signals utilized by MNR73. 
The final cDNA examined, c3-0, appears to be a partial unprocessed transcript (see text); it does not contain an open reading frame, and it is entirely derived from the genomic region from which the MNR31 and MNR62 transcripts arise. Shown below the cDNAs are the fragments cloned into pCaSpeR2 (Pirrotta 1988) and used for transformation. The proximal end of the Bam-8 fragment is a BamHI site created during the construction of phage R37 and is indicated on this map by the arrow marking the proximal limit of the genomic insert in can be seen. In both cases, the probes were single-stranded RNA probes. Cold Spring Harbor Laboratory Press on May 1, 2010 -Published by genesdev.cshlp.org Downloaded from Flies heterozygous for one of the revertant chromosomes and another spi allele or a spi deficiency chromosome are viable, although one or more unidentified mutations on the parental spi ebB7 chromosome cause inviability of flies homozygous for the revertant chromosomes or heterozygous for one of the revertant chromosomes and spi IDBz. Southern blot analysis revealed that the phenotypic reversion correlates with the loss of the novel restriction fragments hybridizing to the 1.3-kb EcoRI fragment in the spi 1DB7 chromosome (Fig. 2B). Rescue of the spi mutant phenotype by P-element transformation To determine the extent of the spi gene within this region, we used P-element-mediated transformation to rescue the spi mutant phenotype. Two P-element constructs were injected into Drosophila embryos. The first construct, Barn-8, contains a 12.5-kb BamHI restriction fragment from phage R37 , including the 2.9-kb EcoRI-BamHI fragment altered in spi app.3. The second construct, Not-3, is a Barn-8 derivative without the 3-kb NotI-BamHI fragment ( Fig. 2A). Transformant flies containing a single copy of the larger of the two constructs, Bam-8, are viable and fertile when trans-heterozygous for two spi alleles. The smaller construct, Not-3, fails to rescue under identical conditions. These results indicate that the spi gene is contained within the 12.5-kb BamHI restriction fragment and that regions essential for spi expression lie in the region from the proximal BamHI site to the NotI site in phage R37. Isolation and characterization of spi cDNAs To isolate spi cDNAs we used as a hybridization probe the 1.3-kb EcoRI fragment altered in the two P-element insertion mutations. Twelve cDNAs from a 0-to 4-hr embryonic cDNA library (including MNR22, MNR31, MNR62, and MNR73) and two cDNAs from a 9-to 12-hr embryonic cDNA library (cl-7 and c3-0) were isolated. With one exception (c3-0), the inserts in the recombinant phage are between 1.0 and 1.7 kb in size. Sequencing of genomic DNA from phage Ra7 and of five cDNAs with the largest inserts (MNR22, MNR31, MNR62, MNR73, and cl-7) revealed that the cDNAs are composed of two or three exons and are identical except in the 5' region ( Figs. 2A and 3). The spi-coding region is entirely contained in exon 6. The 3' ends of these cDNAs contain polyadenylation signals, and in one case the sequence ends with a poly(A) tract. Because the sizes of these cDNAs, with the addition of a poly(A) tail, are consistent with the sizes of transcripts seen on Northern blots (see below), we believe that these cDNAs represent fulllength or nearly full-length transcripts. The remaining cDNA from the 9-to 12-hr library, c3-0, appears to be unprocessed. One end of the cDNA is 1.2 kb upstream from the site where cl-7 begins, and the cDNA sequence extends 3 kb farther downstream with no splicing. 
DNA sequence analysis revealed no significant open reading frames. Because the 3-kb cDNA initiates within the first intron of the MNR31/MNR62 transcript, it could represent a partial, unprocessed transcript beginning either at the MNR31/MNR62 initiation site or at an additional initiation site that we have not yet detected. DNA sequence analysis of additional smaller cDNAs from the 0- to 4-hr library indicated that the smaller cDNAs are incomplete.
(Legend to Fig. 3, continued) EcoRI subclones from the phage; primer extension was not used to determine the precise 5' ends for any of the cDNAs. The exon 1 sequence shown here is from MNR62. In MNR31, exon 1 starts 35 nucleotides farther 3', probably owing to premature termination of reverse transcription during the library construction. Exon 2 is present in cDNA cl-7. All three cDNAs are spliced to exon 5 and exon 6 at the same sites. Two additional cDNAs, MNR22 and MNR73, appear to begin at the genomic EcoRI site 400 bp downstream from exon 2 (Fig. 2); the actual transcriptional start site for these cDNAs may be farther 5'. The phage containing the cDNAs appeared to contain only a single EcoRI fragment in each case, and sequencing was performed by use of subclones from the phage. In MNR73, the first exon (exon 4) is precisely spliced to exon 5 after 33 nucleotides, before reaching the site of P-element insertion in strain spi IDB7 (data not shown), at a splice junction donor site of ATG/gtacat. In MNR22 the sequence continues through the splice site, through the site of P-element insertion, and into exon 5. Both MNR22 and MNR73 are spliced from exon 5 to exon 6, with splice junctions identical to those in the other three cDNAs. In every case, the splice junction donor and acceptor sequences correlate well with the consensus sequences (Mount 1982). The putative coding region is entirely contained within exon 6. Numbering begins with the start of translation. No sequences homologous to TATA were found in the 5' region of the nucleotide sequence. In the 3'-untranslated region, there are four ATTTA sequences (underlined); this motif is correlated with rapid mRNA degradation and possible translational control (Shaw and Kamen 1986; Kruys et al. 1989). Shown by double underlines are the sequence AATAAA, identical to the canonical polyadenylation signal (Proudfoot and Brownlee 1976), and the dinucleotide CA, 20 nucleotides downstream, which appears to be the site where polyadenylation of the spi transcript occurs. All of the cDNAs described end at or immediately 5' to this site, and in one of the cDNAs (MNR22) the dinucleotide CA is followed by a poly(A) sequence of 9 residues. Translation is shown with the single-letter amino acid code. Determination of the extent of the putative signal sequence, shown in italics, was based on the rules outlined in von Heijne (1983), with the proteolytic cleavage window occurring between proline at position 19 and serine at position 27. The putative transmembrane domain is underlined; the extent of the transmembrane domain was based on hydrophobicity as determined by the Kyte and Doolittle algorithm (1982). The EGF domain is indicated in boldface type, and a potential N-glycosylation site is indicated by an asterisk (*). Sequence data described here have been submitted to the EMBL/GenBank data libraries under accession number M95199.
Temporal expression of spi Northern analysis indicated multiple transcripts hybridizing to restriction fragments within the 12.5-kb genomic DNA used in germ-line transformation.
At the distal end of the 12.5-kb genomic DNA is the 2.9-kb EcoRI-BamHI fragment altered in spi dppla. This fragment includes the entire open reading frame presumed to encode the spi protein and is contained within a 3.8-kb EcoRI fragment from phage R37, which was used as a hybridization probe. The 3.8-kb EcoRI fragment hybridized primarily to transcripts of 2.5 and 2.0 kb, as well as to a 1.9-kb transcript (data not shown). The 1.3-kb EcoRI fragment, ~1 kb proximal to the 3.8-kb EcoRI fragment, also hybridized primarily to a 2-kb transcript, as well as to a 3-kb transcript (data not shown). In a developmental Northern blot, a spi cDNA (cl-7) detected the same pattern of transcripts as did the 3.8-kb genomic EcoRI fragment, which includes the entire coding region (Fig. 2C). Because these hybridizations utilized single-stranded RNA probes and the same pattern was seen with both the cDNA and the 3.8-kb genomic fragment, we assume that all of the transcripts detected are spi transcripts. A likely explanation for the multiple transcripts is that alternative splicing generates a variety of spi transcripts, differing slightly in size; this explanation is consistent with the results of cDNA analysis, which revealed variation in the 5' exons of different cDNAs. Transcripts hybridizing to the spi-coding region were seen throughout development, with peak expression in mid-embryogenesis, when the nervous system is being formed (Fig. 2C). Consistent with the previous finding that spi is maternally expressed (Mayer and Nüsslein-Volhard 1988), transcripts are seen in RNA from 0- to 1-hr embryos.
Properties of the spi-coding region and the putative spi protein
Sequence analysis of the various spi cDNAs and genomic DNA (Fig. 3) reveals a single long open reading frame, with two potential ATG initiation codons separated by 12 nucleotides. The sequence 5' to the first ATG codon in spi (TGTA) does not match the consensus sequence of the region 5' to a Drosophila translational start site [(C/A)AA(C/A)] (Cavener 1987), although the sequence flanking the second ATG (CACA) matches the consensus sequence fairly well. Thus, like other Drosophila genes recently examined (e.g., scabrous, Mlodzik et al. 1990; Serrate, Fleming et al. 1990), spi probably initiates translation at the second ATG codon. Termination of translation occurs with the codon TGA, producing a coding region of 690 nucleotides. The spi gene encodes a potential protein of 26 kD with a putative signal sequence, an EGF domain, and a potential transmembrane domain (Figs. 3 and 4A, B). There is a potential N-linked glycosylation site at residue 70 (Asn-Ile-Thr), and between the EGF domain and the transmembrane domain there is a dibasic amino acid sequence, Lys-Arg, which could be a proteolytic cleavage site. The most striking feature of the spi protein is the single EGF domain, which is homologous to the EGF domains in other members of the EGF family (Fig. 4C). The six cysteines are conserved, with correct spacing, and the other invariant residues are either conserved (as with Tyr/Phe-13, Leu-15, Gly-36, Tyr-37, Gly-39, and Arg-41; nomenclature follows EGF convention) or differ by conservative substitutions in spi (e.g., Leu-47 to Ile). Like vaccinia virus growth factor (VGF) and unlike transforming growth factor-α (TGF-α) or EGF (Venkatesan et al. 1982; Derynck et al. 1985; Bell et al.
1986), the EGF domain in the spi-coding region is not interrupted by an intron. Phenotypic analysis of spi As described previously (Mayer and N/isslein-Volhard 1988), embryos homozygous for spi show a partial fusion of denticle bands along the ventral midline, abnormal an~t incomplete formation of larval head structures, and displacement and/or reduction in specific cuticular sensory organs, such as the Keilin's organs. In addition to these epidermal defects, fusion of the anterior and posterior commissures (Kl~imbt et al. 1991;D. Smouse and N. Perrimon, unpubl.) is observed in spi mutants. Because these phenotypes have been described previously, we have concentrated instead on analyzing the effect of spi mutations on the PNS and on muscle development. The embryonic PNS in abdominal segments consists of three major clusters of sensory organs located at the dorsal, lateral, and ventral positions. In spi mutants, certain sensory organs are missing and there appears to be a ventral-dorsal gradient of severity of this missing sensory organ phenotype. The sensory organs of dorsal clusters in spi mutants appear to be quite normal both in number and in morphology (Fig. 5A, C,E,F). In the lateral clusters, spi mutants lack 2 of the 11 sensory organs found in wild-type embryos (Fig. 5B, D,E-JI. The missing sensory organs are always two of the five lateral chordotonal organs, a phenotype shared by mutants of a subset of the spitz group genes: rho, S, and pnt {Bier et al. 1990). A single precursor gives rise to all four cells required to form a chordotonal organ (Bodmer et al. 1989): a neuron, a scolopale cell that wraps around the dendrite of the neuron, and two support cells. These cells can be seen by marking all sensory organ cells with the enhancer trap line A37 (not shown) or by staining embryos with cell type-specific antibodies (Fig. 5A, B,E-H). The monoclonal antibody mAb44cl 1 labels neuronal nuclei (Bier et al. 1988) (Fig. 5A,B), anti-horseradish peroxidase (HRP) labels neuronal membranes and scolopales (Jan and Jan 1982;Bodmer et al. 1987) (Fig. 5E, F), and mAb21A6 labels the scolopale (Fig. 5G, H) (Zipursky et al. 1984). Results from immunocytochemical experiments with spi mutants indicate that all of the components of the two missing chordotonal organs are absent, suggesting that the spi mutation affects the precursors of the chordotonal organ. The five lateral chordotonal organs have similar morphology, but they are not identical. The anterior-most chordotonal organ does not stain with mAb49C4, an antibody that stains the remaining four lateral chordotonal organs (Bodmer et al. 1987). Immunocytochemical staining of spi mutants with mAb49C4 (Fig. 5I, J) showed that the anterior-most chordotonal organ is always one of the three remaining chordotonal organs. In the ventral regions of spi embryos, the PNS phenotypes are the most severe and variable. In wild-type embryos, there are 19 sensory organs in the V and V' clusters in the ventral region. The most commonly observed spi phenotype is that 5 of the 19 sensory organs are missing (Fig. 5B,D), although the number of missing sensory organs varies from segment to segment and from embryo to embryo (Fig. 5D). All three types of sensory organs (es organ, ch organ, and md neurons) are affected in spi mutants. An additional aspect of the mutant phenotype is that the locations of the V and V' clusters in spi mutants are not as regular as in wild type. 
In addition to alterations in the PNS, embryos mutant for spi have abnormal muscle development (Fig. 6).
Figure 4 legend (fragment): ... (Wharton et al. 1985), Delta (Vässin et al. 1987; Kopczynski et al. 1988), and lin-12 (Yochem et al. 1988); (bottom) the EGF domains from human EGF (hEGF; Bell et al. 1986), vaccinia virus growth factor (VGF; Brown et al. 1985), human TGF-α (hTGF; Derynck et al. 1984), heparin-binding factor that is EGF-like (HB-EGF; Higashiyama et al. 1991), amphiregulin (AR; Plowman et al. 1990), and schwannoma-derived growth factor (SDGF; Kimura et al. 1990). The cysteines are numbered as they appear in human EGF. The conserved cysteines are shown in shaded boxes for all of the sequences. Comparisons in the EGF family are made relative to the spi protein, with conserved residues shown in shaded boxes and semiconserved residues shown in open boxes. Determination of which residues to designate as semiconserved was made on the basis of the BESTFIT program from the Wisconsin Genetics Computer Group sequence analysis programs.
There are 30 muscle fibers per hemisegment in abdominal segments 2-7 (Crossley 1978; Hooper 1986; Bate 1990), which can be visualized by staining with mAb6D5 (Caudy et al. 1988). In spi mutants, the muscle pattern is altered primarily in two regions (data not shown): Several muscle fibers are consistently missing in the dorsolateral region (fibers 3, 4, 11, 19, and 20), and there are variable abnormalities in the number, shape, and attachment sites of the muscle fibers of the eight ventral oblique muscles (14.1, 14.2, 15, 16, 17, 26, 27, 29), such that it is difficult to identify individual muscle fibers (Fig. 6). In addition, near the somatic muscle layer in spi mutants there are a number of mAb6D5-positive cells that may represent myoblasts that failed to fuse with the muscle founder cells.
Spatial expression of spi
In situ analysis with single-stranded RNA probes indicated that spi transcript is expressed ubiquitously in all embryonic tissues, with enrichment in the procephalic region, ventral midline, mesodermal layers and, possibly, PNS cells (Fig. 7). Consistent with the in situ pattern, the β-galactosidase expression pattern of the enhancer trap insertion spi z°Bz is fairly ubiquitous. The β-galactosidase expression becomes detectable at stage 6 when gastrulation begins (data not shown).
The spi gene encodes a putative EGF-like growth factor
The putative spi protein shows the greatest similarity to proteins in the EGF family, with conservation of the EGF domain and other structural features characteristic of factors that are EGF-like. Like the EGF repeats in the Drosophila neurogenic genes Notch and Delta, the EGF domain in the predicted spi protein has absolute conservation of the 6 cysteines required to form three disulfide loops in EGF. Loop C, formed by a bond between cysteines 33 and 42 (Fig. 3C), is the most highly conserved region in the EGF repeat, with invariant residues Gly-36, Tyr-37, Gly-39, and Arg-41. The predicted spi protein differs from human EGF in loop C with a conservative Tyr → Phe change at position 37; recent work has shown that this substitution in human EGF has little or no effect on biological activity of the protein (Engler et al. 1991). In addition, spi protein shows conservation of residues Tyr/Phe-13, Leu-15, and Arg-41, postulated to lie at the interface of the receptor and growth factor (for review, see Campbell et al. 1990) and has a conservative Leu → Ile change at position 47.
Thus, the single EGF domain in the predicted spi protein is homologous to the corresponding domain of h u m a n EGF. The overall structure of the spi protein is similar to that of the TGF-a precursor, which contains only one EGF domain, unlike the EGF precursor. TGF-a is structurally and biologically similar to EGF and binds with similar affinity to the EGF receptor (for review, see Massague 1990). It is interesting to note that TGF-a is active both as a membrane-bound protein and as a 50-amino-acid diffusible protein cleaved from the membrane-bound form (for review, see Massague 1990), raising the possibility that spi might not need to be cleaved to be biologically active. Potential function of the spi protein spi transcripts, although enriched in some tissues, have a fairly ubiquitous distribution. Because the predicted spi protein is structurally similar to an EGF-like factor, one model for spi function is that spi encodes a ubiquitously distributed ligand that interacts with spatially localized receptors, or molecules involved in its signal transduction. This model predicts that mutations in both spi and its receptor would have similar mutant phenotypes and that the receptor would be expressed in those tissues affected in mutant embryos. Obvious candidates for other proteins involved in the spi pathway are members of the spitz group, such as S, rho, and pnt. Of these, only rho has been characterized at the molecular level; rho encodes a putative transmembrane protein with three to seven membrane-spanning regions (Bier et al. 1990). Although there are some differences in the fine details, spi and rho have similar phenotypes. Both spi and rho cause similar pattern defects in the embryonic ventral ectoderm ( Fig. 4; Mayer and Nfisslein-Volhard 1988). The cuticular elements missing in these mutant embryos are derived from the same longitudinal strips of the blastoderm. The muscle phenotypes of the spi (Fig. 6) and rho mutants are also very similar. In both cases, the mutations affect primarily two groups of muscles: (1) Several muscles in the dorsolateral region are missing; and (2) the ventral oblique muscles have abnormal morphology and attachment sites. Finally, the CNS and PNS phenotypes are similar. With the CNS phenotype, K1/imbt et al. (1991) noted that in both spi and rho mutant embryos, the fusion of anterior and posterior commissures results from the failure of specific glial cells to migrate and separate the anterior and posterior commissures. In the PNS, both mutations delete two of the five chordotonal organs in the lateral cluster (Fig. 5) and one of the two chordotonal organs in the ventral region. The spi phenotype is more severe than the rho phenotype in that additional sensory organs are missing in spi mutants. In both mutants the defect is probably at the level of sensory organ precursor formation. Because the phenotypes of spi and rho are so similar, it is likely that these genes act in the same biochemical pathway. Since spi is ubiquitously distributed and rho is spatially localized to cells that require rho function, rho may be the receptor (or part of the receptor or a factor required for receptor-mediated signal transduction) for the spi product. Support for rho acting as a receptor derives from the complex expression pattern of rho, which correlates well with tissues showing the mutant pheno-spitz encodes a putative EGF-like growth factor type. 
Such a model is plausible because the rho gene product is a putative integral membrane protein with several transmembrane domains (Bier et al. 1990). A prediction from this model, not yet tested, is that spi mutations should be non-cell-autonomous while the requirement for rho should be cell-autonomous. Relationships between DER and spi Because of the structural similarity of spi protein to a factor that is EGF-like, the most obvious candidate for a spi receptor is DER (Livneh et al. 1985). However, it is unlikely that DER is the sole receptor that confers spi specificity. The spatial distribution of DER covers a broader area than the regions that are altered in spi mutants, and the range of tissues affected in a DER mutant is greater than the range affected by spi (Zak et al. 1990). Nevertheless, there are some interesting correlations between DER and spi. There is a phenotypic series of embryonic lethal faint little ball (fib) mutations of DER, with weak fib mutations showing phenotypic similarities to spi mutations. Like spi, the phenotype of weak fib mutations includes both cuticular abnormalities, with the most severe defects in the ventral regions, and CNS abnormalities. Immunohistochemical studies showed that DER protein appears to be restricted to a subset of glial cells in the ventral midline in retracted germ-band embryos (Zak et al. 1990). Further studies with cell-specific markers (Raz and Shilo 1992) have shown that the midline glial cells affected by the fib mutation include the midline glial cells that fail to migrate and separate the commissures in spi mutants (K1/imbt et al. 1991). A CNS phenotype similar to that of spi, S, and rho embryos, with fusion of the anterior and posterior commissures, is seen in temperature-shift experiments with fib 1F26 embryos (Raz and Shilo 1992). One explanation for the similarity of the fib and spi phenotypes in the CNS is that interaction of the spi ligand with DER on the surface of the midline glial cells stimulates the midline glial cells to migrate and separate the commissures. If DER is the receptor for spi, why do fib null mutations have a more severe phenotype than spi mutations? One possibility is that the null embryonic phenotype of spi has not been seen. Even if one of the characterized spi mutations is a null mutation, maternal protein is likely to be present in spi embryos. Previous work by Mayer and Niisslein-Volhard (1988) demonstrated a maternal requirement for spi, showing that spi is cell lethal in germ-line mosaics. In addition to spi, other ligands for DER undoubtedly exist; the other developmental functions of DER (Price et al. 1989;Schejter and Shilo 1989) suggest the existence of multiple DER ligands. To produce a receptor for the spi ligand, the rho product might function together with DER, perhaps to facilitate the ligand activation of the DER tyrosine kinase. Spatial regulation of this interaction would occur as a consequence of the localized expression of rho. Further genetic and biochemical tests of this model are in progress. Growth factors as conserved developmental signals acting as positional cues A substantial body of recent experimental evidence supports the idea that growth factor-related molecules play key roles in the establishment of anteroposterior and dorsoventral polarity during early Xenopus embryogenesis (for a recent review, see Melton 1991). 
When added to animal cap explants in varying combinations, Xenopus growth factors related to transforming growth factor-B (TGF-B) and basic fibroblast growth factor (bFGF) induce the formation of tissue types derived from different axial locations (Kimelman and Kirschner 1987;Smith 1987;Weeks and Melton 1987;Green and Smith 1990). The first evidence that growth factors might play similar roles in Drosophila development was obtained from analysis of the dpp locus, dpp plays a major organizing role in establishing dorsal tissue fates (Irish and Gelbart 1987) and encodes a protein that is related to the TGF-f~ class of growth factors (Padgett et al. 1987). The role for growth factors in establishing the dorsal-ventral axis is expanded by the finding that an EGF/TGF-cx-like growth factor encoded by the spi gene is required in a ventrolateral strip of blastoderm cells. These cells correspond to the initial domain of rho expression (Bier et al. 1990) and are ventral to those requiring dpp activity. It is tempting to speculate that graded levels of dpp and spi activity might act to set up more precise positional values in the dorsal and ventrolateral regions, respectively. It is also possible that cells near the boundaries of the dpp and ventrolateral domains are specified by combinations of dpp and spi activity. It will be interesting to determine whether a spi homolog is present in vertebrates and whether EGF-type diffusible factors participate in dorsoventral axis formation during Xenopus development. In Xenopus it appears that early patterning is initiated by the localized secretion of diffusible factors, which then activate various genes encoding transcription factors in restricted domains of the embryo. In contrast, early Drosophila embryonic patterning along both the anteroposterior and dorsoventral axis is based largely on cascades of localized transcription factors that diffuse short distances in the syncytial embryo. It is possible that the recent evolution of the syncytial blastoderm in long germ-band insects such as Drosophila has led to an elimination of some but not all of the dependence on diffusible factors. These diffusible factors may provide a mechanism to initiate patterning in cellularized embryos where transcription factors cannot directly diffuse from one cell to another. Thus, the apparent differences in the establishment of early patterning in Drosophila versus Xenopus may be superficial, reflecting the comparatively recent ability of transcription factors to diffuse directly in syncytial embryos. Stocks were maintained at 25°C on standard commeal/molasses/agar medium. Germ-line transformation The 12.5-kb BamHI fragment from phage R37 was subcloned into P-element vector pCaSpeR2 (Pirrotta 1988) to construct Bam-8. Note that the proximal BamHI site in phage R37 was created fortuitously in the EMBL3 library construction and does not correspond to a genomic site. The Barn-8 plasmid was digested with NotI and electrophoresed, and the larger of the two resulting restriction fragments was gel purified and religated to generate plasmid Not-3. Following standard protocols (Spradling 1986), embryos were injected with a mixture of recombinant plasmid (800 mg/ml) and helper plasmid (200 mg/ml). The helper plasmid used was pD2-3 (Laski et al. 1986). Isolation and hybridization analysis of genomic DNA Genomic DNA from Df(2L)VA17/CyO flies was cloned into 1-Dash (Stratagene) by use of the Gigapack packaging extract according to the manufacturer's specifications (Stratagene). 
A second library was made under identical conditions with Oregon-R wild-type flies. The Df(2L)VAI 7 library was screened with plasmid p1022 (Gilbert et al. 1984). The phage hybridizing strongly to the probe were isolated, and a phage containing DNA spanning the Df(2L)VA17 breakpoint was identified by restriction analysis. Restriction fragments from this phage were used to initiate a phage walk from the wild-type 1-Dash library, and the walk was extended by use of both this library and an EMBL3 library (Blackman et al. 1987). In Figure 1, phage 5-2, 4C, and R37 are from the EMBL3 library, and the remaining phage are from the 1-Dash library. Plasmid rescue of the P-element insertion in spi IDBz was carried out according to the protocol of Wilson et al. (1989). For Southern analysis, 5-10 ~g of genomic DNA was digested overnight, electrophoresed, and blotted according to standard protocols (Maniatis et al. 1982). DNA hybridization probes were radiolabeled by the random primer method (Feinberg and Vogelstein 1983). Isolation of cDNAs and RNA analysis The 1.3-kb EcoRI restriction fragment was used as a hybridization probe to isolate phage from a 0-to 4-hr embryonic cDNA library cloned into Kgtl0 (Frigerio et al. 1986). Twelve phage were isolated, and four were analyzed in detail. A 1.1-kb EcoRI-PstI derivative of the 1.3-kb EcoRI fragment was used to isolate phage from a 9-to 12-hr embryonic cDNA library cloned into kgtl 1 (Zinn et al. 1988). Three phage were isolated, and one was analyzed in detail. Each of the five phage analyzed appeared to contain a single EcoRI fragment, which was gel-purified and subcloned into pBSK + for sequencing. Total RNA was isolated from staged embryos, larvae, and pupae by the guanidinium/cesium chloride method {Maniatis et al. 1982) and affinity purified on oligo(dT)-cellulose (Collab-orative Research). Northern blot analysis was performed by using standard methods (Maniatis et al. 1982). DNA hybridization probes were radiolabeled by the random primer method (Feinberg and Vogelstein 1983), and single-stranded RNA probes were made from fragments cloned into pBSK + vectors, according to the method of Melton et al. (1984). DNA sequencing and computer analysis DNA sequencing (Sanger et al. 1977) was carried out on doublestranded plasmids by use of T 7 polymerase (Pharmacia). Templates were made both by subcloning different restriction fragments into pBSK + (Stratagene) and by generating nested exonuclease III deletions of subcloned fragments with the Erase-a-Base system (Promega). Specific primers were synthesized to extend the sequence on a given strand. The sequence of the genomic DNA was determined on both strands. DNA sequence analysis utilized the Wisconsin Genetics Computer Group sequence analysis package and the GenBank and GenPept data bases. Optimal amino acid alignment between two sequences was made by using the BESTFIT program (Devereux et al. 1984). Homology searches utilized the FASTA program (Pearson and Lipman 1988), and the hydropathy profile was generated using the Kyte and Doolittle (1982) algorithm. Production of polyclonal antibodies against TT--spi fusion protein The SpeI-EcoRI fragment from spi cDNA cl-7 was ligated into the bacterial T 7 expression vector pAR3040 and transformed into the BL21D3 strain (Studier and Moffatt 1986). 
After induction with IPTG, inclusion body preparations were electrophoresed with SDS-PAGE (Harlow and Lane 1988) and the protein band corresponding to the T7-spi fusion was excised and injected into rats for production of polyclonal antisera. Positive sera were purified according to a standard protocol (Harlow and Lane 1988) with an antigen-coupled affinity column made by coupling activated agarose beads (Bio-Rad) to soluble fractions of inclusion body preparations from strains containing either the T7 vector alone (nonspecific column) or the T7-spi construct (specific column). The antigen-bound antibodies were eluted at pH 2.5 and neutralized to pH 8.0 according to Harlow and Lane (1988).
In situ hybridization
In situ hybridizations were done by use of nonradioactive digoxigenin-labeled probes as described by Tautz and Pfeifle (1989). The DNA probes were made from a gel-purified 1.6-kb fragment of the spi cl-7 cDNA clone. Strand-specific RNA digoxigenin probes were generated by in vitro transcription as described by Melton et al. (1984). Both DNA and RNA probes were made by using labeling kits from Boehringer Mannheim.
Analysis of embryos
Cuticle preparations were done as described by van der Meer (1977). To examine the CNSs of mutant embryos, we used an anti-HRP polyclonal antibody (Cappel) and mAbBP102 (an antibody that stains CNS axons, from A. Bieber and C.S. Goodman, University of California, Berkeley, CA). The lacZ expression pattern of spi ID87 was determined by use of a mouse anti-β-galactosidase primary antibody (from Promega). To examine the PNS of spi embryos, we used a variety of cell markers: anti-HRP (which stains neuronal membrane as well as scolopale cells of the chordotonal organs) and three mouse monoclonal antibodies: mAb44C11, which labels all neuronal nuclei; mAb21A6, which labels the scolopale and the tip of dendrites of external sensory organs; and mAb49C4, which stains a subset of chordotonal neurons. The protocol for immunohistochemical staining with those antibodies is described in Bodmer et al. (1987).
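The transmembrane-domain assignment above rests on a Kyte-Doolittle hydropathy profile. As a minimal illustration of how such a profile is computed, the following Python sketch averages per-residue hydropathy values over a sliding window; the window length of 19, the flagging threshold of 1.6, and the demo sequence are assumptions chosen for demonstration, not values taken from this paper.

# Minimal Kyte-Doolittle hydropathy scan (window, threshold, and sequence are illustrative assumptions).
KD_SCALE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy_profile(seq, window=19):
    """Return (center position, mean hydropathy) for every full-length window."""
    values = [KD_SCALE[residue] for residue in seq]
    profile = []
    for start in range(len(seq) - window + 1):
        mean = sum(values[start:start + window]) / window
        profile.append((start + window // 2, mean))
    return profile

if __name__ == "__main__":
    # Hypothetical sequence: a hydrophobic stretch flanked by polar residues.
    demo = "MKRNNSTDE" + "LLVVALLIVAVLFLIGVLL" + "KRDESNQTK"
    for center, score in hydropathy_profile(demo):
        flag = "  <-- candidate transmembrane window" if score > 1.6 else ""
        print(f"position {center:3d}  mean hydropathy {score:+.2f}{flag}")

Windows whose mean hydropathy stays above roughly 1.6 across 19 residues are the usual signature of a membrane-spanning segment; evidence of this kind is what the hydrophobicity-based delimitation of the putative spi transmembrane domain relies on.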
2018-04-03T00:11:27.665Z
1992-08-01T00:00:00.000
{ "year": 1992, "sha1": "1354f4e507757ddf3b3f151f8fa6345fd22e6a6f", "oa_license": null, "oa_url": "http://genesdev.cshlp.org/content/6/8/1503.full.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "92ebc48a0e5681b82434df8b071f1b142035ad23", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
265174582
pes2o/s2orc
v3-fos-license
Tobacco Vendors’ Perceptions and Compliance with Tobacco Control Laws in Nigeria
Tobacco vendors are critical stakeholders in the tobacco supply chain. This study examined their perceptions of, compliance with, and the potential economic impact of Nigeria’s tobacco control laws related to the retail setting. This was a qualitative study involving in-depth interviews of 24 purposively selected tobacco vendors. The face-to-face interviews were aided by a semi-structured interview guide, audio-recorded, transcribed verbatim, and analyzed using thematic analysis with NVivo version 12. Five themes emerged, encompassing reasons for selling tobacco, awareness, perception, compliance with tobacco sales laws, the potential economic impact of the laws, and law enforcement activities. Vendors commenced tobacco sales due to consumers’ demand, profit motives, and advice from close family relatives. They were unaware of and non-compliant with most of the retail-related laws. Most participants had positive perceptions about the ban on sales to and by minors, were indifferent about the ban on Tobacco Advertising, Promotion and Sponsorships (TAPS) and product display, and had negative perceptions about the ban on sales of single sticks. Most vendors stated that quitting tobacco sales would not have a serious economic impact on their business. In conclusion, the vendors demonstrated limited awareness of and non-compliance with various retail-oriented tobacco control laws in Nigeria. Addressing these gaps requires targeted educational campaigns and effective law enforcement strategies to enhance vendors’ compliance.
Introduction
Tobacco use is estimated to kill about 8 million people globally every year, and 80% of these deaths occur in low- and middle-income countries (LMICs), which include Nigeria [1]. These deaths are a result of several diseases associated with tobacco use [2]. A major vector of the causative agent, tobacco products, is the tobacco industry and its front groups and allies [3]. The tobacco industry aggressively recruits new customers as replacements for dying smokers and those who quit [4]. Tobacco vendors/retailers are critical stakeholders in the tobacco supply chain [5]; they are responsible for placing tobacco products into the hands of the users. The activities of tobacco vendors are important areas targeted by the tobacco industry and tobacco control laws [6,7]. Tobacco control laws, including the ban on sales to and by minors, the ban on tobacco advertising, promotion, and sponsorships (TAPS), and the ban on product displays, are proven methods of reducing the prevalence of tobacco use [8][9][10][11]. Hence, Section 15 (1) of Nigeria's National Tobacco Control Act, 2015 (NTCA) prohibits the sale of tobacco and tobacco products to and by a minor (below 18 years). Section 15 (3) also obligates a tobacco vendor to display signage stating that tobacco sales to minors are prohibited, and Section 15 (5) prohibits sales of single cigarette sticks. Similarly, Section 12 (1), (2) (b), and the First Schedule (3) ban TAPS, including product display, with the exception being consenting adults [12]. However, despite these tobacco control laws in Nigeria [12], the prevalence of tobacco use continues to increase in the country, especially among children and adolescents [13]. This suggests that there may be problems with the implementation of, or compliance with, the laws. However, there has been a lack of empirical data to confirm or refute these possible reasons.
One study carried out in the Ibarapa community of Oyo State, Nigeria, reported that about two-thirds of the vendors considered tobacco sales profitable, but the majority were willing to participate in tobacco control programs [7].However, there are gaps this study did not address.Firstly, the study did not report the vendors' awareness, perception, and compliance with the existing laws.Secondly, the study was conducted in a rural setting, and considering the differences in the socio-demographics of the populace in rural and urban areas in Nigeria, it is important to know what vendors in urban areas think about tobacco control laws.Hence, this study was conducted to provide insight into the perception of tobacco vendors in urban and peri-urban areas and the extent of their compliance with the laws relating to tobacco retailing in Nigerian settings.The potential economic implications of stopping tobacco sales on the vendors were also explored. Study Design This study employed a cross-sectional design.It was a qualitative study that used in-depth interviews.Interviews were aided by a semi-structured guide (Supplementary Materials). Study Area and Population The study was carried out among tobacco vendors in Ibadan, Oyo State, Nigeria.Ibadan is the biggest city in sub-Saharan Africa, has both urban and suburban areas, and is home to one of the biggest tobacco factories in West Africa, with a strong tobacco distribution network within the city [14].Tobacco vendors are commonly found near educational institutions, commercial parks, markets, and adjoining streets of these places [14].A sample of vendors selling on the streets and those who have kiosks/corner shops in and around educational institutions, commercial parks, and markets were recruited into the study. Participants' Eligibility and Selection A purposive sampling technique was employed to recruit a total of 24 participants from a diverse group of tobacco vendors.There was a good representation of the vendors based on demographics and location of retail outlets (roadside, near schools, commercial motor parks, and markets).Participants were adults (18 years and above) who sold one or more tobacco products, either alone or with other non-tobacco products in Ibadan, Nigeria.After interviewing the 24th participant, there was no new information, suggesting that data saturation had been reached [15]. Ethical Considerations Approval for this study was received from the Sefako Makgatho Health Sciences University's Research Ethics Committee (SMUREC/H/90/2021: PG) and the University of Ibadan/University College Hospital Ethics Committee (UE/EC/21/0201).A detailed explanation of the study was provided to all the participants.They all agreed to participate in the study and subsequently signed the informed consent forms. 
Data Collection One of the authors (OFF), with a Master's degree and previous experience in qualitative studies, and who was specifically trained for this study, conducted all the interviews between December 2021 and January 2022.The interviewer was assisted by a trained and experienced assistant.The interviews were face-to-face with each participant in a relaxed atmosphere, mostly at their sales location, but devoid of distractions.The interviews were guided by a validated semi-structured interview guide and audio recorded.The questions were carefully constructed to explore participants' perception and compliance with the laws prohibiting tobacco sales to and by minors, sales of single cigarette sticks, point of sale (POS) advertisements and product display, smoking in their stores/kiosks (public places), and having signage stating that tobacco sales to minors are prohibited.The participants' awareness and opinions about the laws, their enforcement, and their willingness to abide by them were also sought.The interviewer asked follow-up questions based on the responses from the participants.The average interview time was about 25 min, with the range being 15-35 min.The interviewer and research assistant also maintained an observation sheet to record information such as the presence of tobacco advertisements, tobacco product displays, and the display of signage indicating tobacco sales to minors are prohibited. Twenty interviews were conducted in the Yoruba language and three interviews were in the English language.The voice-recorded interviews were transcribed verbatim after each interview and those conducted in Yoruba were subsequently translated into the English language.This was done concurrently and iteratively with the data collection to explore emergent ideas further in subsequent interviews. Data Analysis The transcripts were checked for errors, after which they were imported into a qualitative data analysis software-NVivo version 12 where thematic analysis of the data was conducted.A codebook (Supplementary Materials) was developed using mainly the deductive approach.An initial set of codes, based on the interview guide, served as the foundation of the codebook.These codes were subsequently refined and expanded to incorporate emerging themes and subthemes throughout the process of data analysis. One of the authors (OFF) generated the initial codes, and after the coding process, a repeated pattern of the codes was systematically identified and grouped into potential themes.An initial thematic map was created, and the themes were sorted into the main themes and sub-themes.A co-author (COE) also independently analyzed the data from one transcript and generated themes which were subsequently compared to the initial themes by OFF and areas of discrepancies were resolved after discussing with the second co-author (OAA).The study participants' sociodemographic characteristics (age, sex, tribe, educational status, study area, location, and types of POS) were summarized and reported as frequencies and proportions. Sociodemographic Characteristics and Tobacco Use Status The participants' ages ranged from 31 to 70 years with a mean (±SD) age of 47.6 (±11.8)years.Most were females (n = 20), belonged to the Yoruba tribe (n = 22), and 12 (50.0%)had secondary-level education as the highest level of education.The majority were in urban communities (n = 16) and sold their tobacco products in shops (n = 14).Most participants had never used tobacco (n = 21) (Table 1). 
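The descriptive summary described in the Data Analysis section (mean ± SD for age; frequencies and proportions for the categorical variables) can be produced with a few lines of Python. The following is a hedged sketch only: the column names and toy records are invented for illustration and are not the study data.

import pandas as pd

# Toy records standing in for the 24 vendors (values are illustrative only).
vendors = pd.DataFrame({
    "age": [45, 60, 38, 52, 31, 70, 47, 55],
    "sex": ["F", "F", "M", "F", "F", "F", "M", "F"],
    "location": ["urban", "urban", "peri-urban", "urban",
                 "peri-urban", "urban", "urban", "urban"],
    "pos_type": ["shop", "kiosk", "shop", "shop",
                 "street", "shop", "kiosk", "shop"],
})

# Mean (±SD) for the continuous variable.
print(f"Age: mean {vendors['age'].mean():.1f} (±{vendors['age'].std():.1f})")

# Frequencies and proportions for each categorical variable.
for column in ["sex", "location", "pos_type"]:
    counts = vendors[column].value_counts()
    summary = pd.DataFrame({
        "n": counts,
        "percent": (counts / len(vendors) * 100).round(1),
    })
    print(f"\n{column}:\n{summary}")

Running this on the actual vendor records would yield the kind of frequency/proportion table reported as Table 1.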
Themes and Sub-Themes There were seven main themes derived from the data inductively (Table 2).with money and we need to sell it and make our profit, so that's why we sell it for them at times. Potential economic effect of stopping tobacco sales Limited profit from tobacco sales P9: We can say that there is profit in it.However, it's not something much, but it sells fast.Let's say I buy six packs of cigarettes; it is possible to sell all six packs on the same day.If I gain ₦50 on each pack, this makes ₦300 [$0.72] in total. The proportion of overall profit from tobacco sales Interviewer: Assuming you made a total profit of ₦100 in a day, how much of this profit is usually from cigarettes/tobacco sale?P5: I think it will be ₦20 or ₦30 [20%-30%].That's why I said the sales are fast, but not that the profit is much. Vendors' perception of the potential economic impact of ending tobacco sales P5: Honestly, it won't have any effect.I said I don't make much profit, so if people are asked to stop selling tobacco today, it will not have any effect on my overall business. Vendors' experience with the activities of the enforcement agents Lack of education and enforcement of the laws relating to the retail settings by the enforcement agents P17: Nobody has ever come here to educate me about the laws [tobacco control laws] Enforcement agents' activities regarding the roll-your-own tobacco product (jedi) Interviewer: Oh, does that mean that the police arrest people who sell "jedi"?P12: Yes, they arrest them [jedi smokers] a lot.They had met people smoking cigarettes in my shop before; they searched all their bodies down to their underpants, but they didn't find anything like marijuana or jedi on them; that was when they released them. Theme 1: Vendors' Reasons for Commencing Tobacco Sales The most common reason the vendors provided for starting to sell tobacco products was the high demand from their customers who drink alcohol.The vendors felt compelled to start selling cigarettes because when those who intend to buy alcoholic drinks request cigarettes and the vendors do not have them, the potential customers usually go to other stores where they can buy both.Thus, the vendors lose out on selling their alcoholic drinks.P20: It is part of the business I am doing because I sell alcohol, and when people come to buy alcohol, they commonly ask for cigarettes too.Some will not want to buy alcohol if they do not see cigarettes, so that is the major reason why I added cigarettes sale to the business. The second most common reason was the need to feed their family.They believed tobacco sales would be very lucrative because many people, especially youths, were smoking cigarettes.Though many of them recognized that tobacco was harmful, the need to make a living and earn a profit was a greater motivation for them to sell. Finally, the influence of the vendor's close family relative also played an important role in their decision to start selling tobacco. P15: My mom used to sell cigarettes too, so I decided to add cigarettes sale to my pepper-grinding business, along with selling dry gins [alcoholic drinks]. Theme 2: Vendors' Awareness of the Tobacco Control Laws Related to the Retail Settings Most of the participants (n = 21) were unaware of any tobacco control law, even after the interviewer probed for the individual laws.The remaining three participants were only aware of the law banning tobacco sales to minors. 
Interviewer: What about a law stating that you do not sell cigarettes to young people below 18 years?P23: There is nothing like that.They send young children to buy it; their brother will send them, and their father will send them, so there is no law forbidding it. P14: Yes, it is written on it, 18 years and above.Apart from that, there is no other law. Theme 3: Vendors' Perception of the Laws After establishing the vendor's awareness of the various laws, the interviewer informed them about all the laws, and what they entail, and subsequently sought their views about each of the laws.The participants' perceptions of the laws could be categorized as negative, positive, and indifferent, and they differed depending on the individual laws. All responses to the ban on sales to and by minors were positive; the participants supported the bans. P16: It will be good if they enact a law banning tobacco sales to minors. However, the ban on the sale of single sticks attracted negative perceptions from the participants.The participants' major concerns were their lack of "capital" to buy products in bulk (cartons) so that they could sell in packs, as most were only able to buy a few rolls of cigarette packs.Secondly, most of their customers could not afford to buy cigarette packs.Finally, participants mentioned that selling cigarettes as single sticks was more profitable than selling them in packs. P12: The single sticks are the most profitable to sell.Because if you sell a whole cigarette pack at once, you will only gain ₦100 ($0.22), but if you sell it in single sticks, you may make as much as ₦250 ($0.55) or ₦200 ($0.44). Many of the participants were indifferent to the ban on tobacco advertisements and the law obligating vendors to have signage stating that minors are not allowed to buy tobacco.The vendors opined that cigarettes sell fast and do not need to be advertised.They stated that potential buyers would always seek out where they are sold.Also, they believed selling alcoholic drinks was enough advertisement since it is a popular notion that whoever is selling alcoholic beverages, especially those in sachets, would be selling cigarettes too.P20: Everybody knows that wherever they sell beer, hot drinks, and other alcoholic drinks, they will surely sell cigarettes there.So cigarettes sale do not need to post a banner [does not need to be advertised]. Theme 4: Vendors' Compliance with the Tobacco Laws The vendors' compliance with the existing point-of-sales-related tobacco control law was generally poor.None of the participants was compliant with the laws banning sales of single cigarette sticks and those obligating them to have signage stating tobacco sales to minors are prohibited (Table 3).Half of the participants engaged minors, including their children and grandchildren, to sell cigarettes on their behalf.A participant got parental consent to employ a 14-year-old girl as her salesgirl.P12: A child can sell.I said that because, for example, this child [pointed to a small child] was brought for me from Ogbomoso [another city in Oyo State] to assist me in selling my goods [including cigarettes] when I am not in the shop.She is less than 15 years old because I specifically asked for a minor. 
Ten out of the 12 participants did not have minors selling for them because they had no children available.Only two participants said they would not allow their children to sell tobacco, and their reason was not that they knew about the law banning tobacco sales by minors but that they wanted their kids to concentrate on their studies.Regarding the participants' compliance with the ban on sales to minors, 18 (75%) of them were not compliant with the law.Although most of them claimed they do not sell cigarettes to minors, when probed further, it was discovered that what they meant was that they do not sell cigarettes to children who intend to smoke them.But they readily sold to minors who told them they were running errands for adults.P15: I sell to children, as long as they are not the ones smoking it, they can be sent to come and buy it from me, and I will sell it. When the participants were asked how they confirm the age of their customers who may be minors, they all responded that their judgment was usually based on the person's facial maturity.P18: Once you see the person, you will know [if he/she is a minor], at least if you want to describe my age now, you'll know how to describe it, so that's how it is. Regarding their compliance with the ban on TAPS, most vendors did not have posters or stickers to advertise their tobacco products, although a few had these advertising aids.Similarly, about a third of the participants engaged in tobacco promotion by giving out free cigarettes to customers.They explained that sometimes the sales representatives of the tobacco industry give them free samples of their new products for their customers to try out.The aim was to provide new products, usually cheaper, as alternatives to the costlier tobacco brands.P21: I don't advertise cigarettes; they come to ask themselves. P13: When I go to Agbeni market [wholesalers], they might give me banners or table mats.The one I collected from "Yes" [cigarette brand] is there, I tied it around the counter for people to see it.Based on the interviewer's observation and responses from the vendors regarding product display, all the participants displayed tobacco products.These products (usually cigarette packs which are sometimes empty) were displayed where passers-by, including children, could easily see them.They were also often placed among non-tobacco products like biscuits, candies, and drinks (alcohol/non-alcohol) (Figures 1 and 2).Regarding their compliance with the ban on TAPS, most vendors did not have posters or stickers to advertise their tobacco products, although a few had these advertising aids.Similarly, about a third of the participants engaged in tobacco promotion by giving out free cigarettes to customers.They explained that sometimes the sales representatives of the tobacco industry give them free samples of their new products for their customers to try out.The aim was to provide new products, usually cheaper, as alternatives to the costlier tobacco brands.P21: I don't advertise cigarettes; they come to ask themselves. P13: When I go to Agbeni market [wholesalers], they might give me banners or table mats. The one I collected from "Yes" [cigarette brand ] is there, I tied it around the counter for people to see it. P17: Yes, those just manufacturing new cigarette brands might give us some [cigarettes]. Last month, some marketers came and gave us one pack to distribute to our customers to taste. 
Based on the interviewer's observation and responses from the vendors regarding product display, all the participants displayed tobacco products.These products (usually cigarette packs which are sometimes empty) were displayed where passers-by, including children, could easily see them.They were also often placed among non-tobacco products like biscuits, candies, and drinks (alcohol/non-alcohol) (Figures 1 and 2).Most of the vendors had transparent plasticware where they put the cigarette packs.Some of these plasticwares were branded by the tobacco companies and given to the vendors by the sales representatives.Some vendors also created special shelves where they arranged empty cigarette packs and placed the shelves in front of their shops to get the attention of potential buyers.P12: To make people know we sell it [cigarettes] here, we will display them outside.I displayed it outside, inside transparent plastic bowls/buckets. Theme 5: Reasons for the Participants' Compliance Status Most participants were unaware of the tobacco control laws guiding tobacco sales in Nigeria, which was the main reason they were non-compliant with the laws.This reason was bolstered by the fact that many of them expressed their willingness to comply with most laws, except the ban on sales of single sticks, when the interviewer informed them.Although, some of the participants said they were willing to comply with the laws, they did not expect these laws to affect their business negatively. P24: I have never heard of any laws. P14: Do we have any authority over the government?It is whatever regulation they bring that we will follow. Many of the participants stated that they usually do not sell tobacco to minors they believed wanted to use them because they were concerned about the potential health effects of tobacco use on children. P17: … That [addiction and health problems] is also the reason I don't sell to children because what a child has been accustomed to from childhood will be very hard to leave later on. However, irrespective of their reservations about the health effects of smoking on children, some vendors still sold cigarettes to children they knew intended to smoke it.The most common reason the vendors sold cigarettes to minors who smoked was their focus on making profits. Theme 6: Potential Economic Effects of Stopping Tobacco Sales The subthemes related to the potential economic impact of the tobacco business included "profit from tobacco sales", "proportion of overall profit from tobacco sales", and the "vendors' perception of the potential economic impact of stopping tobacco sales".Most of the vendors had transparent plasticware where they put the cigarette packs.Some of these plasticwares were branded by the tobacco companies and given to the vendors by the sales representatives.Some vendors also created special shelves where they arranged empty cigarette packs and placed the shelves in front of their shops to get the attention of potential buyers.P12: To make people know we sell it [cigarettes] here, we will display them outside.I displayed it outside, inside transparent plastic bowls/buckets. 
Theme 5: Reasons for the Participants' Compliance Status Most participants were unaware of the tobacco control laws guiding tobacco sales in Nigeria, which was the main reason they were non-compliant with the laws.This reason was bolstered by the fact that many of them expressed their willingness to comply with most laws, except the ban on sales of single sticks, when the interviewer informed them.Although, some of the participants said they were willing to comply with the laws, they did not expect these laws to affect their business negatively.P24: I have never heard of any laws. P14: Do we have any authority over the government?It is whatever regulation they bring that we will follow. Many of the participants stated that they usually do not sell tobacco to minors they believed wanted to use them because they were concerned about the potential health effects of tobacco use on children.P17: . . .That [addiction and health problems] is also the reason I don't sell to children because what a child has been accustomed to from childhood will be very hard to leave later on. However, irrespective of their reservations about the health effects of smoking on children, some vendors still sold cigarettes to children they knew intended to smoke it.The most common reason the vendors sold cigarettes to minors who smoked was their focus on making profits. Theme 6: Potential Economic Effects of Stopping Tobacco Sales The subthemes related to the potential economic impact of the tobacco business included "profit from tobacco sales", "proportion of overall profit from tobacco sales", and the "vendors' perception of the potential economic impact of stopping tobacco sales". Profit from Tobacco Sales The participants' responses concerning the profit they make from tobacco sales can be divided into low-profit, high-profit, and high demand.Most (15) of the vendors reported that profit on the sales of tobacco products, especially cigarettes was meagre.P14: How much is the profit of cigarettes?It is not much.You will even see some that will come to smoke and ask to pay later, and you can't fight them.When you sell a pack of cigarettes, maybe the profit you will make is ₦70 or ₦50 [$0.16 or $0.11].P20: From morning until night, one might make a profit of ₦500 [$1.2] on cigarette sales. Other than the low profit margin on cigarette sales, some vendors complained that the customers usually do not want to pay.The customers often want the vendors to give them a few cigarette sticks as a "gift" for buying other non-tobacco products, especially alcohol.P14: There is not much profit from it.If you are not careful, you will sell a whole pack and will not make any profit.When people come and pick up cigarettes without paying immediately, you will not want to fight them so that they don't see you as an angry person, but you may not see those debtors again. When the vendors were asked why they did not increase the prices to boost their profit, they explained that they find it challenging to raise cigarette prices unilaterally.This is because there are many tobacco vendors; thus, there is an increase in competition.P16: Yes, you cannot add [too much money] to the cigarettes prices.Customers will tell you they usually buy it cheaper somewhere else.And there are a lot of people selling cigarettes. 
In contrast to the low-profit margin, a few (3) participants reported a potential for a high-profit margin on cigarette sales, especially if the vendor has a huge capital to buy in cartons, similar to wholesalers.P15: When it comes to cigarettes, people buy them a lot from me.If I sell the different brands I have, sometimes, if God blesses my market, I make ₦1000 [$2.2] as profit daily.Like the one I am selling now, it's a pack that remains like three sticks now, and I will at least sell more before night. Most vendors reported that many tobacco users visit their shops to buy cigarettes and may also buy other products.For some of them, the only reason they were still selling cigarettes, despite its low profit, was because it increased the traffic of potential customers visiting their shops/point of sales (POS).P3: I don't profit from cigarettes sale, but it increases the number of customers who come to my shop. The Proportion of Overall Profit from Tobacco Sales Most participants reported that the proportion of their overall profit from tobacco sales compared to other non-tobacco products that they also sell was small, while only one participant mentioned that it was about average (50%).Generally, the profit they made from selling other non-tobacco products was higher than what they made from cigarette sales. Interviewer: But let's put it this way, if, at the end of today, you made ₦100 [$0.24] as profit from all the products (tobacco and non-tobacco) that you sold, how much out of that ₦100 would have come from tobacco/cigarettes sale? Vendors' Perception of the Potential Economic Effect of Ending Tobacco Sales on Their Overall Business None of the participants believed that completely stopping tobacco/cigarette sales would seriously affect their overall business.In contrast, many said it would not affect their overall business, with a few stating they were already considering stopping tobacco sales. Others reckoned that it might cause a reduction in the number of people who patronize them, but it would not be enough to put them out of business altogether.P9: It won't affect my overall profits because I have been thinking I want to stop selling it.This is because of how those smoking it behave.Some will even be commanding me, "go and find lighter or matches for us o", you know, talking like touts, so I'm starting to lose interest in it. P3: I won't have to close down my business, but sales will be dull.I won't have many customers visiting my shop. The vendors mentioned that they could sell other non-tobacco products instead of cigarettes. P10: If we are asked not to sell it [tobacco] again and pack up, I will go for other things.I used to cook Indomie and egg before and also drinks. Theme 7: Vendors' Experience with the Activities of the Enforcement Agents The following sub-themes emerged during the discussion around the effectiveness of the laws: Enforcement Agents Do Not Provide Education about the Laws All participants mentioned that they have never received any information or education about Nigeria's existing tobacco control laws, especially those related to the point of sale (POS). Law Enforcement Agents Do Not Enforce Laws around Cigarettes/Tobacco Sale at the POS The law enforcement agents had never visited most vendors' POS to ensure they complied with the tobacco control laws. Interviewer: Have the law enforcement agents come to your shop to see whether you comply with tobacco control laws?P9: No, I've never seen them. 
Even when the law enforcement agents visited the vendors' POS, they did not pay particular attention to cigarette sales.They were primarily interested in those who sold cannabis, which is banned in Nigeria. Law Enforcement Agents Are Concerned about the Sales and Use of "jedi" (RYO) The tobacco product that usually receives the agents' attention is "jedi" (shredded tobacco used as RYO).However, the vendors' account of how the enforcement agents deal with people who sell or use jedi was different.Some stated that the enforcement agents arrest those who sell or use "jedi", similar to how they treat those who sell or use cannabis.Other vendors stated that because it looked like cannabis, the agents only checked to confirm it was jedi and would leave them without making an arrest once they confirmed that it was not cannabis. However, most of the participants avoid selling "jedi" because it often attracts the attention of enforcement agents because of its semblance to cannabis. Discussion The success, or otherwise of tobacco control laws regarding retail settings largely depends on the tobacco vendors' compliance with the sections of the National Tobacco Control Act, 2015 (NTCA) relevant to their business.This study explored the vendors' compliance with the tobacco-control laws related to the tobacco retail business. Most of the study participants were females, and even though there was an attempt to purposively select a more gender-balanced study population, it was difficult to achieve this.This suggests that females dominate the tobacco retail business in Ibadan, Nigeria, and probably across the country.This finding is supported by a quantitative study conducted in Oyo State, Nigeria, where the authors reported that 95% of the tobacco vendors were females [7].Most studies on tobacco vendors do not pay attention to the sex of the vendors [16][17][18][19].Sex has been associated with compliance with existing laws, with females being more willing to comply with the laws than males [20]. 
The three most common reasons the vendors stated for selling tobacco products were the pressure/demand from their alcohol-consuming customers, the need to earn profit for daily living, and the influence of their close family members. Tobacco and alcohol use are two strongly associated behaviors [13,21]. Those who drink alcohol are more likely than those who do not drink to smoke tobacco, and vice versa [13,21]. Hence, the demand for tobacco from someone who sells alcohol would be high and can be a motivation for them to combine sales of tobacco products with their alcohol sales. Similar to the finding in this study, a previous study in Pakistan also reported that the demand for tobacco products was a major reason the vendors commenced the tobacco business [22]. Some participants commenced tobacco sales due to the need to earn a living and the belief that selling cigarettes is a lucrative business. Thus, vendors should be discouraged from commencing tobacco sales by making the business less lucrative and less profitable, which can be done by increasing the tax on the products. However, the government should also support the vendors to engage in the sale of other non-tobacco products as a replacement for tobacco sales. Many of the vendors are likely to embrace this gesture because, contrary to their initial belief before they commenced the tobacco business, most of the vendors reported that the profit from cigarette sales is very low and that their overall profits from other non-tobacco products were higher. This finding is similar to those of previous studies conducted in high-income countries (HICs) [23][24][25][26] and LMICs [27]. The result of this study contradicts the usual claims of the tobacco industry that the vendors are making a lot of profit from tobacco sales and that stricter laws by the government will cause serious economic losses for the vendors [26,28,29]. The tobacco industry claims that many vendors would be out of business, thus increasing unemployment [28]. However, several studies have shown that this claim by the tobacco industry is largely incorrect [29,30]. The participants in this study corroborated this stance by declaring that they would simply switch to non-tobacco alternatives and remain in business. Some were already considering stopping cigarette sales and focusing on other non-tobacco products.

The reason many of the vendors have continued selling cigarettes, despite the limited profit, was that they believed tobacco sales generate "footfall", an increase in the number of people/smokers visiting their stalls who can potentially buy other non-tobacco products. This is one of the tobacco industry's claims [25,26] and has been reported by other tobacco vendors [23]. However, there has been considerable debate about whether footfall from tobacco sales drives sales and profits of non-tobacco products. While the claims of the tobacco industry and the vendors have been largely subjective, empirical post-purchase surveys indicated that the footfall from selling tobacco products does not significantly contribute to sales and profits for non-tobacco products [24,31-33]. The vendors should be reassured that stopping tobacco sales and switching their focus to selling non-tobacco products is not likely to hurt their income.
Awareness of laws is crucial for compliance, as vendors are unlikely to comply with laws they do not know exist. Consistent with findings from a previous study in Oyo State, Nigeria [7] and other LMICs [34,35], most vendors in this study were unaware of the Nigerian tobacco control laws affecting retail settings [7]. Thus, the relevant government agencies and other stakeholders must carry out public sensitization and educational campaigns about the tobacco control laws in Nigeria. If these educational programs are effectively executed, they will positively influence vendors' compliance with the laws, as has been successfully demonstrated in other countries [36][37][38].

Considering the low level of awareness of the laws among the vendors in this study, the level of compliance was, understandably, generally poor, a common finding in studies conducted in LMICs [6,34,39-44]. None of the participants in this study displayed the signage that tobacco cannot be sold to minors, and they all sold single sticks of cigarettes. Half of the participants had minors selling for them, and most of those who did not use minors refrained only because they did not have access to children. Most participants in this study did not comply with the law banning sales to minors, and such non-compliance is common in LMICs [34,43,44]. Although some participants claimed they do not sell tobacco to minors, further interaction showed that these vendors only meant they do not sell to minors who intend to use the product. However, they usually sell to children they believe were sent on errands by adults. Sending minors on errands to purchase tobacco products is common in Nigeria [45], and many vendors do not ask for the age of the buyers [46]. Instead, as this study has shown, the vendors usually judge the buyer's age solely by appearance, and this contravenes the law, which stipulates that the vendor should "verify the age of the purchaser by checking any form of official identification" [12].

In contrast to the low compliance with the ban on tobacco sales to minors in this study, a study in Mumbai, India, reported that many retailers complied with the law [6]. However, the Mumbai study assessed self-reported compliance, and there were no indications that the vendors were asked specifically about selling to minors who were sent on errands. The findings in this study have exposed the possible error in self-reported compliance with the law banning tobacco sales to minors. Many vendors did not know that selling tobacco to minors running errands was illegal, so when they were asked if they complied with the ban, they simply answered "yes", albeit incorrectly. Thus, studies, especially those utilizing self-administered questionnaires, should note and address this potential bias.
The reason most of the vendors did not sell to minors whom they believed intended to use the products themselves was their perceived "social obligation" to protect the children from the negative health effects of tobacco use, and this is similar to findings in previous studies [22,47]. The perception of the participants about the different tobacco laws was directly related to their willingness to comply with the specific laws. Their perceptions of each law, after they were informed by the interviewer, could be classified as positive, negative, or indifferent. Most of the vendors had a positive perception of the ban on tobacco sales to and by minors. They considered it unhealthy for a child to use tobacco because of the increased risk of addiction and other harmful effects. This finding is similar to that of previous studies from HICs, which reported that over 90% of the vendors had a positive perception of the ban on sales to minors [47,48]. Thus, it is unsurprising that they were all willing to comply with the laws banning tobacco sales to and by minors. These findings agree with the result of a previous study conducted in Oyo State, Nigeria, where the majority of the vendors were also willing to support the legislation on the ban of sales to minors and were willing to display signage [7].

Many of the vendors in this study were also willing to comply with the law banning tobacco advertisements and the law mandating the display of signage showing that tobacco sales to minors are not allowed. Again, this is likely due to their 'indifferent' perception towards the two laws. Many participants in this study claimed they usually do not advertise because the tobacco business does not need to be advertised, particularly since they were also selling alcohol, and potential customers know that when a vendor sells alcohol, they would also have tobacco on sale. The implication is that the effective enforcement of the ban on tobacco advertisement and product display may not lead to the desired result if the same is not done for alcoholic drinks. Reports [49,50] have shown that the tobacco and alcohol companies have alliances to undermine policies and limit the impact of legislation on their businesses. They have various marketing strategies to jointly promote their products and reinforce brand appeals [49,50]. Hence, as Jiang and Ling (2011) opined, a holistic approach, including the effective ban on both tobacco and alcohol advertisement and product display, should be adopted as one of the tobacco control strategies in Nigeria and other countries with similar practices [50].

The vendors had an overwhelmingly negative perception of the law banning the sale of single cigarette sticks, and most were not willing to comply with the law. One of their reasons was that they did not want retail sales of tobacco products to become a capital-intensive business. With a little money, the vendors could buy a few packs of cigarettes, sell them as single sticks, and make a profit. With a law compelling them to sell only packs, they would need more capital to buy rolls and cartons of cigarettes, and they believed it would be difficult for them to get such huge capital. Some of the vendors also mentioned that the profit they make when they sell 20 single sticks of cigarettes is usually higher than when a pack of 20 sticks is sold at once. Many of the vendors were also against this law because they believed most of their customers would not be able to afford cigarette packs, and this could result in a reduction in demand for cigarettes.
The vendors' negative perception and unwillingness to comply with the ban on the sale of single sticks mean that relevant agencies need to pay more attention to educating the vendors and enforcing the law. Studies [47,51,52] have shown that active enforcement of tobacco control laws, including frequent visits to the POS and effecting penalties for offenders, was positively associated with vendors' compliance with the law. The vendors in this study stated that the enforcement agents were neither educating vendors about nor enforcing the tobacco control laws. Although the possible reasons were not explored and are beyond the scope of this study, we believe that this may be due to the enforcement agents' lack of awareness of the laws or of their roles in implementing them. We, therefore, recommend that the government and other stakeholders ensure that all the relevant agencies are aware of the laws and equipped to carry out their implementation and enforcement.

This study is not without its limitations. Being a qualitative study conducted among purposively selected tobacco vendors in Ibadan, Nigeria, its results may not be generalizable to vendors in other settings. However, while the findings must be interpreted cautiously, they do not conflict with those of other studies.

Conclusions

This study showed that the overall level of compliance of the vendors with tobacco laws related to the retail business was very poor, and the major reasons for the lack of compliance were their lack of awareness of the laws and nonexistent enforcement. The participants expressed a positive view of the ban on sales to minors and sales by minors. They were indifferent to the ban on tobacco advertisement and product display and to the requirement to display signage, but they all held a negative perception concerning the ban on sales of single cigarette sticks.

Most vendors reported that while tobacco sales generate footfall, the profit is low, and the proportion of their overall profit from tobacco sales was minimal. Most of them did not foresee a serious economic effect on their overall business if they completely stopped selling tobacco. Overall, the participants were willing to comply with the laws.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph20227054/s1, Figure S1: Question guide; Table S1: Codebook - tobacco vendors' compliance with the laws.

Informed Consent Statement: Informed consent was obtained from all the participants involved in the study.

Interviewer: You don't have any children who can help you sell?
P17: My children are still very young.

P17: Yes, those just manufacturing new cigarette brands might give us some [cigarettes]. Last month, some marketers came and gave us one pack to distribute to our customers to taste.

Figure 1. Picture showing rolls of cigarette packs carefully arranged for easy visibility and displayed alongside candies and other edibles in Ibadan, Nigeria, 2022.

P24: I sell Time, Rothmans, Benson, Pall Mall, Esse; that's what I sell. I do not sell jedi or rizlar [tobacco rolling paper] because I do not want a problem... because the moment you start selling jedi and rizlar, you are closer to selling cannabis, and I don't want anyone to arrest me.

Author Contributions: O.F.F. and O.A.A.-Y. conceptualized the study, O.F.F. collected the data, O.F.F.
and C.O.E. analyzed the data, O.F.F. wrote the first draft, and O.F.F., C.O.E. and O.A.A.-Y. reviewed and edited several drafts of the manuscript and approved the final version. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by The African Capacity Building Foundation (ACBF) (Grant #333).

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the University of Ibadan/University College Hospital Ethical Review Committee (UE/EC/21/0201) on 31 June 2021.

Table 1. Sociodemographic characteristics and tobacco use status of the study participants.

Table 2. The themes and subthemes on tobacco vendors' perceptions of and compliance with tobacco control laws in Nigeria. Focus on profit from sales. P3: When you look at the age of the young boys who smoke cigarettes, you will see that they are about the age of 16 years. And at that age, they start smoking, which ought not to be. But we just overlook it because we bought it [cigarette]

Table 3. The participants' compliance with the tobacco control laws in Nigeria.
2023-11-15T16:03:02.349Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "a2909f92cd9d6b1a5f280a82a785450183513930", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/20/22/7054/pdf?version=1699780585", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d777ae09dea591d26276b471c65f74fc3f17189d", "s2fieldsofstudy": [ "Business", "Sociology" ], "extfieldsofstudy": [] }
944434
pes2o/s2orc
v3-fos-license
The importance of immunophenotyping by flow cytometry in distinction between hematogones and B lymphoblasts

Hematogones are normal B-lineage lymphoid precursors in the bone marrow. B lymphoblasts are immature neoplastic cells present in patients with precursor B-cell acute lymphoblastic leukemia (B-ALL). Hematogones and B lymphoblasts share characteristics, such as an often indistinguishable morphology and the expression of the same antigens on immunophenotypic analysis. Increased numbers of hematogones in patients with B-ALL during bone marrow regeneration after treatment for leukemia, in cases of disease relapse, or after marrow transplantation may raise questions about the nature and prognostic significance of these immature cells. This article presents information about the morphological and immunophenotypic characteristics of B lymphoid precursors and verifies the relevance of immunophenotyping by flow cytometry (FC) in the distinction between those cells. This differentiation is essential to establish a correct prognosis and assist in the medical decision about the most appropriate therapeutic scheme.

INTRODUCTION

Hematogones (HGs) are normal young cells of B lymphoid lineage that are present in small amounts in the bone marrow (BM). These cells are found in healthy individuals of all age groups, but appear in higher numbers in the BM of infants and children, declining significantly with increasing age (1)(2)(3). HGs have a role in the regulation of blood cell production, participating in early B-cell ontogeny. The intense proliferation of these cells may represent a reaction of the immune system, permitting marrow restoration or regeneration (4)(5)(6). This increase may occur in patients with autoimmune or congenital cytopenia, immune thrombocytopenic purpura, iron deficiency anemia, acquired immunodeficiency syndrome (AIDS), or infiltrative neoplasias, after a viral infection, and during BM regeneration after chemotherapy, radiotherapy or marrow transplantation (1, 2, 5-7).

Acute lymphoblastic leukemia (ALL) is a neoplasia characterized by altered development and proliferation of lymphoid cells (which differentiate into subtypes B and T) followed by a maturation blockage in the BM, with resulting accumulation of immature cells called blasts. In precursor B-cell acute lymphoblastic leukemia (B-ALL), B lymphoblasts accumulate in the BM and may disseminate to peripheral blood and/or infiltrate other body tissues (8,9). After the end of B-ALL treatment or BM transplantation, the number of precursor lymphoid cells becomes higher. Such an increase does not indicate relapse or an active infectious process, but may be a manifestation of immune recovery, represented by the presence of HGs (5,6).

In some clinical situations, HGs and lymphoblasts exhibit morphologically indistinguishable characteristics, making it difficult to distinguish minimal involvement by residual or recurrent blasts from a small population of normal progenitor cells in the BM (6,7). A precise discrimination between these lymphoid precursors is fundamental to assist in the therapeutic management of B-ALL patients, because questions about prognosis may arise due to the morphological similarity of these cells in the regeneration phase (2,4,7). Studies have found levels of HGs in the BM ranging from 8% to 55%, with high values being common after chemotherapy or marrow transplantation (5,6). The treatment for B-ALL may debilitate patients, increasing the risk of immune and nutritional complications and possibly impairing the therapeutic response (10).
Distinction between normal and malignant cells at the moment of BM regeneration is essential to establish a correct prognosis and avoid subjecting leukemic patients to unnecessary chemotherapy or radiotherapy sessions (2,11). A possibility to enhance the cellular distinction between HGs and B lymphoblasts is immunophenotyping by flow cytometry (FC), because it provides a simple, rapid and adequate analysis in clinical screening procedures. This technique permits the identification of a cellular profile even when it is present in small populations, contributing to the diagnosis, classification, staging, prognosis and monitoring of hematological malignancies (12)(13)(14)(15).

MORPHOLOGY OF B LYMPHOID CELLS

Hematopoiesis is the process responsible for blood cell proliferation, differentiation, and maturation. The hematopoietic stem cell generates several cell types in this system, such as the lymphoid progenitor cells, which have the property of maturing and differentiating into B, T and natural killer (NK) lymphocytes (16,17). Lymphocytes have different morphological, cytochemical and immunophenotypic characteristics, essential for their recognition. B lymphocytes are small cells (6-10 micrometers in diameter) with a high nucleus/cytoplasm ratio that mature in the BM (18). Cell production is strictly controlled in the bone marrow, which is the primary organ able to generate and differentiate B lymphoid progenitors at different stages of maturation. As methodologies advanced, research focused on the BM of healthy individuals, identifying under-represented cell subpopulations, such as HGs, and characterizing the distinct maturation stages of these cells (16,19,20).

The cytological features of HGs are best characterized in BM aspirate smears (Figure 1A). These precursors are classified as small-sized lymphoid cells, which range from 10 to 12 micrometers in diameter during the immature phase (phase I), and 17 to 20 micrometers during the intermediate (phase II) and mature (phase III) phases (2,5,6). The most mature HGs possess a morphology similar to that of mature lymphocytes, with condensed or coarse chromatin. However, the most immature ones are similar to B-ALL blasts, and in some cases are indistinguishable (2). The HG nucleus is round or oval, sometimes indented, with homogeneous chromatin and, in certain cases, small indistinct nucleoli; the cytoplasm is generally scant, varying from moderately to deeply basophilic, with no inclusions, granules or vacuoles (2,6,11). Marrow aspirate samples are obtained mostly from patients with suspected neoplasia or hematologic abnormalities. As a rule, there are no accepted reference values for HGs, because BM examination is rarely performed in healthy individuals. Even so, a group of researchers considered values ≥ 5% elevated, because at this level HGs become visible in BM smears (1)(2)(3).

Blast morphology in ALL was described according to the French-American-British (FAB) classification in 1976. Lymphoblasts were divided into three subtypes, L1, L2, and L3, defined according to cell size, chromatin state, nuclear shape, nucleolus characteristics, amount of cytoplasm, presence of vacuoles, and degree of basophilia in the cytoplasm (21). The diagnosis of this neoplasia is confirmed by the presence of 25% or more lymphoblasts among the total BM nucleated cells, according to the classification by the World Health Organization (WHO) (Figure 1B).
However, if a blast level lower than 20% is encountered in the marrow, other exams must be performed to confirm the diagnosis (22). During BM regeneration in a patient being treated for leukemia, the normal precursor cells and the neoplastic B lymphoblasts often exhibit morphologically indistinguishable features (6,7,23). When HG morphology is analyzed with methylene blue-based staining techniques, according to Romanowsky, these cells resemble ALL blasts of subtype L1 of the FAB classification (24,25). Although HGs have characteristics of immature cells, they must not be called lymphoblasts (5). Morphological and immunophenotypic analysis is required to assess this marrow regeneration or cases of relapse in individuals with B-ALL (2,14).

IMMUNOPHENOTYPIC ANALYSIS OF B LYMPHOID PRECURSOR CELLS

Normal and malignant B lymphoid precursors reveal morphological similarities when seen under an optical microscope, but their structure contains several types of molecules with different functions, which require a more specific characterization to identify the cell origin. Immunophenotyping permits this distinction, because it identifies the exact type of cell that composes a certain tissue, by means of the interaction between monoclonal antibodies and membrane, cytoplasmic or intranuclear antigens (5,14,26). FC permits immunophenotypic studies through multiple staining of the analyzed cells by fluorescent labeling, laser technology, and cell separation methods (27). FC immunophenotypic analysis of young cells of lymphoid lineage permits the identification of the cell lineage (B or T) and of the phenotype of lymphoid cells in different maturation stages, besides making it possible to distinguish between HGs and B lymphoblasts. This technique assists in the diagnosis of B-ALL and is widely used in post-treatment assessment for the detection and monitoring of minimal residual disease, permitting the follow-up of leukemic patients (1,5,15,28,29).

Questions as to the nature of B lymphoid precursors may arise due to intense cell proliferation in B-ALL patients during BM regeneration after leukemia treatment, in cases of relapse, or after marrow transplantation (2,14). For this reason, the distinction between HGs and blasts is essential to establish a correct prognosis. The determination of several cell types by immunophenotyping is performed using a group of molecules expressed by the cells, termed clusters of differentiation (CD), which are targeted by the staining antibodies. HGs may resemble malignant lymphoblasts by the expression of an immature B-cell phenotype, although these young cells exhibit significant differences as to expression patterns and specific antigen levels. These B lymphoid precursors express antigens in common in the BM, for example, CD10 (B lymphoid and B lymphoblastic precursor) and CD34 (hematopoietic precursor), but what differentiates them is the continuity of expression of these antigens (2,11,14).

HGs were initially defined by multiparametric FC as a population of cells with normal and heterogeneous progression in the BM (19). Later on, immunophenotypic characterization confirmed the continuous and complete maturation pattern of HGs within the same cell population, with the expression of antigens typical of B-lineage precursor cells in all maturation phases (11,30). CD22, the first antigen of B lymphoid lineage expressed in normal BM, became known due to the FC technique (29).
In some clinical situations this antigen presents expressions of low and high intensity, which indicate the existence of lymphoid precursors of lower and higher maturity, respectively (11). The CD22 antigen is expressed after CD34 and precedes the expression of CD19 (a B-cell marker) and CD10. Thus, the most immature B-cell progenitors would be negative for CD19, coexpressing CD34, terminal deoxynucleotidyl transferase (TdT, a marker of immature B and T lymphoid cells), and CD22. Just afterwards, the most immature lymphoid precursors intensify TdT expression, show high-intensity expression of CD34 and CD10, and low-intensity expression of CD19 and CD45 (leukocyte antigen). In the transition from the immature to the intermediate phase, these young cells lose the expression of TdT and CD34, while the intensity of CD10 expression decreases, CD45 increases to intermediate levels, and CD19 expression increases. During the mature phase, these cells acquire class M membrane immunoglobulins (mIgM) on the cell surface (a mature B-cell marker), and present high CD45 and negative or low-intensity CD10 expression (6,7,11,27,29,31).

As shown in the Table, HGs were classified into phases according to the presence and intensity of antigen expression in the cells. The phenotypic patterns of these lymphoid precursors are divided into three maturation stages: immature, intermediate, and mature (2,6,11,19). According to the gradual expression of certain antigens, the progressive maturation pattern of HGs within a same cell population with positive expression of the antigen CD19 may be observed in Figure 2 (adapted from Jorge et al. (11) and Lúcio et al. (29)).

The distribution of B-cell subtypes between children and adults presents significant differences. Children have marrows rich in B cells that are immature or in the intermediate phase of differentiation (approximately 70% of all B cells of normal BM), while in adults mature B lymphocytes predominate (70% of total B cells) (29,32).

Several researchers, who divide B lymphoblasts into four classes, proposed a B-ALL immunological classification according to the expression of specific antigens: pro-B ALL, common ALL (cALL), pre-B ALL and mature B-cell ALL (25). These classes were ordered based on the maturation degree of leukemic cells in comparison with the normal lymphocytic differentiation path (25,(33)(34)(35). Frequent comparisons of these immunophenotypes with the morphological subtypes of the FAB classification showed a direct correlation between subgroup L3 and mature B-cell ALL, whereas subgroup L1 more frequently correlates with pre-B ALL (25). In order to incorporate genetic alterations, the WHO presented a group of clinical, morphological, immunophenotypic and genetic parameters used by pathologists, hematologists and oncologists to characterize malignant neoplasias (22,36). Upon meeting these criteria for B-ALL, patients have been treated according to specific protocols.

In the immunophenotypic classification, B lymphoblasts exhibit an incomplete maturation spectrum, represented by a single immature population (low intracellular complexity), characterized by the expression of the CD34 antigen. These blasts also present positivity for B-cell-specific antigens, such as CD22, CD19, human leukocyte antigen DR (HLA-DR; B-lineage lymphoid cells) and CD10 (2,11,34,37,38). The TdT antigen may be absent, but, when present, displays variable expression intensity in most blastic cells.
A feature restricted to B lymphoblasts is the high-intensity expression of CD10 with negative CD45. The presence of high CD10 with low-intensity CD45 frequently appears in these cells, but is not an exclusive aspect (Figure 3A). The immature profile of blasts is marked by the high-intensity expression of CD34 with negative mIgM (Figure 3B); more rarely, negative or low expression of CD34 with absent mIgM occurs (11,35,39). The immunological analysis of these blastic cells indicates a larger proportion of immature cells, and few or no mature cells at all, with aberrant antigens not detected in normal B cells (2,37,38). Among the phenotypic aberrations frequently found in B lymphoblasts, one may notice the presence of CD34 with low-intensity CD19 and, in sequence, low expression of TdT with high-intensity CD10. Research reveals that the phenotypic alterations found in neoplastic blasts bear a relationship with the genetic anomalies present in B-ALL (27,40).

CONCLUSION

FC is widely used in ALL diagnosis and at the moment of prognosis definition after BM transplantation or treatment for leukemia. It is important for the distinction between regenerating B lymphoid precursor cells and residual neoplastic B lymphoblasts, which will guide therapeutic decisions. Normal and malignant B lymphoid precursors express some markers in common, but the distinction between them is drawn by the continuity of the expression of these antigens at FC. This technique permits the understanding of the continuous and complete maturation profile of HGs within the same cell population, making it possible to detect the gradual expression of certain antigens. Conversely, B lymphoblasts display a unique profile represented by an immature population in the BM. Due to the production of new blood cells, the intense proliferation of HGs in the BM permits marrow restoration or regeneration, which represents an immunological response. For this reason, this clinical situation must not be interpreted as a harmful reaction to the patient's therapy, since the presence of HGs represents a good prognosis and helps in the medical decision about the most adequate therapeutic scheme.
2017-09-10T12:24:46.846Z
2015-02-01T00:00:00.000
{ "year": 2015, "sha1": "5538aebf044e817d252c197ad2dea871f69f71df", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5935/1676-2444.20150002", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "23144c21618f320c0d5db20504b0739b6ffb52cd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
8394402
pes2o/s2orc
v3-fos-license
Revisiting Mouse Peritoneal Macrophages: Heterogeneity, Development, and Function

Tissue macrophages play a crucial role in the maintenance of tissue homeostasis and also contribute to inflammatory and reparatory responses during pathogenic infection and tissue injury. The high heterogeneity of these macrophages is consistent with their adaptation to distinct tissue environments and specialization to develop niche-specific functions. Although peritoneal macrophages are one of the best-studied macrophage populations, the co-existence of two subsets with distinct phenotypes, functions, and origins in the mouse peritoneal cavity (PerC) was demonstrated only recently. These macrophage subsets have been classified, according to their morphology, as large peritoneal macrophages (LPMs) and small peritoneal macrophages (SPMs). LPMs, the most abundant subset under steady state conditions, express high levels of F4/80 and low levels of class II molecules of the major histocompatibility complex (MHC). LPMs appear to originate from embryonic precursors, and their maintenance in PerC is regulated by the expression of specific transcription factors and by tissue-derived signals. Conversely, SPMs, a minor subset in unstimulated PerC, have a F4/80low MHC-IIhigh phenotype and are generated from bone-marrow-derived myeloid precursors. In response to infectious or inflammatory stimuli, the cellular composition of PerC is dramatically altered, where LPMs disappear and SPMs become the prevalent population together with their precursor, the inflammatory monocyte. SPMs appear to be the major source of inflammatory mediators in PerC during infection, whereas LPMs contribute to gut-associated lymphoid tissue-independent and retinoic acid-dependent IgA production by peritoneal B-1 cells. In recent years, considerable efforts have been made to broaden our understanding of LPM and SPM origin, transcriptional regulation, and functional profile. This review addresses these issues, focusing on the impact of tissue-derived signals and external stimulation in the complex dynamics of peritoneal macrophage populations.

Introduction

Macrophages are resident cells found in almost all tissues of the body, where they assume specific phenotypes and develop distinct functions. Tissue macrophages are considered immune sentinels because of their strategic localization and their ability to initiate and modulate immune responses during pathogenic infection or tissue injury and to contribute to the maintenance of tissue homeostasis (1)(2)(3). Macrophages were first identified in the late 19th century by Élie Metchnikoff (1845-1916) and designated as large phagocytes (4,5). Based on their phagocytic activity, macrophages were first classified as cells of the reticuloendothelial system, which also comprised endothelial cells, fibroblasts, spleen and lymphoid reticular cells, Kupffer cells, splenocytes, and monocytes (6). However, because endocytosis performed by endothelial cells is a process distinct from phagocytosis, by the late 1960s a new classification of mononuclear phagocytic cells as part of the "mononuclear phagocyte system" (MPS) was proposed (7). The MPS was defined as a group of phagocytic cells sharing morphological and functional similarities, including pro-monocytes, monocytes, macrophages, dendritic cells (DCs), and their bone marrow (BM) progenitors (7)(8)(9)(10)(11)(12).
Although the phagocytic cells play similar roles in orchestrating the immune response and maintaining tissue homeostasis (11), they represent cell populations that are extremely heterogeneous (13), and the general classification of mononuclear cells into a single system is currently under intense discussion (12,14). In this context, Guilliams et al. suggested a classification of MPS cells based primarily on their ontogeny and secondarily on their location, function, and phenotype, promoting a better classification under both steady state and inflammatory conditions (14).

Under steady state conditions, some tissues and serous cavities, including the lung, the spleen, and the peritoneal cavity (PerC), present distinct resident macrophage subpopulations. In the spleen, at least three macrophage subsets are found: red pulp, metallophilic, and marginal zone macrophages (30). In the PerC, two peritoneal macrophage subsets have been described: large peritoneal macrophages (LPMs) and small peritoneal macrophages (SPMs) (31).

Mouse peritoneal macrophages are among the best-studied macrophage populations in terms of cell biology, development, and inflammatory responses (24,(31)(32)(33)(34)(35)(36)(37)(38)(39)(40)(41)(42). Peritoneal macrophages play key roles in the control of infections and inflammatory pathologies (43,44), as well as in the maintenance of immune response robustness (40). Therefore, this review will discuss recent advances in our understanding of the characterization, origin, and functions of peritoneal macrophage subsets, and accurate experimental approaches to analyze them.

Identification of Peritoneal Macrophages

Cohn and collaborators introduced the study of peritoneal macrophages (45)(46)(47)(48). Indeed, a representative portion of the current knowledge regarding macrophage biology, such as their function, specialization, and development, stems from studies performed using peritoneal macrophages as a cellular source. However, the existence of two resident macrophage subsets present in the PerC was described only recently (31). These macrophage subsets were designated LPM and SPM according to their size. LPMs and SPMs were initially identified based on their differential expression of F4/80 and CD11b, where LPMs express high levels of F4/80 and CD11b while SPMs show a F4/80low CD11blow phenotype (Table 1). CD11b is an integrin that, together with CD18, forms the CR3 heterodimer (13,30,49), but it is not exclusively expressed on macrophages and is found on several other cell types, including polymorphonuclear cells (50,51), DCs (52), and, at low levels, B lymphocytes (53,54). F4/80, a 160 kD glycoprotein from the epidermal growth factor (EGF)-transmembrane 7 (TM7) family, is expressed by macrophages in several organs, such as the kidney (55), BM (56), epithelium (57), lung (58,59), and lymphoid organs (60), among others (61,62), and it is not found on fibroblasts, polymorphonuclear cells, or lymphocytes (63). However, peritoneal eosinophils show low levels of F4/80 (31), and some macrophage subpopulations exhibit low levels of F4/80 or do not express it, such as white pulp and marginal zone splenic macrophages (30). Therefore, F4/80 expression levels distinguish macrophage subpopulations, including those residing in the same tissue, such as the subsets found in the spleen and PerC (30,31,35).
In this sense, the great majority (approximately 90%) of F4/80+ CD11b+ cells present in the PerC of several mouse strains, including BALB/c, C57BL/6, 129/S6, FVB/N, SJL/J, and RAG−/−, express high levels of these molecules and correspond to the LPM subset, whereas the minor SPM subset expresses low levels of these markers (31). An accurate evaluation of SPMs and LPMs by flow cytometry and optical microscopy revealed that, in addition to the differential expression of CD11b and F4/80, SPMs and LPMs display unique morphologies and phenotypes. LPMs are the larger of the two subsets (31,35,36). Conversely, LPMs express higher levels of Toll-like receptor (TLR)-4 and co-stimulatory molecules in comparison to SPMs (31,35,36).

Given that PerC is a singular compartment where specialized immune cells reside and interact, including macrophages, B cells, DCs, eosinophils, mast cells, neutrophils, T cells, natural killer (NK), and invariant NKT cells (31,32,35,36,64), the identification of myeloid cells from PerC based on cell surface molecules is still a complex matter, particularly in terms of distinguishing macrophage subsets from DCs and inflammatory monocytes. The expression of 12/15-lipoxygenase (LOX), Tim4, and Ly6B has also been examined to discriminate heterogeneous macrophage subsets in PerC under steady state conditions and during peritonitis (24,37,38,42). High expression of 12/15-LOX and Tim4 was observed in peritoneal macrophages that also express high levels of F4/80 and CD11b, correlating with the phenotype and frequencies observed for LPMs (24,31,37,38,42). Conversely, 12/15-LOX-negative cells and SPMs share the same CD11b+ F4/80low MHCIIhigh phenotype; however, 12/15-LOX-negative cells express high levels of CD11c and co-stimulatory molecules, suggesting that 12/15-LOX-negative cells and SPMs are, at least in part, distinct populations (31,35,37). Despite similarities in cell morphology and MHC-II expression presented by SPMs and DCs, the possibility that SPMs may be part of the peritoneal DC pool is excluded by their smaller size, by the distinct pattern and lack of CD11b and F4/80 expression presented by DCs and, primarily, by the lower expression of CD11c (HL3 or N418 clones of monoclonal anti-CD11c) on SPMs compared with LPMs or typical peritoneal DCs (31,35). Given the cell complexity present in PerC and the importance of developing efficient strategies to correctly identify macrophage subsets, as well as to avoid contamination by other cell populations and misinterpretation of peritoneal macrophage studies, our group has proposed a simple way to identify peritoneal macrophage subsets using a four-color flow cytometry staining panel. After doublets and CD19high and CD11chigh cells are discarded from the selected cell populations, the analysis of F4/80+ cells based on MHCII expression defines three distinct subpopulations, F4/80high IAb-neg, F4/80low IAb-high, and F4/80low IAb-neg, which correspond, respectively, to LPMs, SPMs, and granulocytes (35).

Origin and Development of LPM and SPM

The theories that explain the origin of macrophages have been completely reformulated in the last few years. The differentiation process of monocytes, macrophages, and DCs that occurs in the BM starts with the earliest progenitor, the hematopoietic stem cell (HSC), and follows the common myeloid progenitor (CMP) and the granulocyte and macrophage progenitor (GMP) (16).
The clonotypic BM-resident precursor differentiated from the GMP, termed the macrophage-DC precursor (MDP), expresses high levels of the fractalkine receptor CX3CR1, c-kit, and CD115, and gives rise to circulating blood monocytes, some macrophage populations and a common DC precursor (CDP), but does not give rise to granulocytes (15,65,66). The recruitment of monocyte subsets under steady state or inflammatory and pathological conditions depends on particular chemokines and the expression of their corresponding receptors. The Ly6C+ monocyte subset migrates via a CCR2-dependent pathway, whereas the Ly6C− subset appears to migrate in response to CX3CR1 signaling (67). Under steady state conditions, extravasated monocytes do not contribute to the pool of resident macrophages in many tissues (3,15,16). In inflammatory settings, the Ly6C+ monocyte subset differentiates into inflammatory macrophages and monocyte-derived DCs, such as Tip-DCs (15,16).

Recent accumulating evidence supports the prenatal origin of tissue-resident macrophages and the idea that they are maintained locally by self-renewal throughout adult life, both in the steady state and after cell turnover, predominantly independently of hematopoiesis (17, 18, 23-27, 29, 68, 69). Microglia, Langerhans cells, Kupffer cells, red pulp splenic macrophages, and lung and peritoneal macrophages originate from embryonic precursors and are maintained by the self-renewal of proliferative cells (23)(24)(25)(26)(27)(69)(70)(71). Fetal-liver monocytes or primitive macrophages found in the yolk sac, an extraembryonic tissue, have been related to the origin of tissue-resident macrophages. In this context, recent data using yolk sac macrophage depletion and fate-mapping models demonstrated that yolk sac macrophages, which are generated from early erythro-myeloid progenitors (EMPs), are important for the development of macrophages in mid-gestation; however, in adulthood, only microglia are maintained by these embryonic precursors (69). In contrast, fetal monocytes that are derived from late EMPs give rise to the tissue-resident macrophages of the liver, lung, skin, kidney and spleen (69). The exception to this origin of resident macrophages is intestinal macrophages, which are continuously repopulated by circulating monocytes (72).

Understanding the dynamics of maintenance and recruitment of peritoneal macrophages is of particular interest since these cells are involved in physiological as well as pathological processes, such as peritonitis, tumors, and pancreatitis (40,43,44). Early studies demonstrated that peritoneal macrophages are maintained in PerC through self-renewal in the steady state or under inflammatory conditions (73)(74)(75)(76). The omentum, a fat tissue that connects the abdominal organs, is also involved in peritoneal macrophage development through the proliferative capacities of omental macrophages (75,76). The combination of these early observations with recently acquired technical advances to correctly identify the peritoneal macrophage subsets has permitted the ontogeny of the peritoneal macrophage subsets to be elucidated (24,31,36,39,40,42). In reporter mice revealing ongoing CX3CR1 expression, GFP+ cells were found in the DC and SPM pools, but not in the LPM population. Conversely, in CX3CR1CreRosa26R-FGFP mice, which show the active and past expression of CX3CR1, GFP+ cells were found within the DC, SPM, and LPM populations.
These data indicate that SPMs are short-lived cells, whereas LPMs have a more distant ontogenic relationship with a CX3CR1+ progenitor, corroborating the idea that they originate from the yolk sac (36). However, in chimeric C57BL/6 mice reconstituted with C57BL/6-CD45.1 BM, around 80% of SPMs and more than 70% of LPMs are CD45.1-expressing cells, demonstrating that both peritoneal macrophage subsets differentiate from BM precursors after the ablation of peritoneal macrophages induced by irradiation (36). Data from our group suggest that PerC-recruited Ly6C+ monocytes could give rise to SPMs during inflammatory conditions (31). Confirming that SPMs are generated via the differentiation of inflammatory monocytes recruited to PerC, reduced numbers of SPMs are found in the PerC of CCR2−/− mice (40).

The analysis of Ki67 and phosphorylated histone H3 (pHH3, present at a discrete stage of mitosis) staining and the quantification of cell cycle and basal DNA content revealed that the number of proliferating F4/80high CD11bhigh cells decreases in 12-week-old mice compared with the proliferation capacity of this population in newborn mice (15 days to 4 weeks) (24). After 12-16 weeks, the number of F4/80high CD11bhigh cells in PerC is maintained under a low rate of proliferation, which suggests that the number of F4/80high CD11bhigh peritoneal cells increases during mouse development until PerC acquires sufficient homeostatic cell numbers (24). Indeed, BrdU-labeled LPM frequencies after a single BrdU pulse were 7- and 15-fold lower than those found in HSC and GMP, respectively. Moreover, the presence of BrdU+ LPMs was detectable 14 days after the BrdU pulse, suggesting that they are a long-lived population, i.e., maintained at low levels of proliferation (36). Conversely, the detection of low numbers of proliferating SPMs at 6-10 days after one pulse of BrdU suggests that these cells have a low proliferation rate under steady state conditions and are short-lived cells (36).

Studies with mice deficient in CCAAT/enhancer-binding protein β (C/EBPβ) also support the notion that LPMs and SPMs represent distinct ontogenies, because in the absence of this transcription factor, PerC did not contain LPMs and exhibited increased numbers of SPMs (36). Interestingly, adoptively transferred SPMs differentiated into LPMs in Cebpb−/− mice. However, in control mice that have normal numbers of LPMs, only a small frequency of transferred SPMs acquired the F4/80hi MHCIIlow CD93+ phenotype of LPMs. Based on these results, the authors proposed that under physiological conditions, SPMs appear to contribute only in a small way to the generation of LPMs, but SPMs could be involved in the maintenance of LPMs in situations where this pool has been greatly reduced, such as under inflammatory conditions or following radiation ablation (36). These data are consistent with the findings of Yona et al. (26), which demonstrated the presence of monocyte-derived cells in the LPM compartment 8 weeks after the i.p. injection of thioglycollate. Together with LPMs, a subset of proliferating BM-derived inflammatory macrophages has also been associated with self-renewal mechanisms during the resolution of peritonitis induced by zymosan and thioglycollate (42). Conversely, LPMs do not seem to contribute to the SPM pool, even during inflammation. Our group demonstrated that adoptively transferred CFDA-SE-labeled LPMs retained their phenotype 1 h after LPS stimulation, and no CFDA-SE+ cells were found in the SPM compartment until 2 days after stimulation (31).
In the last year, a great advance in the understanding of the transcriptional control of peritoneal macrophages provided novel insights into this scenario (39,40). The zinc finger transcription factor GATA-binding protein 6 (GATA6) appears to regulate the majority of peritoneal macrophage-specific genes (PMSGs). Of note, GATA6 is selectively expressed by LPMs (40). Accordingly, the numbers of LPMs were greatly reduced in peritoneal lavages from GATA6-KOmye and Mac-GATA6 KO mice, which have a GATA6 deficiency in all myeloid cells or only in the macrophage lineages, respectively (39,40). Interestingly, retinoic acid (RA) is the extracellular factor that regulates GATA-6-specific gene expression in LPMs, because mice depleted of vitamin A (the RA precursor), termed VAD mice, exhibited a decrease in GATA6 expression and in LPM numbers (40). Moreover, the stimulation of peritoneal macrophages from VAD mice with all-trans RA restored the expression of GATA-6 and of many PMSGs to the levels found in peritoneal macrophages from control mice. In addition to the regulation of the gene expression profile of peritoneal macrophages, GATA-6 appears to be involved in the control of the proliferation, survival, and metabolism of these cells (39,77). GATA-6-deficient macrophages demonstrate an altered proliferation state during peritonitis (39). Moreover, Lyz2-Cre × GATA6(flox/flox) mice also exhibit reduced numbers of peritoneal macrophages, which could be explained by the perturbation of their metabolism, culminating in the high frequency of cell death found in this compartment (77). Despite great contributions to our understanding of the involvement of GATA-6 in peritoneal macrophage development, metabolism, self-maintenance, and survival, the existence of distinct pathways that could govern the transcriptional regulation of SPMs remains largely unknown.

In addition to transcriptional regulation, signaling factors derived from the microenvironment also play an essential role in promoting the development and phenotype of tissue-resident macrophages. For example, TGF-β1 signaling is required for the development of the microglia population and to regulate a microglia expression program through Smad transcription factors (78)(79)(80). Heme has been shown to induce Spi-c, a transcription factor important for red pulp macrophage development (81,82). Finally, in PerC, omentum-derived RA promotes the expression of GATA-6 in the LPM subset, determining its localization and functions (40), although the factors that maintain the SPM pool under steady state conditions still remain to be elucidated.

Dynamics and Function of Peritoneal Macrophage Subsets

Mouse PerC is a compartment where many cell types co-habit and interact, similar to the secondary lymphoid organs. In addition, PerC is a unique body compartment that contains B-1 cells (83). Under steady state conditions, the peritoneal cells comprise LPMs, SPMs, B-1 cells, conventional B-2 cells, T cells, NK cells, DCs, and granulocytes (mostly eosinophils) (31,35). B-1 cells constitute the majority of the PerC cell population, whereas the SPM and LPM frequencies represent 30-35% of total peritoneal cells (31,35) (Figure 2A). However, after inflammatory or infectious stimuli, there is a dramatic alteration in the cell numbers and frequencies of each PerC cell subpopulation.
With regard to the myeloid compartment, modifications in PerC cell composition include the disappearance of LPMs, increases in SPM frequency and numbers, and a massive recruitment of inflammatory monocytes (24,31,35,36,40) (Figure 2B). The "macrophage disappearance reaction" (MDR) in PerC has been extensively described during delayed-type hypersensitivity (DTH) and acute inflammatory processes (84). The MDR has been associated with cell death, emigration to draining lymph nodes, or adherence of macrophages to structural tissues. LPMs are the only peritoneal macrophage subset that disappears from PerC, which is attributed not to cell death but rather to their migration to the omentum (31,40). LPM disappearance in response to inflammatory stimuli is accompanied by an increase in SPM and inflammatory monocyte numbers (24,31,35,36,40) (Figure 2B), and has been correlated with the renewal and improvement of the immune conditions of the PerC (35). Adherent peritoneal cells from naive mice, which are composed primarily of LPMs, exhibit a high frequency of cells stained for β-galactosidase (β-gal), a senescence marker (85)(86)(87). These cells are unable to secrete NO in response to LPS challenge (35). In contrast, adherent peritoneal cells from Trypanosoma cruzi- or zymosan-stimulated mice, in which the main cell populations are SPMs and monocytes (F4/80low MHCIIint Ly-6C+), respectively, display a significant reduction in the frequency of β-gal-positive cells and secrete high levels of NO in response to LPS (35). The frequency of IL-12-producing cells after in vitro LPS plus IFN-γ stimulation was also higher within myelo-monocytic cells from mice exposed to zymosan and T. cruzi than the frequency of IL-12-producing cells found in unstimulated mice (35). In response to in vivo stimulation with Staphylococcus epidermidis cell-free (SES) supernatant, F4/80low CD11b+ cells (consisting of SPMs and DCs) produced enhanced levels of IL-1β, IL-1α, TNF-α, and IL-12 in the presence or absence of subsequent SES treatment (37). In contrast, the supernatants of adherent cells from naïve mice treated with SES were found to contain high levels of MCP-1, MCP-1α, MIP-1β, and G-CSF (37).

Figure 2 legend (recovered in part): LPMs, which are the major peritoneal macrophage population, appear to be responsible for the phagocytosis of apoptotic cells and tissue repair (36). (B) At the outset of inflammation, the myeloid compartment is modified in general by the disappearance of LPMs, an increase in SPM numbers, and monocyte influx (31,35,36,40). The changes in the myeloid cells from zymosan-, T. cruzi-, and LPS-stimulated or thioglycollate-elicited PerC result in a gain of immune state (35,36). SPMs from zymosan- and T. cruzi-stimulated mice contribute to the effector function of PerC through the secretion of high levels of NO and the presence of IL-12-producing cells (35). In response to LPS in vivo, SPMs produce several inflammatory cytokines, such as IL-12, MIP-1α, TNF-α, and RANTES, whereas LPMs produce enhanced amounts of G-CSF, GM-CSF, and KC (36). LPMs, which migrate to the omentum in a retinoic acid- and GATA-6-dependent way in response to in vivo LPS stimulation or vitamin A deprivation, return to PerC and appear to be correlated with GALT-independent and TGF-β2-dependent IgA production by B-1 cells in the intestine (40).
It is important to note that 4 days after thioglycollate injection, peritoneal cells, an extensively studied cell population (88)(89)(90)(91), also consist primarily of SPMs and inflammatory monocytes (31,40). The increase in SPM numbers and the influx of inflammatory monocytes that will give rise to SPMs greatly contribute to the improvement of the capacity of PerC to deal with inflammatory stimuli. Indeed, although neither SPMs nor LPMs produce significant levels of pro- or anti-inflammatory cytokines under steady state conditions (35)(36)(37), SPMs appear to develop a pro-inflammatory profile in response to in vitro stimuli. SPMs produced high levels of TNF-α, MIP-1α, and RANTES in response to LPS, whereas LPMs were the only population that produced abundant levels of G-CSF, GM-CSF, and KC in response to the same stimulus (36) (Figure 2B). NO secretion and pro-inflammatory cytokine production are the most important functions of macrophages activated by inflammatory stimuli and define the M1 profile (13,34,(92)(93)(94)(95)(96)(97). The functional profile of peritoneal macrophages was previously studied by our group and others (33,34). Peritoneal macrophages from Th1-prone mouse strains (C57BL/6 and B10.A) are easily activated to produce NO in response to rIFN-γ or LPS, characterizing the M1 profile. In contrast, macrophages from Th2-prone mouse strains (BALB/c and DBA/2) exhibit a weak NO response as a consequence of high levels of spontaneously secreted TGF-β1 (34). Moreover, the cells from C57BL/6 IL-12p40-deficient mice have a bias toward the M2 profile, indicating that IL-12 is required for the M1 polarization of peritoneal macrophages (33). Although LPMs from naïve mice can produce NO after in vitro LPS stimulation, SPMs produce higher levels of NO than LPMs following in vivo LPS stimulation. NO secretion by LPMs was also detected by flow cytometry in Escherichia coli-inoculated mice (31), whereas nitrite was not produced in vitro by LPS-stimulated adherent peritoneal cells from control mice, which are composed mainly of LPMs (35). In addition, adherent cells obtained 48 h after T. cruzi infection, which are mostly composed of SPMs, were the only source of NO without subsequent in vitro challenge with LPS (35). In summary, the SPM and LPM subsets cannot be accommodated in the M1/M2 framework when NO secretion is considered. However, considering phagocytic assays, SPMs appear to develop an efficient profile to control infections, as M1 macrophages do, whereas LPMs assume a role in the maintenance of PerC physiological conditions, as M2 or alternatively activated macrophages do. Despite the preserved phagocytic ability of LPMs, higher numbers of zymosan particles and E. coli were found inside SPMs at early time points after i.p. injection (31,35). Conversely, at 1 h after challenge, LPMs appear to present a higher phagocytic index for apoptotic thymocytes in comparison to SPMs (36) (Figure 2A). In addition, it was recently demonstrated that LPMs have a unique ability to induce gut-associated lymphoid tissue (GALT)-independent IgA production by peritoneal B-1 cells (40) (Figure 2B). RA and TGF-β2 are the most critical factors to induce IgA class switching, and the production of TGF-β2 is regulated by the Tgfb2 and Ltbp1 genes, which are expressed by LPMs in a GATA-6-dependent manner. This process is regulated by the abundant presence of RA in the omentum, which is responsible for the induction of GATA-6 expression in LPMs that migrate to this tissue.
The dynamics of LPM migration between the PerC and the omentum correlate with the disappearance of LPMs upon stimulation of the PerC and with the later return of their numbers to baseline after stimulation with LPS, zymosan, or thioglycollate (24,31,35,36,39,40). This observation suggests that LPMs can return to the PerC to resolve an infectious or inflammatory process. Therefore, the presence of two specialized macrophage subsets in the PerC is crucial to maintain the health of this compartment under different situations.
Concluding Remarks
Peritoneal macrophages represent one of the most studied macrophage populations. However, the existence of two phenotypically and functionally distinct subsets, LPMs and SPMs, residing in the PerC was recognized only recently (31). In the last year, great advances in our understanding of the transcriptional regulation of peritoneal macrophages have brought novel insights into the identification of LPMs and SPMs (39,40). GATA-6, an LPM-restricted transcription factor, regulates many PMSGs, including those related to the maintenance of LPMs in the PerC (40) and those that determine their function (40), metabolism, proliferation, and cell survival (39,77). Under steady-state conditions, LPMs appear to originate independently of hematopoietic precursors and retain the ability to proliferate in situ, maintaining their physiological numbers (26,36). Conversely, SPMs appear to originate from circulating monocytes (31,36,40), and their numbers increase remarkably under inflammatory conditions. Of note, SPMs, together with their precursor, the inflammatory monocyte population, are the major myeloid populations present in the elicited PerC, and they are an excellent resource for studying the biology of inflammatory macrophages. SPMs and LPMs exhibit specialized functions in the PerC, where SPMs present a pro-inflammatory functional profile and LPMs appear to have a role in the maintenance of PerC physiological conditions. Moreover, the particular interactions between macrophage subsets and other peritoneal cell populations appear to play crucial roles in the immune state of the PerC. Although the consequences of the crosstalk between SPMs and peritoneal T and B lymphocytes remain to be clarified, LPMs are required for GALT-independent and RA-dependent IgA production by peritoneal B-1 cells (40). Finally, the elucidation of the influence of soluble factors and the microbiota on the maintenance of LPM/SPM ratios in the PerC, and of the role of these subsets in the systemic immune response, are future challenges for this field.
2016-06-17T03:23:13.207Z
2015-05-19T00:00:00.000
{ "year": 2015, "sha1": "85d63c05e418fbc15d32bf72514287efe598bd46", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2015.00225/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "85d63c05e418fbc15d32bf72514287efe598bd46", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
226407714
pes2o/s2orc
v3-fos-license
HEALTHY BEHAVIOR OF PHYSICAL EDUCATION UNIVERSITY STUDENTS
Summary
The purpose of this study is to examine the healthy behaviors of undergraduate students of physical education (PE), with comparisons between genders and majors. A content analysis of Polish and foreign authors' publications was performed by searching the following databases: EBSCO, Proquest, Science Direct (Elsevier), Springer, Sage, and Wiley. Research indicates that undergraduates demonstrate, in general, low levels of healthy lifestyle behaviors, especially regarding diet, psychoactive substance use, coping with stress, physical activity, and preventive behavior. PE students present with a high level of health-risk behavior. On the other hand, some research showed that PE students scored better than their peers in other areas of study in selected dimensions of health-related behavior. The majority of studies indicate that female students scored significantly better than male students in health-related behavior. Health promotion programs should be implemented at campuses and universities for the maintenance and improvement of a healthy lifestyle among students of all areas of study.
Introduction
Health-related behaviors may be defined as any "personal attributes such as beliefs, expectations, motives, values, perceptions, and other cognitive elements; personality characteristics, including affective and emotional states and traits; and overt behavior patterns, actions and habits that relate to health maintenance, to health restoration and health improvement" [1]. Positive health behaviors may include such categories as preventive behaviors (e.g., avoiding psychoactive substances), avoidance of risk-taking behaviors (e.g., related to behavioral addiction), and behaviors that improve health by maintaining and enhancing wellbeing (e.g., visiting the doctor regularly) [2]. Satisfaction with life and high quality of life are related to a healthy lifestyle, which manifests in such practices as a healthy diet, an appropriate level of physical activity, preventive behaviors, coping with stress, positive social relationships and adjustments, and avoidance of psychoactive substances (e.g., nicotine, alcohol, drugs). On the other hand, poor mental and physical health and higher rates of mortality are associated with unhealthy lifestyles and sedentary behavior [3], which results in higher health care costs. An unhealthy lifestyle contributes to emerging major chronic diseases, such as coronary heart disease, stroke, diabetes, and cancer [4]. Importantly, healthy behaviors during childhood may influence adolescent health behaviors and be related to other essential health outcomes [5], while inadequate levels of physical activity (PA), sedentary behavior, and unhealthy nutrition are related to obesity [6,7], which may lead to decreased functional mobility and lower levels of quality of life in older age [8,9]. Early adulthood is the best time period for achieving the long-term advantages of a healthy lifestyle [10].
A key role in the development of a healthy lifestyle is played by health education. It seems particularly relevant to shape health behaviors in students of physical education (PE), as these students will become our future health educators and promoters. Unfortunately, research indicates that the health-related behavior (measured using the Health Behavior Inventory (HBI)) of Polish PE teachers does not differ from the average Polish population [11]. Moreover, Kubińska and Pańczuk [12] showed that many PE teachers demonstrate health-risk behaviors, such as drinking alcohol and substance use. Given that PE teachers serve as role models for primary and secondary students, the unhealthy lifestyles they exhibit are of concern. These health-risk behaviors may be a result of insufficient knowledge and/or reduced motivation to lead a healthier lifestyle. Accordingly, the systematic examination of attitudes toward health among future PE teachers is necessary.
Aim of the work
The present research focuses on the analysis of various dimensions of a healthy lifestyle among university students. Healthy behavioral factors examined include: consumption of a healthy diet, avoiding psychoactive substance use, coping with stress, engaging in physical activity (PA), and preventive behavior. A comparison of healthy behavior between students of different areas of study was a secondary aim of the study. In particular, the healthy and unhealthy behaviors of PE students were compared with those of non-PE students. Finally, the role of gender in health behaviors was also examined.
Material and methods
For the study, the following databases were searched: EBSCO, Proquest, Science Direct (Elsevier), Springer, Sage, and Wiley. In addition, the Google search engine was used to find scientific publications with an Open Access option, which was particularly useful for articles in the Polish language. Keywords included: "health-related behavior" and "health-risk behavior", together with "gender", "Physical Education students", "college students", and "undergraduates". The main inclusion criteria for selected articles were peer-reviewed publications from scientific books and journals related to the above-mentioned keywords. A total of 175 publications were found in the first stage of screening. Exclusion criteria were as follows: 1) the participants in the publication were not students of a college or university; 2) only one of the set of keywords was related to the publication. A final count of 68 publications met the strict criteria for inclusion in this review.
Healthy and unhealthy behavior among university and college students
The traditional stereotype of a hedonistic student lifestyle freed from family constraints is related to the high prevalence of unhealthy behaviors [13-15]. Studying at college or university is often characterized by high levels of stress and health-risk behaviors [16,17], which include physical inactivity [18,19], smoking [20-22], alcohol and substance use [23-25], and poor nutrition [26]. These behaviors may contribute to adverse health outcomes, decreased subjective wellbeing, and worsened mental and physical health. Health-risk behaviors have been found to be reinforced by social support networks, academic pressure and stress, busy schedules, and habitual behaviors [17]. It is important to note that university students who report more overall psychological distress show fewer positive health behaviors (HB) across all HB dimensions [2].
The unhealthy lifestyle of university and college populations seems to be a universal trend across the world [e.g., 27-29]. Keller et al. [27] found a high prevalence of risk behaviors among German undergraduates. Over 95% of the total sample (N=1262) ate less than five servings of fruits and vegetables, 60% did not exercise sufficiently, 31% were current smokers, and 62% reported binge drinking. Research performed among American university students indicates that unhealthy eating, smoking, and lack of exercise are the most commonly reported negative behaviors [17]. Whatnall et al. [26] found, in a large sample of Australian university students (N=3077), that 90% of them did not meet the vegetable recommendations, 50% exceeded lifetime risk guidelines for alcohol intake, 38% were insufficiently physically active, 40% were overweight or obese, 38% demonstrated a high or very high risk of psychological distress, and 22% were food insecure. Among university students from New Zealand, around 42% reported binge drinking, 15% did not meet PA guidelines, 28% were overweight or obese, 20% smoked, and 76% did not meet dietary guidelines [28]. The results of a study conducted at King Saud University (Saudi Arabia) also indicate that students are leading unhealthy lives [15]. The majority of students presented unhealthy eating habits and inadequate PA levels, and 20% were overweight (11% were obese). The majority of college students from Kuwait University reported moderately healthy lifestyles in such categories as diet, exercise, and sleep [14]. Around half of the students ate a healthy diet (50%) and got seven to nine hours of sleep (46%), but only one-third of the sample exercised frequently (34%). Over one-third of the group was obese (39%), and many had iron deficiency anemia (IDA; 49%). It is important to note that IDA is not a common health issue in this culture but is rather unique to college students, as a result of poor diet. As evidenced by these worldwide findings, the most commonly reported negative behaviors are insufficient levels of PA, sleep issues, poor nutrition, and too few meals per day [30,31]. University students are prone to sedentary behavior, which is associated with multiple adverse health outcomes, such as obesity, cancer, type 2 diabetes, cardiovascular disease, and total mortality. Sedentary behavior is also related to numerous problems in social behavior and academic achievement [19]. A recent review showed that the prevalence of sedentary behavior among university students ranges from 34% to 90% [18]. The most prominent factors associated with sedentary behavior were overweight and depressive symptoms. On the other hand, physically active students report more excellent health (both overall and mental) and higher happiness scores than their inactive peers [30]. Unfortunately, among a large representative sample of university students in Ireland [30], only 64% met the WHO-recommended level of 150 minutes of moderate to vigorous PA per week. Obesity continues to be an epidemic among college students [32]. Research indicates that overweight and obesity range between 30% and 40% in the university student population [e.g., 14,15]. Osborn et al. [33] examined relationships among perceived body weight, actual body weight, body satisfaction, and selected health behaviors in a sample of undergraduate students at a southern Louisiana university.
Body Mass Index (BMI) was significantly related to several health behaviors, including drinking diet soda, eating at the student union, and stress eating. Research indicates that college students have good knowledge of a healthy diet but, despite this, their nutritional choices are usually not healthy [13,34].
Association between particular dimensions of healthy and unhealthy behavior
Research indicates that various categories of health-related behavior are interrelated [e.g., 35,36]. Dinger and Vesely [37] examined the relationship between PA and other health-related behaviors (e.g., cigarette smoking, binge drinking, illegal drug use, fruit and vegetable consumption, risky behaviors, and body mass index) among a representative sample of U.S. college students. Cigarette smoking, inconsistent seat belt use, and inadequate consumption of fruits and vegetables were significantly associated with low levels of PA among college students after controlling for age, sex, and race. Dinger and Vesely [37] found the strongest association between PA and fruit and vegetable consumption. Research indicates that low levels of regular physical exercise were associated with significantly lower scores in physical functioning, bodily pain, and vitality. Regular smoking was related to significantly lower scores in physical functioning and general health [16,38]. Litwic-Kamińska and Izdebski [39] found a positive association between the level of PA and other health behaviors, such as proper nutrition, preventive behavior, and a positive mental attitude. Studies have shown that students who reported engaging in multiple health-risk behaviors were also more likely to report poorer mental health [20,22]. Canadian undergraduates with the highest likelihood of engaging in multiple health-risk behaviors (e.g., marijuana and other illicit drug use, risky sex, smoking, binge drinking, poor diet, physical inactivity, and insufficient sleep) reported poorer mental health, particularly in relation to stress [22]. In particular, those who reported a higher score of marijuana use were more likely to use a variety of substances and engage in hazardous drinking than non-users [24,40]. All scales of the HBI, such as healthy habits of nutrition (HHN), preventive behavior (PB), positive adjustments (PA), and healthy practices (HP), were positively correlated with each other with moderate or high strength [41,42]. Excessive alcohol drinking was also negatively correlated with health-related behavior (in particular, with healthy eating and preventive behavior) among a sample of Polish undergraduate students [42].
Differences in health behavior among university students according to area of study
Research indicates that health-related and health-risk behavior might differ between university students in different areas of study. For instance, law students experience higher levels of psychological distress than members of the general population, as well as compared to university students in other professional disciplines [43]. However, most research examining this question was performed among students in health-related majors [e.g., 44-46]. Consistently, studies have found that knowledge about healthy lifestyles, as well as greater awareness of self-responsibility for health, are not strong predictors of commitment to healthy behavior [e.g., 47,48]. For example, research indicates that a large portion of nursing students displayed unhealthy dietary habits and poor health status [31,49,50].
Al-Qahtani [49] also examined the healthy lifestyle behaviors of female university students enrolled in health-related studies, compared to women not in health-related studies. Both groups exhibited low levels of health-promoting behaviors, with the highest scores in spiritual growth and the lowest in physical health. However, the non-health-profession female students had significantly higher engagement in PA than the health-profession students, who showed better health responsibility, spiritual growth, and interpersonal relation practices. Evans et al. [50] explored the health and health-related behaviors of undergraduate nursing students in Scotland. The students' health behavior profiles were similar to those of the general population in Scotland. Overall, 23% of the undergraduate nursing students rated their physical health as excellent or very good, with 48% rating their mental health as such. Around 76% of students met the WHO recommendation for PA per week (a minimum of 150 min), which is substantially higher than previously reported PA levels in the general college population. Almost one-third of the participants were overweight (29%), and one-fifth were obese (18%). The majority of nursing students (86%) consumed alcohol, and 15% reported episodes of binge drinking. The prevalence of current smoking was 25%. A study examining the behaviors of pharmacy students, on the other hand, showed that most did not exercise regularly; the majority of the pharmacy students also reported drinking alcohol excessively [16]. Binge drinking was similarly highly prevalent in a sample of German medical students, and this behavior was, again, related to other health-risk behaviors, such as smoking, using cannabis, not exercising, and not eating fruits and vegetables. Medical students showed slightly more positive patterns of a healthy lifestyle than students of law and education [27]. Bíró et al. [44] found that over half of Hungarian students in public health studies were smokers, with more than one-quarter smoking daily. Almost one-fifth of these students suffered from psychological distress, with higher rates among female public health students than among age-matched women in the general population. Kozieł et al. [46] found that the majority of medical and health sciences students in Poland exhibited low levels of healthy practices and healthy habits in nutrition. Unhealthy dietary and health habits, such as nighttime snacking, coffee drinking, low milk consumption, and lack of exercise, were also found among Korean female nursing students [31].
Comparison of undergraduates of PE and other areas of study
Research indicates that physical education students, who will teach health education to children in the future, demonstrate rather unhealthy lifestyles and a high risk of alcohol and drug use [e.g., 51-53]. Szczerbiński et al. [34] examined dietary behaviors (e.g., intake, frequency, consumption of various food groups, and eating between meals) among undergraduates in PE compared to undergraduates in tourism. Students in both groups demonstrated a generally healthy diet. Among health-risk behaviors in PE students, the most common were inadequate participation in leisure-time PA, sleep deprivation, smoking, excessive and regular alcohol drinking, and illegal drug use [e.g., 54,55]. As many students in these majors are often student-athletes, it is important to note that Graupensperger et al.
[52] suggest that peer acceptance and cohesiveness are associated with attitudes toward risky behavior among college student-athletes. A number of other investigations have found differences in health-related behaviors between students in PE and other areas of study. Kubińska and Bergier [56] compared health-related behavior among university students in PE and public health (PH). The results of this study indicate that healthy eating and systematic physical activity are the most important behaviors and health practices practiced by PE students. Among PH students, in contrast, balanced nutrition (81%) and PA (81%) were the most preferred health behaviors. PE students engaged in sport participation to a great extent (76%); they also engaged in healthy eating (73%) and did not use or had limited use of stimulants (e.g., caffeine, alcohol, nicotine, psychoactive substances; 36%). Palacz [57] also examined differences between university students in four different areas of study: pedagogy, physiotherapy, PE, and tourism and recreation. Physiotherapy students had the highest levels of positive health behaviors, whereas students in tourism and recreation had the lowest. Another study [39] showed that those PE undergraduates who are engaged in collegiate sport demonstrate the highest levels of healthy behavior in comparison to PE non-athletes, pedagogy students, and information technology (IT) students. Further, Rogowska et al. [42] showed that students of PE drank more alcohol, but scored higher in preventive behaviors, than students in other majors. Yermakova's cross-cultural study showed that PE students from Poland and Ukraine might differ in health and health culture [58]. Polish PE students believe that essential reasons for engaging in PA are improving one's physical condition, strengthening self-esteem, and improving overall health. In comparison, the main motives for PA participation among Polish students from other areas of study are improving physical well-being and mental health. The Ukrainian PE students, on the other hand, report different motivations for engaging in PA; the majority of Ukrainian PE students believe that participation in various sports clubs is an important part of building a healthy culture and that organizing physical culture, sports, and educational work with students cannot be limited to PE classes. Ukrainian students in other areas of study report engaging in PA to improve health, enhance knowledge in specific subjects (e.g., humanities), and promote healthy lifestyles.
Gender differences in health behavior among university students
It is well established that women engage in more positive health behaviors than men [e.g., 59-61]. For instance, research conducted among a French national sample of medical students indicates that women smoke tobacco and cannabis less frequently than men, and they have lower rates of alcohol use disorders [60]. Bastardo [16] also found gender differences in health behaviors (i.e., smoking, alcohol consumption, and regular PA) and perceived health status in pharmacy students. Specifically, male students were more likely to consume alcohol and to exercise regularly than female students. There was no difference in smoking behavior between male and female students. Male students also demonstrated significantly higher scores in bodily pain, general health, vitality, and social functioning than female students [16].
Numerous studies have revealed that men are more likely than women to exercise regularly, but also to smoke, use substances, and drink alcohol excessively [e.g., 59-61]. Women usually demonstrate higher levels of overall healthy behavior (especially healthy nutrition), but also higher stress and depression rates [e.g., 62-64]. On the other hand, high levels of poor health behavior were found recently in young adult men from the U.S. [65]. Almost 40% of men exhibited unhealthy behaviors (e.g., poor diet, no exercise, substance use), compared to only 22% of women. Interestingly, college education was found to be a more protective factor for men. Denton et al. [59] examined gender-based inequalities in health. They argue that women's poorer health is related to their reduced access to the material and social factors that foster health, as well as to differential exposure to stressful life events and the everyday stresses associated with women's social roles (e.g., mother, wife, worker). On the other hand, men's health may be reduced by their higher likelihood of engaging in health-risk behaviors, such as smoking and excessive drinking. Further, some research indicates that women score lower on physical performance tests and higher on depression compared to men. For example, results of a study by Karadag and Yildirim [66] suggest that 40% of university students were depressed, and many reported not having enough information about maintaining a healthy lifestyle and coping with stress. Depressive symptoms were specifically associated with female gender, a high level of life stress, eating disorders, and lower levels of body image satisfaction and self-esteem among Korean university students [64]. However, these consistent gender patterns in health inequities may be related to women over-reporting and men under-reporting health problems, as suggested by Oksuzyan et al. [67]. Rogowska et al. [42] found that Polish female PE students drink significantly less alcohol than male PE students and demonstrate a higher level of overall health behavior (in the total HBI and several HBI subscales, except positive adjustment). Female PE students placed high importance on healthy eating (90%), engaging in sport activity (54%) [56], and not taking stimulants (49%). In contrast, Baumgart et al. [10], who examined health behaviors among young people studying physiotherapy, found that the responses of these students were similar to those of the general population, without statistically significant differences between men and women.
Conclusions
University students exhibit a generally unhealthy lifestyle. Unfortunately, unhealthy lifestyle behaviors appear to be common regardless of area of study, including among students of PE and other health-related studies (e.g., medicine, nursing, public health). Among the various dimensions of health-related behaviors, students specifically exhibit poor nutrition, physical inactivity, substance use (smoking, excessive alcohol drinking, and drug use), sedentary behavior, high stress, and insufficient sleep. The main consequences of an unhealthy lifestyle include higher rates of overweight or obesity, decreased social behavior, lower academic achievement, decreased subjective wellbeing, and poorer mental and physical health (with elevated depression, in particular). Whatnall et al. [68] found a positive association between academic achievement and diet quality in a sample of Australian university students.
Overall, inadequate nutrition can cause significant health problems and negatively affect academic success. Further, there is an association between the various dimensions of health behavior, with a higher level of health-risk behavior being related to a lower level of positive health-related behavior among students. There is little difference between students across areas of study, though PE students demonstrate higher risk-related behavior and seem to do better in two dimensions of health-related behavior (i.e., healthy diet and PA) in comparison to students of other majors. However, this pattern may change over time, so more research is needed in the future. Women report having a healthier diet than men; however, men report higher PA and psychological well-being compared to women. In contrast, men exhibit a higher risk of alcohol and substance abuse compared to women. Thus, targeted, gender-based health behavior interventions in this population should focus on increasing PA and stress-coping skills in women and on improving diet and reducing substance use in men. Finally, knowledge about the health behavior of PE and healthcare professionals may be insufficient; this knowledge base should be further examined in the future.
2020-07-09T09:03:31.174Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "b2ead30df87129f5029f9c9a5b2278ce27fc12ff", "oa_license": "CCBYNCSA", "oa_url": "https://www.termedia.pl/Journal/-99/pdf-41004-10?filename=HPC%204%202020_Art_2_Rogowska.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "780ae013b10f2461513f449a95a43cff96cf9f8a", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
258474178
pes2o/s2orc
v3-fos-license
A Low-Complexity Power Maximization Strategy for Coulomb Force Parametric Generators
Energy harvesting (EH) is the process of capturing and storing energy from external sources or the ambient environment. EH devices have found various emerging applications, particularly in the healthcare sector. Kinetic-based micro energy-harvesting is a promising technology that could prolong the lifetime of batteries in small wearable or implantable devices. In this paper, using a mathematical model of a Coulomb-force parametric generator, we analyze the dependency of the output power on the electrostatic force in this micro-harvester. We propose a low-complexity strategy to adaptively change the electrostatic force in order to maximize the harvested power. Simulation results using human acceleration measurements confirm the effectiveness of the proposed strategy.
I. INTRODUCTION
Wearable and implantable medical sensors (and actuators) have become a promising interdisciplinary research area in the Internet-of-Things technology for healthcare [1], [2], [3], [4], [5]. With wireless communication capability, these devices will enable an attractive set of applications for remote monitoring of physiological signals such as temperature, respiration, heart rate, glucose, and blood pressure [6], [7]. Increasing functionality and complexity, along with the desired miniaturization, have drawn the attention of researchers to the limited source of power in these devices [8]. Frequent recharging or battery replacement is simply not feasible in many applications and could negatively impact their market adoption. As such, any technology that can prolong the operational lifetime of these devices will undoubtedly contribute toward their commercial success. The process of capturing and storing energy from external sources or the ambient environment is referred to as energy harvesting (EH). There are a few sources from which we can harvest energy for wearable or implantable medical sensors. Examples of these sources are ambient light, body heat, and the general movement of the human body [3], [9], [10], [11]. Kinetic energy harvested from human body motion is the most convenient solution for wearable devices [12], [13], [14]. The authors in [13] designed a low-power kinetic energy harvesting and power management circuit, along with a hardware-software context-aware algorithm, that reduces quiescent losses and energy storage requirements significantly. As a result, full energy neutrality is allowed even in energy-dry periods. A coaxial wrist-worn energy harvester is proposed in [15] to efficiently capture the biomechanical energy of arm swinging to self-power IoT sensors. The authors used the Lagrangian approach and the mirror image method to derive an analytical model for predicting the system dynamics and power generation performance. They also fabricated a miniature prototype containing five pairs of neodymium iron boron magnets and ten series coils to validate the performance of the proposed energy harvester under real walking excitations. In [16], the authors developed a closed-form expression for a flexible joint-bending piezoelectric energy harvester placed on the knee joint with relatively low modeling error. The developed model was also validated with a novel piezoelectric energy harvester prototype. A simultaneous energy harvesting and gait recognition architecture has also been developed in [17].
In this architecture, a preprocessing algorithm is proposed to minimize the piezoelectric energy harvester signal distortions caused by energy storage. In addition, a classifier based on a long short-term memory (LSTM) deep neural network is proposed in [17] to accurately capture useful information from noisy piezoelectric energy harvester data. In [18], a self-powered health monitoring system has been proposed to collect the movement energy generated when users walk or run. The proposed system is installed on the shoes and uses a rectifier module to charge a rechargeable lithium battery. Miniaturized energy-harvesting devices, also known as micro-generators, typically consist of a mass-spring-damper (MSD), a transducer, and a power-processing circuit, as depicted in Fig. 1 [19], [20]. Movements of the human body are captured by the MSD module and converted into mechanical power. The transducer converts this mechanical power into electrical energy. The power-processing circuitry matches the electrical power generated by the transducer with the load [19], [20]. Kinetic-based micro-generators either utilize the direct application of force on the device or make use of the inertial ambient forces acting on a proof mass. A generic model of an MSD system is shown in Fig. 2 [21]. In this model, the displacement of the mass from its rest position relative to the frame is denoted by z(t). The absolute motion of the frame is y(t) and that of the proof mass is x(t) = y(t) + z(t). The proof mass is able to move between the upper and lower bounds, i.e., ±Z_l, and is attached to a spring-like structure whose stiffness is denoted by k. Energy is converted when work is done against the transducer's damping force, which opposes the relative motion [21], [22]. The MSD designs that employ a spring (or a spring-like feature) are mainly suitable for applications where the environment causes the system to constantly vibrate [23]. However, human body motion is typically not a vibrating source of motion. As a result, a micro-generator that can efficiently capture energy from human motion should have a non-resonating design. One such non-resonating micro-generator architecture is the Coulomb-force parametric generator (CFPG) [21], [22], [24], [25]. The MSD component in this architecture is nonlinear in nature. The proof mass does not vibrate up and down as if anchored on a spring-like structure. Instead, the transducer's damping force, a constant Coulombic electrostatic holding force, holds the proof mass at one of the end-stop limits ±Z_l. The proof mass is held against one end-stop until the external acceleration exceeds the holding force threshold [26]. No power is generated while the proof mass is stuck at either end; instead, power is generated when the proof mass makes a full flight from one end-stop to the other. Another advantage of the CFPG design is its transduction method. It utilizes electrostatic force rather than making use of electromagnetic or piezoelectric forces. Any of these forces can be used to generate electrical power by converting mechanical energy into an electrical form. However, on the micro-scale, the electrostatic force becomes more significant and suitable for electric power generation [22]. This means that the transduction method in CFPG allows for further miniaturization of the micro-harvester, a highly desirable feature for wearable or implantable sensors.
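Two simple relations, stated here as general reasoning about Coulomb-damped motion rather than as quotations from the paper, help to fix intuition for what follows. Denoting the proof mass by m, the frame acceleration by ÿ(t), and the holding force by F (the symbols used later in the paper), the proof mass remains held against an end-stop for as long as the inertial force stays below the holding force, i.e., while m·|ÿ(t)| ≤ F, and each complete end-to-end flight over the travel distance d ≈ 2·Z_l converts roughly F·d of mechanical energy. Together these relations make the holding force a double-edged design parameter: a larger F extracts more energy per flight but also raises the acceleration threshold F/m below which no flight, and hence no power, is produced at all.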
The authors in [27] highlighted the significant impact of the electrostatic force on the magnitude of the harvested power for various human activities. In [28], an adaptive maximization problem was formulated to exploit the dependency of the optimal holding force on the input acceleration waveform in order to achieve a gain in the micro-generator output power. To the best of our knowledge, the idea of adapting the electrostatic force is relatively new and was first introduced in [29]. Other existing works in the literature related to the CFPG do not consider this adaptation possibility for wearable or implantable sensors. The authors in [29] investigated several methodologies, such as least squares and machine learning, to obtain a near-optimal solution to the maximization problem and adapt the electrostatic force based on the acceleration waveform. Despite the achieved gain in the harvested power, the complexity of the methodologies used in solving the optimization problem is a major concern. High-complexity algorithms would consume more energy, reducing the overall net gain in the generated energy. As such, our objective in this paper is to focus on a low-complexity approach that can be used to adapt the electrostatic force based on the input acceleration. Following an in-depth analysis and observation of the generated power for several artificially generated acceleration waveforms, we propose a computationally simple strategy to efficiently maximize the output power in a CFPG. The complexity of the proposed scheme in this paper is much lower compared to the methodologies proposed in [29]. The results are also verified with actual acceleration measurements from human body motion. The remainder of this paper is organized as follows. Section II analyzes the harvested power for step and square acceleration waveforms and highlights the relationships between the input parameters (i.e., amplitude and frequency), the electrostatic holding force, and the instantaneous generated power. In Section III, we propose a low-complexity strategy to adaptively change the electrostatic holding force in order to maximize the average generated power. The performance of the proposed strategy is investigated in Section IV. Finally, conclusions and future plans are discussed in Section V.
II. ANALYSIS OF THE GENERATED POWER FOR SIMPLE INPUT EXCITATIONS
The dynamics of the MSD module in a CFPG micro energy-harvester are captured by a nonlinear differential equation [28] (Eq. (1)) in which m represents the proof mass, y(t) is the motion of the generator frame with respect to the inertial frame (its second derivative ÿ(t) is the input acceleration), ẍ(t) is the proof mass acceleration, F represents the Coulomb force (also referred to as the electrostatic holding force or, more generally, the MSD's damping force), x(t) is the absolute motion of the proof mass, and relay(·) is a hysteresis function that switches between +1 and −1. The mechanical power generated by the MSD component (Eq. (2)) is the product of the holding force F(t) and the velocity ẋ(t) of the proof mass. The Simulink implementation of this model is shown in Fig. 3. The model accurately represents scenarios where the input acceleration does not cause a full end-to-end flight of the proof mass. In those cases, the instantaneous output power will have equal positive and negative components (reactive power); therefore, zero average power will be generated.
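The Simulink diagram itself is not reproduced here, so the following is a minimal discrete-time sketch, in Python, of how a Coulomb-damped proof mass with end-stops of the kind described above can be simulated. It is an illustration under simplifying assumptions (forward-Euler integration, a crude stick-slip rule in place of the relay hysteresis block, and power computed as the work done against the holding force), not the authors' Simulink model; the function name simulate_cfpg and its defaults are ours.

```python
import numpy as np

def simulate_cfpg(accel, dt, m=0.965e-3, F=3.85e-3, z_lim=2.5e-3):
    """Illustrative stick-slip simulation of a Coulomb-force parametric generator.

    accel : frame accelerations y''(t) in m/s^2, sampled every dt seconds
    m     : proof mass in kg (0.965 g in the paper's example MSD)
    F     : constant electrostatic holding force in N
    z_lim : half of the 5 mm end-stop-to-end-stop travel used in the paper
    Returns (z, p): relative proof-mass position and transduced power per sample.
    """
    n = len(accel)
    z = np.zeros(n)              # relative displacement of the proof mass
    p = np.zeros(n)              # instantaneous transduced power (W)
    v = 0.0                      # relative velocity
    for k in range(1, n):
        f_inertial = -m * accel[k]                 # pseudo-force on the mass in the frame
        if v == 0.0 and abs(f_inertial) <= F:
            a_rel = 0.0                            # mass stays stuck against the holding force
        else:
            direction = np.sign(v) if v != 0.0 else np.sign(f_inertial)
            a_rel = (f_inertial - F * direction) / m
        v_new = v + a_rel * dt
        # crude stiction: if the velocity would change sign, stop instead of reversing
        if v != 0.0 and np.sign(v_new) != np.sign(v):
            v_new = 0.0
        z[k] = z[k - 1] + v_new * dt
        if abs(z[k]) >= z_lim:                     # inelastic stop at the end-stops
            z[k] = np.clip(z[k], -z_lim, z_lim)
            v_new = 0.0
        p[k] = F * abs(v_new)                      # work done against the holding force per second
        v = v_new
    return z, p

# Example: a 4 m/s^2 step input for 10 s, as in the paper's step-function case
dt = 1e-4
t = np.arange(0.0, 10.0, dt)
accel = 4.0 * np.ones_like(t)
z, p = simulate_cfpg(accel, dt)
print("average transduced power: %.2f uW" % (p.mean() * 1e6))
```

Run with the example MSD parameters and a 4 m/s² step, the sketch illustrates the qualitative behaviour described above: the proof mass makes a single slow flight and then rests at an end-stop, generating no further power.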
This zero-average behavior for incomplete flights complies with the physical requirements stated in [25]. (Footnote 1: Simulink is a product of MathWorks, Inc. Simulink has been used in this research to foster research and understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology.) On the other hand, if the amplitude of the input acceleration is sufficient to move the proof mass to the other endpoint, then positive energy will be generated. In this section, using the Simulink model of the MSD, we study the average generated power for simple acceleration waveforms such as step functions and square and sinusoidal waves. One important objective here is to underline the significant impact of the electrostatic force F on the generated power. The results in this paper have been obtained assuming an MSD with the following specifications: proof mass = 0.965 g, and distance between the two end-stops = 5 mm. We conjecture that the general conclusions expressed here are independent of the MSD specifications.
A. STEP FUNCTION
Consider a step function with an amplitude of 4 m/s² as the input acceleration waveform to a CFPG. The total duration of the waveform is assumed to be 10 seconds. Fig. 4 demonstrates the impact of the electrostatic force on the output mechanical power from the MSD module. As observed, the average generated power increases monotonically with increasing electrostatic force up to a certain threshold and then drops to zero. This threshold changes with the amplitude of the step function. Let the optimal electrostatic force for the step function with amplitude a be denoted by F_opt(a·u(t)). In general, this optimal value is a linear function of the amplitude of the step function, as shown in Fig. 5. Therefore, we will have F_opt(a·u(t)) = G(|a|)·u(t), where G(·) represents the linear function noted above and |·| represents the absolute value function. For the example in Fig. 4, we have F_opt(4u(t)) = 3.85·u(t) mN. Choosing this function for the holding force will yield maximum power for a given step function.
B. SQUARE WAVE
Consider a square wave as the acceleration input to a CFPG. The amplitude and duration of the square wave are also assumed to be 5 m/s² and 10 seconds, respectively. The main harmonic frequency of the square wave (hereafter simply referred to as the frequency) is considered to be 1 Hz. In this subsection, we study the impact of the frequency, amplitude, and holding force on the generated power for this input acceleration waveform. Fig. 6 displays the average output power versus the input amplitude when the frequency is kept constant and F = 4 mN. As observed, there exists a threshold for the input amplitude, below which there is no generated power. However, above that threshold, the amount of the average generated power is constant. A similar characteristic can also be observed for other combinations of constant frequency and holding force. This behavior is expected since, for weak input excitations (i.e., low amplitudes), the holding force will prevent the proof mass from moving. At some point, the input acceleration overcomes the holding force, and the proof mass is able to oscillate freely between the frame endpoints, hence generating power. The constant value of the generated power is due to the constant frequency of the square wave. Fig. 7 displays the average output power versus frequency when the input acceleration amplitude is kept constant and the holding force = 4 mN.
The average generated power monotonically increases with increasing frequency up to a certain value, after which it drops to zero. Higher frequencies translate to faster oscillation of the proof mass, resulting in higher generated power. However, beyond a certain value, the frequency would be too high for the proof mass to make complete end-to-end flights, and as a result, no power will be generated. Next, we investigate the effect of the electrostatic force on the average power when the square wave amplitude and frequency are kept constant. Fig. 8 displays the average output power versus F when the amplitude is 5 m/s² and the frequency is 1 Hz. As observed, the average power increases linearly with F but drops to zero beyond a certain value. The peak average power in this example is 47.44 µW. An electrostatic force stronger than this value will simply prevent the proof mass from moving, resulting in zero output power [25]. This behavior also occurs regardless of the specific values of the amplitude and frequency of the input acceleration. The results in Fig. 8 also point to the existence of an optimal value of the electrostatic force F for any given square wave excitation. This observation is highlighted in Fig. 9, where the optimal value of F for a given input acceleration frequency and amplitude is displayed. The maximal amount of generated average power corresponding to these optimal values is shown in Fig. 10. The existence of an optimal value for F and its dependency on the characteristics of the acceleration waveform confirm that the holding force can be a design parameter in a CFPG device [28]. By carefully adapting F to the variations of the input acceleration, the energy harvesting capability of the device can be greatly improved.
C. SINUSOIDAL WAVE
Now consider a sinusoidal acceleration input whose amplitude changes halfway through the 10-second window. Fig. 12 shows the proof mass position and generated power when the holding force is equal to 2.2 mN. Since the holding force is not too large, the proof mass can make a full flight in both time intervals [0, 5) and [5, 10], where the amplitude of the sinusoidal input is 6 m/s² and 3 m/s², respectively. Therefore, power is generated in both time intervals [0, 5) and [5, 10] when F = 2.2 mN. By increasing the holding force beyond 2.2 mN, the generated power in the time interval [0, 5) increases; however, the proof mass cannot make a full flight during the time interval [5, 10]. The generated power and proof mass position for the constant optimal holding force F = 4.3 mN are shown in Fig. 13. Although the proof mass cannot move during the time interval [5, 10], the generated power in the time interval [0, 5) is large enough to make F = 4.3 mN the optimal constant holding force for the entire interval [0, 10]. Figs. 12 and 13 again demonstrate how judicious adaptation of F to variations in the input acceleration can result in higher average harvested power. The examples provided in this section not only give valuable insight into the impact of the electrostatic force and of the acceleration waveform's amplitude and frequency on the harvested power, but they also lead us to an intuitive and simple scheme for adapting F in a CFPG device. Since any acceleration waveform can be approximated by a sequence of weighted and delayed step functions, in the next section we propose a low-complexity methodology to adaptively change the holding force based on the input acceleration waveform.
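Before moving on, a quick back-of-the-envelope check of the step- and square-wave numbers above may be useful. The calculation below is ours, not the authors': it assumes only the stated proof mass (0.965 g) and end-stop travel (5 mm), ignores the details of the relay hysteresis, and assumes the proof mass starts each half-period of the square wave at rest against an end-stop.

```python
m = 0.965e-3      # proof mass (kg)
d = 5e-3          # end-stop-to-end-stop travel (m)

# Step input of 4 m/s^2: the holding force can be at most about m*|a| before the
# proof mass stays stuck, which is close to the reported optimum of 3.85 mN.
a_step = 4.0
print("m*a for the 4 m/s^2 step: %.2f mN" % (m * a_step * 1e3))       # ~3.86 mN

# 5 m/s^2, 1 Hz square wave: a full flight has to fit into half a period,
# d <= 0.5*(a - F/m)*(1/(2f))^2, which bounds the usable holding force.
a_sq, f = 5.0, 1.0
F_max = m * (a_sq - 8.0 * d * f**2)     # largest F still allowing a full flight
P_avg = 2.0 * f * F_max * d             # two flights per period, each converting F*d
print("estimated optimal holding force: %.2f mN" % (F_max * 1e3))     # ~4.79 mN
print("estimated peak average power: %.1f uW" % (P_avg * 1e6))        # ~47.9 uW
```

The estimate of roughly 47.9 µW is close to the 47.44 µW peak reported for Fig. 8, which suggests that this simple flight-time argument captures the dominant trade-off behind the optimal holding force.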
III. A LOW COMPLEXITY ADAPTIVE STRATEGY
In this section, we develop a strategy to adaptively adjust the electrostatic force such that the generated power increases. We assume the capability to adjust the electrostatic force at the beginning of each subinterval in order to maximize the average output power of the MSD. Let F_k denote the constant value of the electrostatic force during the time interval [kδ, (k + 1)δ]. As indicated in Eq. (2), the output power during this time interval is directly proportional to F_k. Therefore, the power maximization problem (Eq. (4)) can be formulated as maximizing the average generated power over the whole waveform with respect to the design parameters δ and F_k, k = 0, 1, ..., n − 1, where ẋ(t) represents the velocity of the proof mass; the inequalities F_k ≥ 0 and δ > 0 are the constraints of this optimization problem. Aside from an exhaustive search, identifying a methodology that can determine the optimal values δ* and F*_k in Eq. (4) is quite challenging. In this paper, we first simplify the problem by assuming that δ is a given constant. Then, using our observations on the simple acceleration waveforms discussed in the previous section, we propose a low-complexity methodology that can serve as an approximate solution to Eq. (4). To this end, we first approximate the input acceleration ÿ(t) by a waveform expressed as a summation of weighted and delayed step functions: defining y_k as the input acceleration ÿ(t) at t = kδ, Eq. (5) gives this staircase approximation. With the input acceleration expressed as a sequence of weighted step functions, we can use the results obtained in Section II to estimate the optimal value of the electrostatic force on each interval (Eq. (6)), which can be further simplified (Eq. (7)). Note that the results for a single step function assumed that the proof mass is initially resting at an end-stop. Here, we propose to use Eq. (7) as an approximate solution to the maximization problem expressed by Eq. (4). In other words, if F^adp_k denotes an adaptive strategy to update the value of F_k at each time instant kδ, k = 0, 1, ..., n − 1, then we claim that Eq. (8) provides a low-complexity scheme to adjust the electrostatic force for the input acceleration waveform at each time instant kδ. Fig. 15 demonstrates the adaptive electrostatic force based on Eq. (8) corresponding to the input waveform shown in Fig. 14. The flowchart in Fig. 16 describes the details of the proposed strategy. In this flowchart, H* represents the solution of Eq. (4), and δ* is the optimal value of the adaptation interval.
Remark 1: When δ = T, solving the maximization problem in Eq. (4) results in the optimal constant value of the electrostatic force, hereafter denoted by F^c_opt. It is to be noted that finding F^c_opt is not realistic, as in most practical situations knowledge of the entire waveform is not available or predictable beforehand. In the next section, we compare the proposed adaptive strategy against F^c_opt, which we have considered for performance evaluation purposes even though obtaining its value is not practically feasible. In general, the gain of any adaptive scheme should be measured against a constant electrostatic force, which may not necessarily be optimal.
IV. SIMULATION RESULTS
In this section, the effectiveness of the proposed adaptive strategy is investigated using acceleration data measured from several human activities. The data are obtained using an X16-mini USB triaxial accelerometer. Note that, in practice, the micro-harvester will be integrated with a wearable or implantable sensor.
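Since Eqs. (5)-(8) are not reproduced above, the following Python sketch shows one way to read the proposed adaptation rule: at the start of every interval of length δ, the input acceleration is sampled and the holding force is set to the step-function optimum for that amplitude, using the (approximately linear) map G of Section II. The helper fit_G, its calibration points, and the linear form of G are our assumptions for illustration; in the paper, G comes from the step-response sweeps behind Fig. 5.

```python
import numpy as np

def fit_G(step_amplitudes, optimal_forces):
    """Fit the roughly linear map G from step amplitude |a| (m/s^2) to the
    optimal holding force (N), e.g. from a sweep such as Fig. 5 (hypothetical helper)."""
    slope, intercept = np.polyfit(step_amplitudes, optimal_forces, 1)
    return lambda a: max(slope * a + intercept, 0.0)

def adaptive_holding_force(accel, fs, delta, G):
    """Piecewise-constant holding force: one value per adaptation interval delta.

    accel : sampled input acceleration (m/s^2) at sampling rate fs (Hz)
    delta : adaptation interval in seconds (e.g., 0.02 s in Example 1)
    G     : map from step amplitude to (near-)optimal holding force (N)
    """
    step = max(int(round(delta * fs)), 1)
    F = np.empty_like(accel, dtype=float)
    for start in range(0, len(accel), step):
        # sample the acceleration at the start of the interval and hold the
        # corresponding step-optimal force constant over the whole interval
        F[start:start + step] = G(abs(accel[start]))
    return F

# Example with made-up calibration points consistent with F_opt(4u(t)) ~= 3.85 mN
G = fit_G(np.array([2.0, 4.0, 6.0]), np.array([1.90e-3, 3.85e-3, 5.80e-3]))
fs, delta = 100.0, 0.02
accel = np.random.default_rng(0).normal(0.0, 3.0, size=int(10 * fs))  # stand-in waveform
F_adaptive = adaptive_holding_force(accel, fs, delta, G)
```

The per-interval cost of this rule is one absolute value and one affine evaluation, which is what makes the scheme attractive compared with the search-based methods of [29].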
Usually, the location of the sensor on the human body is not a design parameter and is mainly determined by the nature of the sensing application. With small dimensions of 51 × 25 × 13 mm, this accelerometer can easily be placed on different parts of the body to perform various measurements. The measurement samples are timestamped and stored in a CSV file in onboard memory for later retrieval. Although the frequency of typical human motion is usually less than 5 Hz, some actions, such as heel strikes and muscle vibration during sudden movements, could produce higher-frequency acceleration on a device attached to the surface of the body. The accelerometer has adjustable sampling rates from 12 Hz to 800 Hz; a sampling rate of 100 Hz has been used in our measurements. Extensive experiments using this accelerometer have been performed to generate a dataset of various acceleration waveforms corresponding to several human activities at various intensity levels and different placements of the accelerometer. Note that if the final sensor position is known, the optimum orientation of the micro-harvester inside the sensor should be along the direction of the most body movement, i.e., the highest acceleration.
Example 1: Fig. 17 shows a sample 100-second acceleration waveform during light jogging with the accelerometer located on an individual's wrist. Given the description under Remark 1, and assuming δ = T = 100 s, we obtain the optimal value of the constant electrostatic force to be F^c_opt = 3.5 mN. As mentioned earlier, this value is obtained through an exhaustive search and by assuming that the entire acceleration waveform is known to the micro-harvester. Using Eq. (8), we can also determine the sequence of adaptive electrostatic forces F^adp_k, k = 0, 1, 2, ..., 5000, when δ = 0.02 s. Figs. 18 and 19 display the instantaneous power generated under the adaptive strategy and the optimal constant holding force, respectively. There are two observations when comparing these figures. First, the number and intensity of negative spikes are far lower using the adaptive strategy. As described earlier, these spikes are due to incomplete flights by the proof mass, resulting in no generated power. Second (although this may not be quite visible), there are also far fewer instances of zero instantaneous power with the adaptive strategy. These instances correspond to conditions when the proof mass cannot move due to excessive holding force. The combination of these two observations translates to higher average power under the adaptive strategy compared to the optimal constant holding force, even though the optimal constant force cannot be realistically obtained in practical scenarios. (Footnote 2: The experiments were conducted according to the research ethics regulations under approval number 30013664 at Concordia University and ITL-2021-0273 at NIST. Footnote 3: The X16-mini accelerometer is a product of Gulf Coast Data Concepts, LLC. Commercial products mentioned in this paper are merely intended to foster research and understanding; such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology.) The numerical accuracy of the generated power, harvested energy, and electrostatic force is 0.01 µW, 0.1 µJ, and 0.1 mN, respectively. The performance of the adaptive strategy should also be compared with that of a non-optimal constant holding force to determine the more realistic gain of the strategy.
For this purpose, we also consider two other values of constant electrostatic force to evaluate the energy harvesting capability of the CFPG with our proposed adaptive scheme. Fig. 20 shows the output energy of the MSD unit with the constant holding forces F^c = 2 mN and F^c = 4.5 mN, the optimal constant force F^c_opt = 3.5 mN, and the proposed adaptive force. As observed, the harvested energy under the adaptive scheme is considerably higher compared to all cases with a constant holding force. Even compared to the practically unobtainable optimal constant force (F^c_opt), a gain of 42% can be achieved by using our proposed strategy for the light-jogging example waveform.
Example 2: Figs. 21 and 22 show the acceleration waveforms for random body movements when the accelerometer is placed on the chest and wrist, respectively. For the chest acceleration data, the proposed adaptive strategy yields increases in the harvested energy of up to 362% compared with the energies harvested under the optimal constant force and the constant forces F^c = 0.5 mN and F^c = 1.5 mN. Similarly, with the acceleration data obtained from the wrist motion, the harvested energy under the adaptive strategy is 790.9 µJ, which is 24%, 212%, and 65% more than the 635.5 µJ, 253.3 µJ, and 480.5 µJ harvested under the optimal constant electrostatic force and the constant forces F^c = 0.5 mN and F^c = 1.5 mN, respectively. These results indicate a noticeable gain in the energy harvested from the kinetic motion of the human body under our proposed adaptive strategy. Figs. 25 and 26 display the instantaneous power generated under the adaptive strategy and the optimal constant holding force for the chest acceleration data. Similar to the results for the light jogging motion, there are fewer instances of zero instantaneous power with the adaptive strategy. In addition, the generated instantaneous power with the adaptive strategy is visibly higher than when the constant holding force is used. As a result, a higher average power is obtained under the adaptive strategy.
Remark 2: Note that the X16-mini triaxial accelerometer measures the acceleration in all three x, y, and z directions. These directions are relative to the accelerometer and do not follow a universal body coordinate system. The average generated power in the different directions depends on the placement of the accelerometer and the intensity of the human activity. Table 1 shows the average generated power corresponding to several human activities at various intensity levels and different placements of the accelerometer.
A. IMPACT OF THE ADAPTATION INTERVAL
As mentioned in Section III, the adaptation interval δ is a design parameter and should be chosen properly. Using the acceleration waveform shown in Fig. 21, Fig. 27 displays the average generated power for different values of δ under the adaptive holding force strategy. Fig. 28 demonstrates the average generated power for different values of the constant electrostatic force. Comparing these results indicates that for adaptation intervals smaller than δ ≈ 0.2 s, the average generated power under the adaptive strategy is higher than when the optimal constant electrostatic force is used. To further investigate the impact of the adaptation interval δ on the average generated power and gain some insight into proper values for δ, extensive simulations on acceleration waveforms obtained from different human activities have been performed.
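As a quick arithmetic check of the wrist-motion figures quoted in Example 2 (our own calculation from the stated energies, not additional data):

```python
adaptive_uj = 790.9  # harvested energy under the adaptive strategy (uJ)
baselines_uj = {"optimal constant force": 635.5, "F_c = 0.5 mN": 253.3, "F_c = 1.5 mN": 480.5}
for name, e in baselines_uj.items():
    gain = 100.0 * (adaptive_uj - e) / e
    print("%s: +%.0f%%" % (name, gain))
# prints roughly +24%, +212%, and +65%, matching the percentages reported above
```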
Acceleration data were collected for sit-ups and jogging from a volunteer who wore the accelerometer on his arm, leg, chest, and wrist for several minutes. To account for changes in the amplitude and frequency of the acceleration waveform, data for each activity were collected at different intensity levels, i.e., slow, moderate, and intense. We applied the adaptive strategy to the collected data for different values of δ ranging from 0.01 s to 1 s (with a step size of 0.01 s) and compared its performance with the optimal constant holding force. Assuming an adaptation interval of size δ, let the piecewise-constant electrostatic force under the adaptive strategy be denoted by F_δ^adp. Consider the average generated power under this strategy to be P(F_δ^adp). The maximum average generated power over δ can then be denoted by P(F_δ*^adp). Also, let the average generated power under the optimal constant holding force be denoted by P(F_opt^c). In addition, consider [δ]* to be the interval of δ in which P(F_δ^adp) ≥ P(F_opt^c). The performance improvement of the adaptive strategy compared to the optimal constant holding force strategy is defined as the relative increase in average generated power, PI^(adp,δ*)_(F_opt^c) = [P(F_δ*^adp) − P(F_opt^c)] / P(F_opt^c) × 100%. Table 2 summarizes the results for the acceleration measurement scenarios mentioned earlier and different adaptation intervals. The performance improvement for [δ]* is highly dependent on the type and intensity of the activity and the location of the microgenerator on the body. On average, a performance improvement of 71.7% is observed compared to the case when the optimal holding force is used. However, as explained in Remark 1, finding F_opt^c is not practical, since knowledge of the entire acceleration waveform is required in advance. To gain a better sense of realistic values of the achievable performance improvement, we need to compare the average harvested power obtained using the adaptive strategy with the resulting average power when a constant (non-optimal) holding force is used. For example, let two constant forces F_1^c and F_2^c be chosen 50% lower and 50% higher than F_opt^c, respectively. In addition, let the performance improvement of the adaptive strategy compared to these constant holding forces be denoted by PI^(adp,δ*)_(F_1^c) and PI^(adp,δ*)_(F_2^c). As observed in Table 2, a much higher performance improvement is achieved for these realistic scenarios. The average performance gain using our proposed adaptive strategy is 160.2% and 359.6% higher than when F_1^c and F_2^c are used as the constant holding forces, respectively. Similar values of performance improvement can be observed as long as the adaptation interval is chosen to be relatively small. Although we have provided information on the impact of the adaptation frequency (or, equivalently, the interval), it is conceivable that this frequency itself could also be adapted based on the intensity of human activity. This could avoid unnecessary adaptation operations, lowering the energy consumption of the adaptive module and leading to an even higher gain in the harvested power. Detailed studies on the relation between the average generated power and the adaptation interval are underway and will be provided in future work. Remark 3: Building a prototype of the micro-harvester device requires additional expertise and overcoming specific practical challenges. We hope that prototype development and physical evaluation of its performance will be possible as we continue this research.
In the meantime, we are optimistic about the accuracy of the simulations, since the fundamental physics of the CFPG operation has been carefully considered in the mathematical model.
V. CONCLUSION AND FUTURE WORK
A limited power source is one of the major challenges in developing miniaturized medical wearable or implantable sensors with greater functionality. This power is typically provided by small batteries. Integration of micro energy-harvesters with these sensors could be a promising approach to prolonging their battery lifetime. Considering the significant impact of the electrostatic force on the harvested power in a CFPG, we have proposed a simple methodology to adapt the holding force based on the input acceleration waveform. Simulation results for various human activities confirm the noticeable increase in harvested power that can be achieved using this strategy. Other, more sophisticated adaptive schemes that may lead to higher output power have also been proposed for this purpose [29]. However, the complexity of such adaptive schemes matters greatly, because this additional module in the CFPG architecture would itself require power to operate. This required power reduces the overall achievable gain in harvested power compared to the case with a constant electrostatic force. Although the computational complexity of the adaptive holding force strategy developed in this work is relatively low, further research is needed to estimate its power requirement for a given adaptation interval (δ). In this paper, a fixed adaptation interval has been assumed to simplify the general optimization problem stated in Eq. (4). It is conceivable that joint optimization of the holding force and the adaptation interval could result in higher gains. The authors plan to investigate these issues in the future.
2023-05-04T15:07:20.058Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "387e958d0ecfaeab75ded406332cca0384fb246d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1109/access.2023.3260106", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "aca1a7698f9c6ecd06c34caef013497d0a0c8902", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
218830261
pes2o/s2orc
v3-fos-license
Dispositional Mindfulness and Past-Negative Time Perspective: The Differential Mediation Effects of Resilience and Inner Peace in Meditators and Non-Meditators
Purpose: Past-negative time perspective (PNTP) can affect our everyday lives and is associated with negative emotions, unhealthy behaviors, rumination, anxiety, depression, and post-traumatic stress disorder (PTSD). Dispositional mindfulness may be able to reduce the negative effects of PNTP; however, few studies have investigated their relationship. Thus, the purpose of this study was to explore the effect dispositional mindfulness has on PNTP, as well as the mediating role of resilience and inner peace in this regard. Methods: This study investigated the cross-sectional relationship between self-reported mindfulness, resilience, inner peace, and PNTP. In order to further explore the relationship between mindfulness and PNTP, this study specifically selected and analyzed samples of 185 meditators and 181 non-meditators. Results: Correlation analysis revealed that mindfulness is significantly positively correlated with resilience and inner peace. Conversely, PNTP is significantly negatively correlated with mindfulness, resilience, and inner peace. Structural equation model analysis revealed that resilience and inner peace partially mediated the relationship between mindfulness and PNTP. Furthermore, a multi-group analysis showed that the mediating effects differ between meditators and non-meditators. For meditators, the effect of mindfulness on PNTP was fully mediated by resilience and inner peace. For non-meditators, the effect of mindfulness on PNTP was only partially mediated by resilience and inner peace. Conclusion: Based on the significant differences between the mediational models of meditators and non-meditators, we believe that dispositional mindfulness can negatively predict PNTP, and that practicing meditation consistently improves dispositional mindfulness, resilience, and inner peace and effectively reduces PNTP. Our findings indicate that a combination of mindfulness and PNTP could be used to design new psychological interventions to reduce the symptoms of mental health concerns such as negative bias, rumination, depression, anxiety, and PTSD.
Introduction
Past-negative time perspective (PNTP) describes a generally pessimistic, negative, or aversive view of and attitude toward the past. 1 Due to the reconstructive nature of memory for past experiences, these negative views and attitudes may stem from actual experiences of unpleasant or traumatic past events, negative reconstruction of benign past events, or a mix of both. Several studies have shown that PNTP is positively associated with neuroticism, 2 and anxiety, 1 but is negatively associated with emotional stability, 1 self-esteem, 3 and subjective wellbeing. 3,4 Although PNTP can affect routine cognitive and behavioral decisions, in general, the negative impact PNTP can have on one's life is controllable. However, some people with high PNTP are prone to engage in unhealthy behaviors that have a negative impact on their lives, such as excessive drinking or Internet addiction. 5,6 Furthermore, some people who experienced negative or traumatic events may even "get stuck in the past," increasing the likelihood that they will develop symptoms of post-traumatic stress disorder (PTSD), 7,8 and depression. 9,10 Several recent studies have identified a negative association between dispositional mindfulness and PNTP.
3,4,11 People with high dispositional mindfulness generally display less PNTP. 12 Generally speaking, mindfulness is considered to be nonjudgmental acceptance of and nonreactive awareness of internal and external stimuli, including emotions, cognitions, and sensory input, as experienced in the present moment. 13,14 Dispositional mindfulness is considered to be a psychological trait, and describes one's tendency to be mindful in everyday life. [14][15][16] In recent decades, mindfulness meditation has been widely applied and researched; these studies have shown that, for clinical patients and healthy people alike, mindfulness training can significantly improve both physical and mental health, as well as promote overall psychological wellbeing. We posit that dispositional mindfulness may not only be negatively correlated with PNTP, but may also negatively predict PNTP. While PNTP reflects an orientation of attention toward the past with a pessimistic and negative attitude, 1 mindfulness reflects an orientation of attention toward the present with a neutral and compassionate attitude. 12,17 Mindfulness involves maintaining a conscious awareness of the present moment, which can adjust one's attention to time, improve one's attitude toward time, and make one's perspective of time more flexible. Thus, we propose hypothesis one: dispositional mindfulness can affect PNTP. We posit that mindfulness may not only negatively affect PNTP directly, but may also negatively affect PNTP indirectly through resilience. Resilience is a psychological trait that helps individuals who have experienced adversity and trauma adjust and develop during trying circumstances, 15 thus encouraging successful coping through "self-righting". 18,19 It is a protective factor and developmental asset that can help people improve their subjective wellbeing. 18 Studies have shown that dispositional mindfulness can be a significant predictor of resilience. 15,20 Some previous studies have found that both dispositional mindfulness and resilience can effectively reduce the symptoms of mental health concerns such as negative bias, 21,22 rumination, 14,23 depression, 13,24,25 anxiety, 13,26 and PTSD. 19,25 The IAA model of mindfulness suggests that mindfulness can enhance self-regulation and increase cognitive, emotional, and behavioral flexibility, 25,27 which may increase resilience and enhance adaptive coping with stressful events. 20 Mindfulness and resilience can reduce these negative symptoms, thereby also reducing PNTP. Thus, we propose hypothesis two: resilience plays a mediating role in the relationship between dispositional mindfulness and PNTP. In addition to resilience, we also believe that inner peace can play a mediating role in the relationship between dispositional mindfulness and PNTP. First of all, Xu et al 28 consider inner peace, a relatively mild positive emotion, to represent calmness and harmony in the mind. Inner peace was found to significantly decrease symptoms of depression and anxiety; therefore, psychology researchers usually regard inner peace as an important indicator of mental health. 29 Lee et al 29 further found that compared to Americans, Chinese individuals scored higher on scales used to measure inner peace. This indicates that in Chinese culture, individuals are more focused on inner peace as a core principle in the pursuit of emotional wellbeing.
Xu et al 28 also identified "peace of mind," as understood in Chinese culture, to be a main component of the Eastern cultural concept of subjective wellbeing. Secondly, in ancient Buddhism, achieving inner peace is the core goal of meditation. 30 Empirical studies have shown that inner peace remains an important goal in contemporary meditation as well. 31,32 Recent studies have demonstrated that dispositional mindfulness is also a significant predictor of experiencing inner peace. 28 Meditation's fourfold model of well-being suggests that mindfulness can improve an individual's ability to regulate emotions. 33 Mindfulness encourages facing one's own experiences with acceptance and a non-judgmental attitude in the moment. When a person non-judgmentally accepts him/herself instead of blaming or criticizing his/her feelings, emotions, and thoughts, it becomes easier to maintain and enhance emotional balance and inner peace. 28,31 According to the broaden-and-build theory of positive emotions, 34 positive emotions can broaden the scope of cognition, enhance cognitive flexibility, and promote effective problem solving. It is speculated that inner peace could help people reduce negative cognition and increase positive emotions, which could consequently encourage individuals to develop a more flexible view of life and time, thereby reducing PNTP. Thus, we propose hypothesis three: inner peace plays a mediating role in the relationship between dispositional mindfulness and PNTP. In addition, recent research has found that dispositional mindfulness not only directly affects inner peace but also indirectly affects inner peace through self-acceptance. 28 Recent studies have provided evidence that individuals with high resilience also experience more equanimity and peacefulness when facing stressful events. 20 We believe that mindfulness can also cultivate and improve inner peace through resilience. Mindfulness can improve resilience, and resilience may improve inner peace by improving emotional regulation and cognitive flexibility. 20,25 Thus, we propose hypothesis four: resilience and inner peace play a serial mediating role in the relationship between mindfulness and PNTP. Finally, Pepping et al 35 found that experienced meditators have significantly higher levels of dispositional mindfulness than inexperienced meditators. Wittmann et al 36 found that the PNTP scores in a long-term meditator group were significantly lower than in the control group (non-meditators). In addition, Liu et al 31 found that eight weeks of mindfulness training led to a significant increase in scores for mindfulness and inner peace in the meditation group, compared with the control group (non-mindfulness intervention). Even after a brief mindfulness training, the mindfulness and resilience of participants were significantly improved. 37 Since these variables may differ between meditators and non-meditators in the present study, we propose hypothesis five: the serial mediation by resilience and inner peace of the relationship between dispositional mindfulness and PNTP may differ between meditators and non-meditators.
Materials and Methods
Participants
In total, questionnaires were distributed to 416 participants in the meditator group and the working population through an Internet platform; however, due to incomplete information about meditation practice, data from 50 participants were excluded from the analysis.
Thus, a total of 366 individuals (152 female; 214 male) completed the questionnaire and were included in the final study sample. The final analysis sample had an overall age range of 18 to 55 years with a mean age of 33.71 years (SD = 7.81). The education level of the sample was generally high, with 81.15% of the participants having completed university studies. Of the participants, 185 (75 female; 110 male) regularly engaged in meditation and had an overall age range of 18 to 55 years, with a mean age of 34.41 years (SD = 8.07). The other participants were 181 individuals (77 female; 104 male) who had never regularly meditated (although they may have tried it once or on a few occasions) and had an overall age range of 19 to 54 years, with a mean age of 32.99 years (SD = 7.49).
Procedure
The present study was approved by the ethics committee of the Faculty of Psychology at Southwest University, China. All participants gave online informed consent before completing the questionnaire. Respondents were told that the questionnaire was entirely anonymous, and that they would receive feedback regarding their results.
Measures
Meditation Experience Information
Meditation experience was assessed with a brief questionnaire designed by Baer et al. 38 Among the regular meditators, the mean lifetime duration of meditation practice was 616.51 hours (SD = 845.29), and ranged among participants from 20 hours to 3000 hours. More than half (55.68%) reported meditating more than three times per week. Meditation practice was defined broadly, and included Vipassana meditation, loving-kindness meditation, walking meditation, and yoga.
Measurement of Dispositional Mindfulness
Dispositional mindfulness was measured using the Chinese version of the Mindful Attention Awareness Scale (MAAS). 39 The MAAS consists of one dimension and 15 items rated on a 6-point scale ranging from 1 ("almost always") to 6 ("almost never"). Previous research found that the MAAS has good reliability and validity. 39 In the present study, the reliability coefficient of the whole scale was 0.88.
Measurement of Resilience
Resilience was assessed using the Chinese version of the Connor-Davidson Resilience Scale (CD-RISC), as translated and revised by Yu and Zhang. 40 The CD-RISC contains 25 items and divides resilience into three dimensions: fortitude, self-improvement, and optimism. The CD-RISC rates each item on a 5-point scale ranging from 0 ("not true at all") to 4 ("true nearly all of the time"), with higher scores reflecting greater resilience. The Chinese version of the CD-RISC has been shown to have good reliability and validity. 40 In the present study, the reliability coefficient of the whole scale was 0.90, and the reliability coefficients for the subscales were 0.84 for fortitude, 0.70 for self-improvement, and 0.63 for optimism.
Measurement of Inner Peace
The Peace of Mind Scale (PoM) was used to measure peace of mind. 29 It comprises a total of 7 items rated on a 5-point scale ranging from 1 ("not at all") to 5 ("all of the time"). Previous research found that the PoM has good reliability and validity. 29 Cronbach's alpha in the present study was 0.89.
Measurement of Past-Negative Time Perspective
PNTP was measured using the past-negative dimension of the Zimbardo Time Perspective Inventory (ZTPI). 1 The ZTPI has shown good cross-cultural reliability and validity. 41 The past-negative dimension of the ZTPI comprises 10 items rated on a 5-point Likert scale.
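The internal-consistency (Cronbach's alpha) coefficients reported for these scales can be computed from an item-score matrix with a few lines of code; the sketch below uses simulated responses rather than the study's data, and the function name is ours.

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents x n_items) score matrix:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example with simulated responses to the 15 MAAS items:
# rng = np.random.default_rng(0)
# latent = rng.normal(size=(366, 1))
# responses = np.clip(np.round(3.5 + latent + rng.normal(scale=0.8, size=(366, 15))), 1, 6)
# print(cronbach_alpha(responses))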
In the present study, the reliability coefficient of the PNTP scale was 0.83.
Statistical Analyses
All analyses in this study were performed using the SPSS 23.0 and AMOS 23.0 software programs. First, we examined the descriptive statistics and correlations of the study variables using SPSS 23.0. Second, AMOS 23.0 was used to establish structural equation models and examine the mediating effects in the whole sample as well as the differences between the meditator and non-meditator groups.
Common Method Bias Test
After data collection was completed, a Harman single-factor test was used to test for common method bias. The results showed that the first factor accounted for only 21.99% of the variance, which was below the critical standard of 40%, indicating that common method bias was not significant.
Descriptive Statistics and Correlation Analysis of Each Variable
The results, as shown in Table 1, demonstrate that mindfulness is significantly positively correlated with resilience and inner peace. Conversely, PNTP is negatively correlated with mindfulness, resilience, and inner peace.
t-Test for Differences Between Meditator and Non-Meditator Groups
Independent-sample t-tests and chi-square tests (Table 2) revealed no significant differences in gender, age, educational background, or economic income between the two groups. As shown in Table 3, the results of the t-test indicated that meditators and non-meditators show significant differences in mindfulness, inner peace, and PNTP.
Testing the Multiple Mediation Model of Dispositional Mindfulness Influencing Past-Negative Time Perspective
Wu and Wen 42 suggest that a single-dimensional scale should be parceled. In the present study, these parcels were formed to control for inflated measurement errors caused by multiple items for the latent factor. Three item parcels each were created for mindfulness and PNTP, and two item parcels were created for inner peace. A random algorithm was used to combine the items into parcels. In addition, the variables of gender and age were controlled in the subsequent structural equation model analysis. Next, we created a multiple mediation model to examine whether resilience and inner peace mediate the impact of dispositional mindfulness on PNTP, and, if so, how this is done. Finally, AMOS 23.0 was used to perform bootstrapping procedures to test the significance of the mediation effects of resilience and inner peace. We generated 2000 bootstrapping samples from the original dataset (N = 366) through random sampling. If the empirical 95% confidence interval did not contain zero, the indirect effect was considered statistically significant. As shown in Figure 1 and Table 4, the bootstrap method revealed that both resilience and inner peace mediate the association between dispositional mindfulness and PNTP. The significant mediating roles were: (a) that of resilience for PNTP, accounting for 12.95% of the total effect (95% CI [−0.096, −0.001]); (b) that of inner peace for PNTP, accounting for 16.35% of the total effect (95% CI [−0.102, −0.023]); and (c) the serial mediating role of resilience via inner peace for PNTP, which accounted for 16.99% of the total effect (95% CI [−0.102, −0.031]). Therefore, our findings also support hypotheses 1, 2, 3, and 4.
Testing Group Differences Between Meditators and Non-Meditators in the Multiple Mediation Model
First, the mediation effect model was tested separately for the meditator and non-meditator groups. The fit indices of the obtained models are shown in Table 5.
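For illustration, a percentile-bootstrap confidence interval for an indirect effect can be computed as sketched below. This is a simplified, regression-based stand-in for the latent-variable models estimated in AMOS, and the variable names in the usage example are hypothetical; inputs are assumed to be NumPy arrays of participant-level scale scores.

import numpy as np

def indirect_effect(x, m, y):
    # Product-of-coefficients a*b for a simple X -> M -> Y mediation, estimated
    # by ordinary least squares: a is the slope of M on X, and b is the partial
    # slope of Y on M controlling for X.
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

def bootstrap_indirect_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap confidence interval for the indirect effect,
    # mirroring the 2000 resamples used in the study; the effect is judged
    # significant when the interval excludes zero.
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical usage with standardised scale scores:
# lo, hi = bootstrap_indirect_ci(mindfulness, resilience, pntp)
# significant = not (lo <= 0 <= hi)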
Both the meditator model (M_m) and the non-meditator model (M_n) were acceptable and can be compared across groups. Next, using the multi-group comparison method of structural equation modeling, the unconstrained model (M_1) and the structural covariances model (M_2) were set. The fitting results of the two models are shown in Table 5, which indicates a significant difference between M_1 and M_2 (Δχ² = 52.18, Δdf = 19, p < 0.001). These findings support hypothesis 5. As shown in Figures 2 and 3 and Table 6, the bootstrap method was used to test the mediation effects of the meditator and non-meditator models. In the meditator group, the serial mediating role of resilience and inner peace between mindfulness and PNTP accounted for 23.44% of the total effect (95% CI [−0.188, −0.046]), while the serial mediating effects in the non-meditator group accounted for 10.98% of the total effect (95% CI [−0.109, −0.007]). In the meditator group, resilience and inner peace each significantly mediated the relationship between mindfulness and PNTP: resilience accounted for 36.31% of the total effect (95% CI [−0.281, −0.065]), and inner peace accounted for 40.25% of the total effect (95% CI [−0.322, −0.081]). The non-meditator group did not display these two effects.
The Multiple Mediation Model of Dispositional Mindfulness Influencing Past-Negative Time Perspective in the Whole Group
Our findings show that dispositional mindfulness negatively affects PNTP. On the one hand, dispositional mindfulness can negatively predict PNTP, and previous studies have found a negative correlation between dispositional mindfulness and PNTP. 4,11 However, our research is a step forward from these studies, as they did not further explore the relationship between dispositional mindfulness and PNTP. Mindfulness may be able to directly affect PNTP because mindfulness is a way of balancing awareness, encouraging one to observe thoughts and feelings without avoiding or attempting to change them or applying exaggeration or bias. 33 This style of awareness can help reduce self-criticism, avoid overthinking negative events, and protect individuals from the harmful effects of depression, anxiety, and other negative emotional states. 13,14,39 Furthermore, as a result of the influences of culture, education, religion, social class, and family, humans can rely too much on a particular time perspective, and this behavior becomes automatic. 1 Mindfulness has a de-automating effect that can reduce tendencies towards cognitive behavioral habituation, 43 helping to reduce automatic or habitual overuse of a particular time perspective. On the other hand, dispositional mindfulness can also indirectly affect PNTP through resilience or inner peace. This is consistent with the IAA model of mindfulness, which indicates that mindfulness involves a nonjudgmental awareness and acceptance of the present experience. 21,27 This enhances individuals' cognitive functioning and emotional regulation, thereby improving resilience and enhancing inner peace, and may therefore provide a more balanced and flexible time perspective.
The Differences in Multiple Mediation Models of Dispositional Mindfulness Influencing PNTP in the Meditator and Non-Meditator Groups
The present study found that for the meditator group, the influence of dispositional mindfulness on PNTP was completely mediated by resilience and inner peace.
For the non-meditator group, only a small part of the influence of dispositional mindfulness on PNTP was mediated by resilience and inner peace. This may be because in the meditator group, due to the regular and continuous practice of meditation, resilience and inner peace have gradually become internalized as relatively stable psychological qualities. This is consistent with previous studies showing that mindfulness meditation can cultivate qualities such as loving-kindness, compassion, inner peace, and resilience. 20,33 Additionally, Buddhism has a system of meditative practices for cultivating similar qualities, further illustrating the importance of meditation practice. Mindfulness meditation has always emphasized that only through the consistent practice of mindfulness skills can these qualities be cultivated, maintained, and improved. Additionally, the present study found that for the meditator group, the effect of inner peace was greater than that of resilience, suggesting that inner peace may be an important predictor of and a good representation of Chinese emotional wellbeing. This finding is consistent with previous research. 28,31 According to Lee et al, 29 traditional Chinese culture is greatly influenced by the three main religions or philosophies of Taoism, Confucianism, and Buddhism, which all value and emphasize inner peace. Additionally, mindfulness can promote inner peace, which is consistent both with previous research and with Buddhist thought. 28,31 The ultimate goal of meditation is inner peace, and in Buddhism, inner peace is considered to be the most important thing in life. 30,31 Finally, the present study found that for non-meditators, the direct effect of mindfulness on PNTP accounted for 89.04% of the total effect. Arguably, the direct effect of mindfulness on PNTP is relatively large, while the indirect effects through resilience and inner peace provide only a relatively small part of the total effect. This shows that mindfulness can promote psychological functioning and mental health, allowing individuals to better understand, manage, and solve problems in daily life, thereby reducing PNTP. However, the direct effect of mindfulness on PNTP also suggests that mindfulness may reduce PNTP in other ways as well. In addition to resilience and inner peace, there may be other mediating variables, which future research should work to identify.
Conclusion and Limitations
This study has a few limitations. Firstly, the results relied completely upon self-report data, which may have potential problems, such as recall bias or response bias. Also, mindfulness is a complex and abstract construct that involves an advanced level of meta-cognition, 44 so self-report methods may not be a reliable source of information. 45 Future research can add direct and objective assessment measures (e.g., the Triangle task) or qualitative analysis to supplement the assessment of mindfulness. 45 Secondly, since this study adopted a cross-sectional design, it was not appropriate to make causal inferences. In addition, we took mindfulness as the independent variable, PNTP as the mediating variable, and inner peace or resilience as the dependent variables to build alternative structural equation models. Although these models are weak, they remain significant; thus, the interpretation proposed here should be treated with proper caution. Future studies should design experiments or longitudinal studies that could complement and validate our findings.
Additionally, a multi-group analysis showed that, for meditators, the effect of dispositional mindfulness on PNTP was fully mediated by resilience and inner peace; however, for non-meditators, the effect of mindfulness on PNTP was only partially mediated by resilience and inner peace. Thus, the mediational model was significantly different between meditators and non-meditators. On the basis of these findings, we conclude that dispositional mindfulness, resilience, and inner peace all negatively predict PNTP. Finally, some studies have found that psychological problems such as difficulties with emotion regulation, addiction, procrastination, anxiety, and depression can be relieved or reduced by regulating an individual's PNTP or by jointly regulating other dimensions of time perspective. 46,47 In the future, a combination of mindfulness and time perspective could be used to design new psychological interventions to regulate moods, manage addiction, reduce unhealthy behaviors, relieve stress and PTSD, and promote positive individual mental health outcomes.
2020-05-07T09:12:34.355Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "3a64c03c45569931a23036770dc4c8caa35966cd", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=57911", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "66b0c49709bdf78dd9851b957419330702b2606d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
261445721
pes2o/s2orc
v3-fos-license
N-acetyltransferase 2 genetic polymorphisms and anti-tuberculosis-drug-induced liver injury: a correlation study
Background: Considering the genetic characteristics of people with anti-tuberculosis (TB)-drug-induced liver injury (ATDILI), genetic factors and their consequences for treatment need to be studied. Objective: The correlation between N-acetyltransferase 2 (NAT2) genetic polymorphisms and ATDILI was analysed. Methods: In this study, the liver and coagulation functions of 120 patients with TB were monitored dynamically for at least 3 months. The genetic polymorphisms of patients were detected by pyrosequencing, and the distribution of NAT2 genetic polymorphisms and acetylation types was compared across degrees of liver damage. Results: The results showed that there were significant differences in the distribution of alleles and acetylation types among the different groups (p < 0.05). Patients with grade 4 liver injury (liver failure) all carried genotypes composed of two alleles from among *6 and *7. Specifically, liver injury occurred in 42.4% (14/33) of patients with fast acetylation genotypes, 55.2% (32/58) of those with intermediate acetylation genotypes, and 65.5% (19/29) of those with slow acetylation genotypes. Conclusion: Patients with slow acetylation genotypes had higher rates of liver failure and liver injury than those with intermediate and fast acetylation genotypes, and patients with slow acetylation genotypes containing any two of the *6 and *7 alleles had a higher rate of liver failure than those with other alleles. In addition, the time to liver injury in patients with slow acetylation genotypes was earlier than the overall average, and the time to liver function recovery in patients with fast acetylation genotypes was shorter than the overall average.
Introduction
According to the Global Tuberculosis Report 2022 released by the WHO, an estimated 10.6 million people became ill with tuberculosis (TB) in 2021, compared with 10.1 million in 2020, and 1.6 million people died from TB in 2021, compared with 1.5 million in 2020 (Bagcchi, 2023). Isoniazid (INH), rifampicin (RFP), pyrazinamide (PZA) and ethambutol (EMB) are the first-line medications used in traditional anti-TB therapy, and all are metabolised by the liver, which may lead to the development of anti-TB-drug-induced liver injury (ATDILI). Deaths caused by ATDILI are uncommon but possible (Wang et al., 2016; Jiang et al., 2021; Zhou et al., 2022). The risk of liver damage during treatment can vary significantly between individuals and accordingly raises the issue of individual susceptibility (Kim et al., 2010; Bose et al., 2011; Chen et al., 2015). The occurrence of ATDILI is related to the production and elimination of toxic substances during drug metabolism in the liver, and INH is the most prominent first-line anti-TB drug that causes drug-related liver injury (Huang, 2014). Isoniazid is metabolised in vivo by N-acetyltransferase 2 (NAT2) to produce intermediate products, such as acetyl-INH, isonicotinic acid and acetyl hydrazine, which ultimately produce non-toxic diacetyl hydrazine. Hydrazine and ketene, which are produced in this metabolic pathway, are hepatotoxic substances that can cause drug-related liver injury (Huang et al., 2002; Saukkonen et al., 2006; Huang, 2007; Tostmann et al., 2008).
NAT2 genetic polymorphisms affect NAT2 activity and can thus increase the risk of drug-related liver injury in patients with TB (Zhang et al., 2018). Several papers have reported a correlation between NAT2 genetic polymorphisms and ATDILI and posited that slow acetylation by NAT2 was significantly associated with the risk of ATDILI (Ng et al., 2014; Beijing Chest Hospital, Capital Medical University, 2021; Yang et al., 2022). However, studies correlating NAT2 genetic polymorphisms with ATDILI in China are rare. In addition, it has been reported that a personalised clinical drug dosage model can be developed for TB treatment, which is especially important for those areas of South and East Asia with high incidences of ATDILI (Bagcchi, 2023). Therefore, we aimed to investigate the association of NAT2 genetic polymorphisms with ATDILI in Chinese patients with TB. The distinct NAT2 genotypes can be divided into three types, i.e., fast, intermediate and slow acetylation genotypes. According to the results of genetic tests in previous research on the four mutant loci of the NAT2 gene (C282T, T341C, G590A and G857A), locus 282 was typically mutated in combination with locus 590 or 857 to form alleles *6 and *7; locus 341 was mutated to form allele *5; and sequences with none of the four loci mutated carried the wild-type allele *4. According to these four alleles, NAT2 can be classified into the wild-type homozygous fast acetylation type, i.e., *4/*4; the wild-type/mutant heterozygous intermediate acetylation type, i.e., *4/*5, *4/*6 and *4/*7; and the mutant homozygous slow acetylation type, i.e., genotypes containing any two of alleles *5, *6 or *7 (Dong et al., 2020). On this basis, we evaluated the association of NAT2 genetic polymorphisms, grouped into these three genotypes, with ATDILI in TB. Overall, we hypothesised that NAT2 genetic polymorphisms are associated with ATDILI.
Subjects
The inclusion criteria were as follows: 1) patients aged 16-85 years; 2) a clear diagnosis of TB undergoing initial treatment, including patients with pathological confirmation, pathogenetic confirmation and clinical diagnosis; 3) normal liver function before anti-TB treatment and 4) patients who provided signed informed consent for their voluntary inclusion in the study. The study's exclusion criteria were as follows: 1) abnormal liver function before anti-TB treatment; 2) patients with other diseases that could cause abnormal liver function, including alcoholic and viral hepatitis, cirrhosis, immune haemolytic disease and congestive heart failure; 3) patients who were taking other drugs that could cause abnormal liver function, including immunosuppressants, antitumour drugs, acetaminophen and chlorpromazine; 4) patients who had not completed 3 months of anti-TB treatment for reasons other than liver impairment and 5) patients with abnormal coagulation function before anti-TB treatment. The selection criteria were based on existing research (Dong et al., 2020). The present study was approved by the Ethics Committee of Wenzhou Central Hospital (No. K2020-04-003), and the corresponding documents are listed in Supplementary Table S1. Figure 1 displays the schedule for the experiment.
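The genotype-to-phenotype mapping described above can be written as a small helper function; the sketch below uses the allele labels as given in the text (the function name is ours).

def nat2_acetylator_phenotype(allele1, allele2):
    # Map a NAT2 genotype (two alleles) to the acetylation type used in the
    # study: *4/*4 -> fast; *4 together with one mutant allele -> intermediate;
    # any two of the mutant alleles *5, *6, *7 -> slow.
    wild = "*4"
    mutant = {"*5", "*6", "*7"}
    if allele1 == wild and allele2 == wild:
        return "fast"
    if wild in (allele1, allele2) and ({allele1, allele2} & mutant):
        return "intermediate"
    if allele1 in mutant and allele2 in mutant:
        return "slow"
    raise ValueError(f"unrecognised genotype {allele1}/{allele2}")

# Examples:
# nat2_acetylator_phenotype("*4", "*4")  -> "fast"
# nat2_acetylator_phenotype("*4", "*6")  -> "intermediate"
# nat2_acetylator_phenotype("*6", "*7")  -> "slow"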
A diagnosis of TB was confirmed in 120 patients; they included 88 males and 32 females, aged 18-82 years, with a mean age of 55.5 years. The cases included 100 cases of pulmonary TB, six cases of extrapulmonary TB (one case of tuberculous meningitis, three cases of tuberculous pleurisy, one case of lymphatic TB and one case of tuberculous abscess) and 14 cases of pulmonary TB combined with extrapulmonary TB. The latter comprised eight cases of pulmonary TB combined with tuberculous pleurisy, one case of pulmonary TB combined with pelvic TB, two cases of pulmonary TB combined with cervical lymphatic TB, two cases of pulmonary TB combined with laryngeal nodules and one case of pulmonary TB combined with intestinal TB. Among them, 33 patients (all with grade-1 liver injury) developed adaptation to treatment, i.e., mild liver profile elevations that normalised despite treatment continuation. Thirty-two patients (patients with grades 1, 2 and 4 liver injury) changed their anti-TB regimen due to elevated liver indexes. The general conditions of the study participants are listed in Table 1. The 'yes' and 'no' values for diabetes mellitus and hypertension in Table 1 represent patients with and without the respective disease. All 120 patients in this study were inpatients. The doctors in charge asked about their medical history in detail, including the use of hypotensive and hypoglycaemic drugs. The patients' blood pressure and blood glucose levels were monitored during hospitalisation. As noted in the research methods above (Dong et al., 2020), a standard combination anti-TB regimen was implemented for at least 3 months, and all patients received first-line anti-TB drugs according to the following regimen: 2HRZE/4~7HR (i.e., two months of intensification and four to seven months of consolidation), with consistent drug doses as follows: INH 300 mg once daily; RFP 450 mg once daily; EMB 750 mg once daily and PZA 500 mg three times daily. The treatments were adjusted accordingly if any patient developed definitive ATDILI.
N-acetyltransferase genetic polymorphism detection steps and determination methods
Specific whole-blood deoxyribonucleic acid (DNA) extraction was conducted using standard procedures described in existing studies (Zhong et al., 2018). The manufacturer's instructions for the QIAamp DNA Blood Mini Kit (Cat. No. 51104; Qiagen, Valencia, CA, USA) were followed. A venous blood sample (2 mL) was collected from each patient in the early morning under fasting conditions using an EDTA-K2 anticoagulated blood collection tube, which was immediately mixed and sent for DNA extraction. Finally, the DNA content was assessed using a NanoDrop 2000™ spectrophotometer (ThermoFisher Scientific, Waltham, MA). Information about the NAT2 gene and mutant loci was obtained from the PubMed literature database (www.ncbi.nlm.nih.gov), and primers for pyrosequencing of the NAT2 gene at loci 282, 341, 590 and 857 were designed using PyroMark Assay Design (version 2.0) software. The specific design process was adopted from an earlier report (Dong et al., 2020). The primers are listed in Supplementary Table S1. The experiments involving polymerase chain reaction amplification, single-strand DNA template preparation and pyrosequencing were performed as described previously (Zhong et al., 2018; Renu et al., 2021).
Observation indicators
Observation indicator tests were conducted following the standard procedure described in existing studies (Zhong et al., 2018). Patients in each group were tested for alanine aminotransferase (ALT), alkaline phosphatase (ALP), total bilirubin (TBil), prothrombin time (PT), glutamic oxaloacetic transaminase (AST) and prothrombin activity using fully automated biochemical analysers (iChem-520; KuBeier, Shenzhen, China) at weeks 2, 4, 8 and 12 of anti-TB treatment. The international normalised ratio (INR) is a blood coagulation index used to monitor the therapeutic effect in patients taking oral anticoagulant drugs. The INR is calculated by dividing the patient's PT value by the mean PT value of the control plasma used in the laboratory to obtain a ratio; this ratio is then corrected using the INR formula: INR = (patient PT ratio)^ISI, where ISI is the International Sensitivity Index of the reagent. Time before liver damage was the time between drug administration and the first detection of abnormal liver function (ALT/glutamic oxaloacetic transaminase [AST]/TBil) or coagulation function (INR/PTA). Generally, the tests were performed at weeks 1, 2, 3, 4, 8 and 12 of treatment. However, when patients showed symptoms of liver damage or adverse drug reactions, such as nausea, vomiting, rash or other symptoms, we immediately conducted liver function and coagulation tests. Some tests were performed slightly earlier or later because of rest days, and the measurement unit was days. Time of liver function recovery: for patients with liver injury, we strengthened the monitoring of liver function and coagulation function and reviewed them after two to three days. The calculated time was the time required for the patient's liver function (ALT/AST/TBil) and coagulation function (INR/PTA) to return to normal, and the measurement unit was days.
Detecting the degree of liver function impairment
The reference standard for the degree of liver function impairment (Guidelines for the Diagnosis and Treatment of Anti-Tuberculous Drug-Induced Liver Injury, 2019) (The Society of Tuberculosis, Chinese Medical Association, 2019) was adopted to determine the degree of liver injury. The levels of ALT, ALP, TBil and INR in the patients were measured using fully automated biochemical analysers (iChem-520; KuBeier, Shenzhen, China) to grade the degree of liver injury as follows: Grade 1 (mild liver injury): recoverable elevation of serum ALT and/or serum ALP, TBil <2.5 times the upper limit of normal (ULN) (42.8 μmol/L) and INR <1.5. Grade 5 (fatal): death due to ATDILI or the need to undergo a liver transplant to survive.
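The INR calculation described under the observation indicators can be sketched as follows; the example values are illustrative only and are not from the study.

def calculate_inr(patient_pt, mean_normal_pt, isi):
    # INR = (patient PT / mean normal PT) ** ISI, where ISI is the
    # International Sensitivity Index of the thromboplastin reagent.
    return (patient_pt / mean_normal_pt) ** isi

# Example: a prothrombin time of 15 s against a laboratory mean of 12 s with
# ISI = 1.1 gives an INR of about 1.28.
# print(calculate_inr(15.0, 12.0, 1.1))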
Statistical analysis
This study used SPSS (version 24.0, IBM) statistical software to conduct the statistical analysis. Measurement data that obeyed a normal distribution were described as means ± standard deviations, and a one-way analysis of variance was used for comparisons between multiple groups. Measurement data that did not obey a normal distribution were described by medians and interquartile ranges, and a Kruskal-Wallis H rank-sum test was used for comparisons between multiple groups. Paired samples were compared using a Wilcoxon rank-sum test, while count data were described by frequencies. The Chi-squared test or Fisher's exact probability method was used to analyse distribution differences, and p < 0.05 was considered statistically significant. We estimated the sample size for the distribution of acetylation types among liver function groups in the Chi-squared test using PASS software (version 15.0) (NCSS, LLC, Kaysville, Utah, USA). A sample size of 35 achieved 90% power to detect an effect size (W) of 0.7072 using a six-degrees-of-freedom Chi-squared test with a significance level (alpha) of 0.05.
Results
In this study, the authors hypothesised that NAT2 genetic polymorphisms were associated with ATDILI. This section presents a comparison of the basic clinical data of all the participants and excludes other interfering factors. Additionally, the relationship between the different alleles and acetylation types and the degree of liver injury in patients is explored. To study the relationship between the degree of liver function impairment and coagulation function, the authors measured the coagulation indexes of the patients. Finally, the authors investigated the relationship between the different acetylation types and the time to occurrence and recovery of liver injury in patients.
Comparison of clinical baseline data
The general conditions of the study participants were analysed, and the results showed that differences in gender, age, diabetes mellitus, hypertension, INR and PTA were not statistically significant between the different groups (p > 0.05). Additionally, although there were statistical differences in AST and TBil between the different groups (p < 0.05), the values were all within the normal range, as shown in Table 1.
Comparison of alleles and acetylation types
The analysis of alleles and acetylation types in the study participants indicated that their distributions differed significantly between groups (p < 0.05), and patients with grade 4 liver injury carried two alleles from among *6 and *7. Liver injuries occurred in 42.4% (14/33) of patients with fast acetylation genotypes, 55.2% (32/58) of those with intermediate acetylation genotypes and 65.5% (19/29) of patients with slow acetylation genotypes. All liver injuries in patients with fast acetylation genotypes were mild, and the proportion of patients with slow acetylation genotypes who had grade 4 liver injury was higher than in those with intermediate and fast acetylation genotypes. These results suggest that patients with slow acetylation genotypes containing any two of the *6 and *7 alleles may have a higher rate of liver failure than patients with other allele types. None of the patients exhibited grade 3 or grade 5 liver injury. For additional details, see Table 2.
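The sample-size estimate quoted under the statistical analysis (N = 35 for 90% power with W = 0.7072, df = 6 and alpha = 0.05) can be reproduced with a short noncentral chi-squared calculation; a sketch (function names are ours):

from scipy.stats import chi2, ncx2

def chisq_power(n, w, df, alpha=0.05):
    # Power of a chi-squared test with Cohen's effect size w and df degrees of
    # freedom at sample size n: under the alternative, the test statistic follows
    # a noncentral chi-squared distribution with noncentrality n * w**2.
    crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, n * w ** 2)

def min_sample_size(w, df, target_power=0.90, alpha=0.05):
    n = 2
    while chisq_power(n, w, df, alpha) < target_power:
        n += 1
    return n

# With the parameters reported above, min_sample_size(0.7072, 6) returns a value
# close to the quoted N of 35.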
Comparison of the time before liver damage and the recovery of liver function in patients with different acetylation types
The results showed that the time to onset of liver damage and the time to liver function recovery did not differ statistically between the different groups (p > 0.05). The time to liver damage was earlier than the overall mean time in patients with slow acetylation genotypes, and the time required for the recovery of liver function was shorter than the overall mean time in patients with fast acetylation genotypes, as shown in Table 3.
Discussion
In countries with high rates of TB treatment, ATDILI is a major problem. Existing studies have focused on patients with genetic polymorphisms and altered genes encoding the metabolic enzymes for INH bioactivation and inactivation, leading to the differential accumulation of INH-active metabolites and resulting in liver injury (Zhou et al., 2022; Bagcchi, 2023). Currently, with the discovery of new mechanisms, the relevance of INH-mediated mitochondrial dysfunction in ATDILI is gradually being recognised, but the exact mechanism of its occurrence remains unclear (Jiang et al., 2021). This study investigated the correlation between NAT2 genetic polymorphisms and ATDILI in Chinese patients with TB.
(Yang et al., 2019) reported that ATDILI was more likely to occur in patients with NAT2 slow acetylation genotypes, who may require close monitoring.In addition, this study found that patients with the slow acetylation genotypes had a higher rate of liver injury than patients with intermediate and fast acetylation genotypes; patients with slow acetylation genotypes developed liver damage earlier than the overall mean time, and all three cases of liver failure occurred in patients with slow acetylation genotypes.Accordingly, slow acetylation types have a greater risk of causing ATDILI. Suvichapanich Supharat et al. (Suvichapanich et al., 2018) conducted a meta-analysis of 18 studies involving 822 cases of ATDILI and 4,630 controls; they confirmed a strong association between each NAT2 slow acetylation genotype and ATDILI, except for NAT2*5B/*5B.Furthermore, a meta-analysis also argued that a personalised clinical drug dosage model is especially important for the populations of South and East Asia with a high incidence of ATDILI. Additional in vitro studies with INH as a substrate provided support for the presence of ultralow acetylation alleles (NAT2*6A and NAT2*7B).In Thailand, a study concluded that NAT2 slow acetylation genotypes are a high risk factor for drug-induced liver injury in patients with TB (Wattanapokayakit et al., 2016).In Japan, Higuchi et al. (Higuchi et al., 2007) found that slow acetylation genotype NAT2*6 could increase hepatotoxicity in patients with TB, while acetylation genotype NAT2*4/*4 could reduce the risk of liver injury in such patients.The results of this study found that patients with grade 4 liver injury (liver failure) had either *6 or *7 alleles, and patients with grade 2 liver injury had either *6 or *7 alleles; it was hypothesised that patients with slow acetylation genotypes containing either *6 or *7 alleles may have a higher rate of liver failure than those with other types of alleles. The above studies reflect the difference in dominant subgenotypes of NAT2 between countries and races, and further investigation is required in relevant studies. The correlation between genetic polymorphism and changes in serum enzyme expression levels depends on several factors, including genotype, environmental factors and genetic interactions.In molecular biology, genetic polymorphism refers to the presence of different alleles of the same gene in a population, which may affect gene expression and function and lead to biological differences between individuals.These genotype differences may lead to changes in enzyme expression levels, affecting the metabolism and other biological processes.Some genetic polymorphisms have been shown to be correlated with the expression levels and activity of specific enzymes.For example, single-nucleotide polymorphisms have been found to be correlated with the expression levels of some metabolic enzymes in certain genes.In addition, other types of genetic variations, such as insertions/deletions or gene locus amplification, may also affect enzyme expression and function. 
However, the correlation between genetic polymorphism and enzyme expression levels is also influenced by environmental factors and genetic interactions.For example, diet, drug exposure and environmental factors may affect enzyme expression levels, altering the relationship between genotype and enzyme expression (Scott, 2010).Additionally, genetic interactions may also affect the relationship between genetic polymorphism and enzyme expression levels.Therefore, more research is needed to determine the relationship between genetic polymorphism and enzyme expression levels; furthermore, environmental factors need to be controlled and genetic interactions considered. In recent years, due to an increase in the combination of drugs, drug-related liver injury has gradually become a common clinical pharmacogenetic disease, with a serious impact on treatment effects and the quality of life of patients (Liu et al., 2019).The long treatment period and many adverse reactions that may arise during the treatment of patients with TB often lead to treatment interruptions or regimen changes, resulting in reduced efficacy and drug resistance and directly impacting the effectiveness of TB control. There are few studies on NAT2 genetic polymorphisms and a lack of research on the correlation between NAT2 genetic polymorphisms and ATDILI in China.It has been reported that a personalised clinical drug dosage model is especially important for the populations of South and East Asia with a high incidence of ATDILI; thereby, we investigated the association between NAT2 genetic polymorphisms and ATDILI in China.The present study has important implications for identifying patients with a high risk of developing liver damage before anti-TB treatment and has clinical implications for the targeted guidance of individualised drug therapy, the mitigation of ATDILI and the rational distribution and use of drugs, which will benefit patients with ATDILI in China. Herein, we found that in 33 patients with adaptive liver injury (grade 1 liver injury could be recovered by continuing HRZE treatment), there were 12 cases (36.4%) of fast acetylation genotypes, 16 cases (48.5%) of intermediate acetylation genotypes and five cases (15.1%) of slow acetylation genotypes.In 32 patients with a modified anti-TB treatment plan were two cases of fast acetylation genotypes (6.25%), 16 cases of intermediate acetylation genotypes (50%) and 14 cases of slow acetylation genotypes (43.75%).Therefore, we speculated that the adaptability of fast acetylation genotypes was stronger than that of slow and intermediate acetylation genotypes; consequently, patients with fast acetylation genotypes could continue to be treated with the HRZE regimen under the condition of monitoring liver function and coagulation function after the emergence of grade 1 liver injury. Previous studies have found that polymorphisms in the NAT2 gene are most likely to be associated with the anti-TB drug INH (Huang et al., 2021).One study described the liver damage mechanism of INH, RFP and PZA (Tuberculosis Branch of Chinese Medical Association, 2019).In this study, among 30 patients (intermediate and slow acetylation genotypes) with a modified anti-TB treatment regimen, 20 patients were re-used INH, among which, 15 patients were treated with rifapentine (0.6 twice a week), and only four patients were re-used PZA.Twenty-five patients were treated with moxifloxacin (0.4 once a day) or levofloxacin (0.5 once a day).No liver injury was found at follow-up. 
It is speculated that the combination of INH with RFP and PZA results in a superimposed (additive) hepatotoxic effect, especially in slow acetylators, and that INH combined with rifapentine, moxifloxacin or levofloxacin in the treatment of TB may reduce the probability of liver injury. Rifapentine is a newer long-acting rifamycin antibiotic with a good antibacterial effect against Mycobacterium tuberculosis. Compared with the widely used first-line anti-TB drug RFP, its antibacterial spectrum is similar, but its anti-TB activity is 2-10 times higher, with fewer adverse reactions, a longer elimination half-life in plasma and a slightly weaker induction effect on cytochrome P450. However, its hepatotoxicity is still substantial, especially in TB patients with liver injury, and there are marked individual differences.

This study has some limitations. First, the sample size was relatively small; second, no pharmacokinetic data were included. Furthermore, the association between the expression level of NAT2 and the occurrence of ATDILI still requires further investigation.

Conclusion

This study found that NAT2 genetic polymorphisms were associated with the development of ATDILI in Chinese patients with TB, and that patients with slow acetylation genotypes had higher rates of liver injury and liver failure than those with intermediate and fast acetylation genotypes. Additionally, patients with slow acetylation genotypes containing any two alleles of *6 and *7 had higher rates of liver failure than those with other alleles. These results are in line with previous findings. However, the sample size of the liver failure group in this study was small, which had a certain influence on the conclusions. In subsequent studies, we will expand the sample size to further verify the conclusions of this paper.

This study has important implications for identifying patients with a high risk of developing liver damage before anti-TB treatment and has important clinical implications for the targeted guidance of individualised drug therapy, which will benefit patients with ATDILI in China. If the association between genetic polymorphisms and the risk of ATDILI is established, a personalised clinical drug dosage model could be developed for the treatment of TB.

Figure 1. The schedule for the experiment.
Table 1. Comparison of clinical baseline data between the groups with different degrees of liver impairment and the control group (groups without liver impairment).
Table 2. Comparison of alleles and acetylation types in groups with different degrees of liver injury and none (grade 0).
Table 3. Comparison of time before liver damage and liver function recovery time in patients with different acetylation types.
2023-09-02T15:21:19.780Z
2023-08-31T00:00:00.000
{ "year": 2023, "sha1": "44a454fe856a4cc0dc5b822817f4ba5a96cfd375", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2023.1171353/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b943c7bc5329513a7616d1954b8848034b0cf565", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210882897
pes2o/s2orc
v3-fos-license
Case Study: Predictive Fairness to Reduce Misdemeanor Recidivism Through Social Service Interventions

The criminal justice system is currently ill-equipped to improve outcomes of individuals who cycle in and out of the system with a series of misdemeanor offenses. Often due to constraints of caseload and poor record linkage, prior interactions with an individual may not be considered when an individual comes back into the system, let alone in a proactive manner through the application of diversion programs. The Los Angeles City Attorney's Office recently created a new Recidivism Reduction and Drug Diversion unit (R2D2) tasked with reducing recidivism in this population. Here we describe a collaboration with this new unit as a case study for the incorporation of predictive equity into machine learning based decision making in a resource-constrained setting. The program seeks to improve outcomes by developing individually-tailored social service interventions (i.e., diversions, conditional plea agreements, stayed sentencing, or other favorable case disposition based on appropriate social service linkage rather than traditional sentencing methods) for individuals likely to experience subsequent interactions with the criminal justice system, a time and resource-intensive undertaking that necessitates an ability to focus resources on individuals most likely to be involved in a future case. Seeking to achieve both efficiency (through predictive accuracy) and equity (improving outcomes in traditionally under-served communities and working to mitigate existing disparities in criminal justice outcomes), we discuss the equity outcomes we seek to achieve, describe the corresponding choice of a metric for measuring predictive fairness in this context, and explore a set of options for balancing equity and efficiency when building and selecting machine learning models in an operational public policy setting.

INTRODUCTION

Some of the most vulnerable populations in the United States struggle with a complex combination of needs, including homelessness, substance addiction, ongoing mental and physical health conditions, and long-term unemployment. For many, these challenges can lead to interactions with the criminal justice system [28]. Of the millions of people who are incarcerated in jails and prisons each year, more than half have a current or recent mental health problem, and inmates are far more likely to have experienced homelessness or substance dependence. In local jails, where 64% struggle with mental health issues, 10% were homeless in the year before their arrest (compared to a national average under 1% [56]), and 55% met criteria for substance dependence or abuse [34]. By 2005, there were three times as many individuals with serious mental illness in jails and prisons as in hospitals, and the per capita number of psychiatric hospital beds in the US had fallen by an order of magnitude over 50 years, suggesting a failure of the community mental health system to meet the needs of this at-risk population [26]. For some of these individuals, the criminal justice system may be their first or primary interaction with social services, but it is particularly poorly suited to address these additional needs. Lacking needed treatment or other interventions, a significant group of individuals cycles through jails and prisons, with the system as a whole failing to appreciably improve their individual outcomes or public safety [37,39,52].
The Criminal Justice/Mental Health Consensus Project found widespread dissatisfaction with the lack of resources available in the criminal justice system to address mental illness [55], and these failings are borne out in the statistics, with recidivism rates for individuals with mental illness reaching as high as 70% in some jurisdictions [57]. Likewise, Demleitner [18], argues that the combination of lacking effective treatment and "collateral restrictions" (such as restrictions on welfare benefits and employment opportunities) for drug offenders tends to reinforce the cycle of incarceration for people facing substance abuse issues. Faced with the high costs of incarceration, large jail populations booked with low-level misdemeanor offenses, and poor outcomes for these individuals with complex needs, some communities are turning to restorative justice and pre-trial diversionary programs as an alternative to incarceration in an effort to break this cycle. The design and implementation of these programs is as variable as the needs of the populations they serve, including (for example) mental health services, community service or restitution, substance abuse treatment, and facilitated meetings between victims and offenders. Use of these programs has expanded rapidly over the last two decades [50] and recent examinations of opportunities to improve outcomes in the criminal justice system have identified wide support for their continued expansion [55]. Evaluations of diversionary programs have generally shown success in reducing the time spent in jail without posing an increased risk to public safety, as well as increasing utilization of social services by individuals with mental health and substance abuse issues [16,31,40,50]. Although evidence around the relative short-term costs and savings has been considerably mixed, depending in great degree on the implementation details and variation in costs of incarceration across communities [17,50], there seems to be a growing consensus that diversionary programs that reflect individuals' specific challenges and needs can have a positive impact on those individuals. Our Work This paper describes a collaboration between the University of Chicago's Center for Data Science and Public Policy 1 and the Los Angeles City Attorney's Office to develop individualized intervention recommendations (i.e., diversions, conditional plea agreements, stayed sentencing, or other favorable case disposition based on appropriate social service linkage rather than traditional sentencing methods) by identifying individuals most at risk for future arrests for misdemeanor offenses handled by their office. The case study we present here is focused on dealing with equity, fairness, and bias issues that come up when building such systems, including: identifying desirable equitable outcomes from the policy view, defining these metrics for specific problems, understanding their implications on individuals, performing machine learning model development and selection, and helping decision-makers decide how to achieve their policy outcomes in an equitable manner by implementing such a system. 
While there has been a lot of theoretical work done on fairness in machine learning models in resource allocation settings, our work is focused on taking the many definitions and metrics for fairness that exist in the literature and showing how to operationalize those definitions to select a metric that optimizes a specific policy goal in a public policy problem. (The corresponding authors and the collaboration have now moved to Carnegie Mellon University.) We believe that this mapping from theory to practice is critical if we want data-driven decision making to result in fair and equitable policies. The ethical implications of applications of machine learning to criminal justice systems, particularly recidivism risks, have been the subject of considerable work and recent debate. The May 2016 publication by ProPublica of an investigation into the predictive equity of a widely-used recidivism risk score, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), helped raise both public awareness and researcher interest in these issues. Their analysis found dramatic racial disparities in the score's error rates, with false positive rates nearly twice as high for black defendants relative to white defendants and false negative rates roughly twice as high for white defendants, despite similar levels of precision across racial groups [3,41]. Subsequent scholarly work further explored the COMPAS example as well as the theoretical limitations of various competing metrics for measuring fairness [12,30]. More recently, Picard and colleagues [45] used anonymized data from New York City to demonstrate the generalization of ProPublica's findings to another context and explore more equitable options for implementing risk assessment in bail determination.

Machine Learning in Criminal Justice

The ongoing debates about both the context-specific definitions of fairness and the implications of not being able to meet all definitions at the same time are far from settled, and researchers continue to explore these topics in both the machine learning and legal literature. While some (such as Picard and colleagues [45] as well as Skeem and Lowenkamp [49]) see the promise of algorithms carefully designed with equity in mind to improve on a status quo rife with subjectivity and biases, others raise questions about the practical ability of these tools to overcome existing disparities in the criminal justice system. Citing concerns about biased input data and conflicting definitions, Mayson [44] argues for restraint in the use of any predictions in criminal justice applications, particularly for punitive outcomes such as denying bail or handing down harsher sentences. Likewise, Harcourt [29] argues that strong associations between prior arrest history and race could exacerbate the "already intolerable racial imbalance" in prison populations through the growing use of risk scores in criminal sentencing. While our work focuses on an assistive intervention use case of identifying at-risk individuals for social service interventions, which seems to raise fewer inherent ethical concerns for many authors (e.g. Mayson [44] and Harcourt [29]), we nevertheless believe it is important to carefully consider fairness in these predictions in order to ensure that scarce resources are being allocated in a manner consistent with social goals of fairness and equity, instead of purely optimizing for efficiency alone.
Ideally, to the extent that these programs may lower the risk of future arrests associated with individuals' existing challenges, accounting for predictive fairness in programs that help divert individuals from jail may even help counterbalance existing disparities in incarceration rates of these vulnerable populations. Previous work has enumerated metrics for evaluating bias [27,58], explored inherent conflicts in satisfying them [12,30], and described case studies and applications to a variety of problems [6,14,30,46]. The main contributions of this work include our framework for equity analysis, methods for balancing equity with other goals such as efficiency and effectiveness, and the application of this framework and methods to a public policy problem. Section 2 discusses the context of the work, data, and our approach. Section 3 briefly reviews the results of modeling and initial validation on novel data. Section 4 explores the potential sources of bias in this context while Section 5 discusses predictive fairness specifically and strategies for mitigating disparities. Section 6 concludes and discusses implications for similar applications and opportunities for future research. PROBLEM AND APPROACH 2.1 Recidivism Reduction in Los Angeles The Los Angeles City Attorney's Office has taken a leading role in developing and implementing innovative programs to improve individual outcomes and public safety. Their array of community justice initiatives reflect principles of partnering with the community to work in its best interest, creative problem solving, civic-mindedness, and attorneys embodying a leadership role in the community [24]. Many of these programs have received recognition for their holistic view of justice and the City Attorney's role in the community, including pop-up legal clinics for homeless citizens, prostitution diversion efforts, and a neighborhood justice initiative that focuses on restorative justice over punitive responses for low-level offenses [7,47,59]. Believing that traditional prosecutorial approaches have proven insufficiently effective as a response to misdemeanor crime -particularly in the context of a city facing overcrowded jails, endemic homelessness, and closures of county courthouses -the LA City Attorney has also recently created the Recidivism Reduction and Drug Diversion Unit (R2D2) to develop, oversee, and implement new criminal justice strategies rooted in evidence-based practices, data analytics, and social science. The unit has seen success with proactive community outreach programs (such as LA DOOR [9]) seeking to bring services to, and remove legal barriers from, individuals afflicted with substance abuse, poverty, and homelessness. But R2D2 also has a more ongoing role as well, seeking to improve the results of individuals who frequently cycle in and out of the criminal justice system as they show up involved with new misdemeanor cases. Recognizing that these chronic offenders reflect a failure of the existing criminal justice system to either deter future offenses through punitive actions or improve the underlying challenges that are leading the individual back into the system, R2D2 aims instead to develop individualized social service intervention plans in hopes of disrupting this unproductive cycle. 
However, the unit faces a number of challenges in preparing such diversion plans in real time when a case arises: the heavy caseload handled by the City Attorney's Office, very short turn-around times between initial booking and prosecutorial resolution, and poor data integration (including, in some cases, paper records). Ideally, these intervention plans could be prepared in advance and ready for implementation if and when a given individual was seen by their office again. However, because the process of developing case histories and recommendations for appropriate social service interventions is time and resource intensive, R2D2 could not practically prepare them for the large number of individuals who have been involved in past cases and instead needs a means of prioritizing the individuals most likely to be involved in a new misdemeanor case in the near future in order to effectively implement such a program. To aid R2D2 in identifying chronic offenders, determining caseload priorities, developing prioritized interventions, and protecting public safety, the Los Angeles City Attorney partnered with us to develop predictive models for the risk of a given individual to be involved with a subsequent interaction with the criminal justice system. The goals of this work were to build a system that 1) enables efficient use of the limited resources the City Attorney's office has, and 2) results in mitigating existing disparities in criminal justice outcomes. Data Data extracts from the City Attorney's case management system were provided for the project. As with any project making use of sensitive and confidential individual-level records, data protection is of the utmost importance here and all the work described in this paper was done under strict data use agreements and in secure computing environments. These data included information about jail bookings, charges, court appearances and outcomes, and demographics relating to cases handled by their office between 1995 and 2017. Because the system lacks a global unique personlevel identifier, case-level defendant data was used to link cases belonging to the same person using a probabilistic matching (record linkage) package, pgdedupe [4]. Matches using first and last name, date of birth, address, driver's license number (where available), and California Information and Identification (CII) number (where available) identified a total of 1,531,534 unique individuals in the data, associated with 2,456,365 distinct City Attorney cases. Machine Learning Modeling Strategy and Goals To assist R2D2 with their workload management and proactive case and intervention preparation, we used these data to develop predictive models of individuals likely to cycle back into the criminal justice system, choosing as our target variable (label) an indicator of whether a given individual was associated with at least one new booking into the local jail or City Attorney case in the subsequent six months. It is worth highlighting that, as several authors have noted previously [3,12,29,38,44], target variables focused on subsequent arrest, booking, or prosecution are highly imperfect proxies for subsequent crime commission (because, particularly for lower-level offenses, not all crimes committed lead to arrests, and policing practices and decisions may result in disparities between communities in enforcement rates), nor can or should the resulting scores be interpreted as any reflection of the underlying criminality of the individuals about whom predictions are made. 
We suggest that two factors mitigate these potential ethical concerns in this case: First, that the nature of this program is supportive and designed to help the individual rather than punitive ameliorates the potential for harm associated with being predicted to have a high risk. And, second, the reactive nature of the intervention means that predicting the likelihood of subsequent interaction with the criminal justice system is in fact the appropriate outcome of interest here: the tailored intervention plans will only be put into effect for those individuals who are involved in a subsequent case handled by the City Attorney and the aim of the program is to provide better outcomes for these people if and when they do return. We do recognize that there are potential ethical issues here around the misuse of such a system when given to the wrong agency but have worked closely with the organizations involved to ensure that this does not happen. From its inception, this work had two key goals: First, to improve the efficiency of R2D2's ability to serve the community through appropriate social service intervention programs by identifying individuals for whom advance preparation of individualized intervention plans was likely to be warranted. And, second, to ensure that the program resulted in equitable outcomes, consistent with the unit's goals of improving outcomes in traditionally under-served communities and working to mitigate existing disparities in criminal justice outcomes. As such, we sought to develop models that were effective at predicting future interactions with the criminal justice system, while evaluating the predictive fairness of these models and taking steps to ensure decisions based on these predictions were equitable as discussed further in Section 5 below. An important assumption to make explicit here is that the additional consideration individuals will receive on a subsequent case as a result of being selected by the model will in fact accrue to their benefit (as well as enhance public safety in general) by helping them successfully exit the criminal justice system in the long run. We arrived at this assumption through the process of scoping and defining the project in detailed conversations with the City Attorney's Office, as well as our understanding of the scholarly literature surrounding the needs and barriers to success of many individuals involved in the criminal justice system. In particular, our belief that a better understanding of how the criminal justice system has failed to improve outcomes for these individuals in the past will allow R2D2 to develop forward-looking strategies that will do so in the future provides the foundation for how we analyze and understand the fairness implications of our predictive model in Section 5. However, this assumption can and should be tested rigorously and regularly in a fully implemented program and, if found to be faulty, a review of the equity and ethical implications of the work would be necessary. Because the program was focused on improving outcomes for people frequently cycling through the criminal justice system, we focused our modeling efforts on those individuals who had more than one prior interaction (initial analyses also indicated that this cohort was far more likely to experience a subsequent interaction as well). Feature construction, model training, and performance evaluation was performed with the open-source machine learning toolkit, triage [1]. 
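As a rough sketch of the cohort and label definition described above (the actual pipeline used triage against the case management database; the table and column names here are hypothetical):

```python
import pandas as pd

def build_cohort_and_labels(events: pd.DataFrame, as_of_date: str) -> pd.DataFrame:
    """events: one row per case or jail booking, columns ['person_id', 'event_date'] (datetime)."""
    as_of = pd.Timestamp(as_of_date)
    horizon_end = as_of + pd.DateOffset(months=6)

    # Cohort: individuals with more than one prior case or booking before the as-of date.
    history = events[events["event_date"] < as_of]
    prior_counts = history.groupby("person_id").size()
    cohort = prior_counts[prior_counts > 1].index

    # Label: any new case or booking in the six months following the as-of date.
    future = events[(events["event_date"] >= as_of) & (events["event_date"] < horizon_end)]
    has_new_event = set(future["person_id"])

    return pd.DataFrame({
        "person_id": cohort,
        "label": [int(pid in has_new_event) for pid in cohort],
    })
```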
Features developed from the input data included information on the number and type of previous charges (structured to indicate the type and relative seriousness of each offense), information on origins and outcomes of prior City Attorney cases, demographics, prior jail bookings (and associated charges), and frequency and recency of prior criminal justice interactions. A grid of binary classification methods (including regularized logistic regressions, decision trees, random forests, and extra trees classifiers) and associated hyperparameters was evaluated for performance on the task of identifying the top 150 people most at risk of a new case or booking in the next six months, with the focus on the model's top 150 chosen as a potentially reasonable workload for R2D2. To ensure evaluation and model selection was done in a manner that reflected performance on novel data in a context in which policies and practices may change over time, we used a strategy of inter-temporal cross-validation [33] with modeling dates spaced at 6 month intervals between January 1, 2012 and January 1, 2017, each evaluated on the subsequent six month period. ML MODELING RESULTS Results of the grid search used for model selection are shown in Figure 1. Many of the models and hyperparameters tested performed in a similar range, with precision (positive predictive value) at the top 150 varying over time in a range between 70-80%, and a final model was chosen for its balance between overall performance and stability. 2 As of January 1, 2017 the City Attorney's data included 415,614 individuals who had more than one prior misdemeanor case or jail booking (and were included in the model built at that time). The baseline rate at which these individuals had a new criminal justice interaction over the next six months was 4.4% (18,374), indicating that relatively few people eligible to be included in the model were seen again over the evaluation period. From January 1 through June 30, 2017, 109 of the 150 highest-risk individuals identified by the model were involved with a new case or booking in this time window, a rate of 73%, and much higher than the overall 4.4% (random) baseline. Among the most predictive features used by this model to identify risk are the individual's age (both at time of first arrest and as of the prediction date), number of recent priors, and recency of their last interaction with the criminal justice system. With any modeling system built on temporal data, there is always the possibility that information "leaks" from the future to artificially improve model performance. For example, a coding error may cause events to become misdated. Although we diligently searched for such errors in the system, the best test of a model's performance is how well it predicts events that haven't happened yet. As a test of how the model would perform on new events that we did not have access to when we built the system, we used our modeling tools to make predictions for the second half of 2017 at the conclusion of the initial model development. In 2018, we received a second data transfer from the LA City Attorney and matched the new cases and bookings from July 1, 2017 through December 31, 2017 to our predictions for that period and found that, out of the 150 highest risk individuals, 104 (69%) went on to have a new case or booking during the last half of 2017. 
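For concreteness, a minimal sketch of the kind of inter-temporal evaluation described above, with hypothetical column names and the modeling itself abstracted away; only the six-month split spacing and the top-150 precision computation are shown:

```python
import pandas as pd

def precision_at_k(scores: pd.Series, labels: pd.Series, k: int = 150) -> float:
    """Fraction of the k highest-scoring individuals whose observed label is positive."""
    top_k_idx = scores.sort_values(ascending=False).head(k).index
    return labels.loc[top_k_idx].mean()

def temporal_splits(start: str = "2012-01-01", end: str = "2017-01-01"):
    """Yield (as_of_date, test_end) pairs spaced at six-month intervals."""
    for as_of in pd.date_range(start, end, freq="6MS"):
        yield as_of, as_of + pd.DateOffset(months=6)

# Usage sketch: for each split, train on data before `as_of`, score the cohort,
# observe labels over [as_of, test_end), and record precision_at_k(scores, labels).
```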
Taken together, these results indicated that the predictive model we had developed could perform and generalize reasonably well at achieving the project's first goal of improving the efficiency of R2D2's efforts to proactively develop social service intervention plans.

BIAS AND FAIRNESS

Gathering case histories and developing individually-tailored recommendations for social service intervention plans is a time- and resource-intensive process for the R2D2 staff. Even if given considerable advance warning of individuals likely to be seen by their office again in the future, they would only be able to do so for a small fraction of individuals. They therefore want to ensure that they are allocating these scarce resources in a manner that is both efficient and equitable. As in any machine learning problem, there are a number of potential sources of bias that could influence the equitability of our results: the representativeness of the sample, accuracy of labels/outcomes and columns/variables, data reconciliation and processing, feature engineering, the modeling pipeline, and program implementation (such as the assignment and effectiveness of interventions). As other authors have discussed, particular concerns in the criminal justice context stem from sample and label biases [3,12,29,38,44]. Over-policing in communities of color may lead both to an unrepresentative sample for recidivism projects as well as label issues when subsequent arrests are used as indicators of future criminality. Likewise, racial disparities in conviction rates and sentencing may introduce bias into labels that rely on these criminal justice outcomes. A broad array of socioeconomic factors certainly contribute to historical and ongoing disparities in underlying crime rates that can inform programmatic goals and concepts of fairness even when labels may be considered reliable. Improving machine learning results with respect to fairness has recently been a very active area of research, with several innovative approaches proposed at various stages of the process. Providing a framework for decomposing the components of biases, Chen and colleagues [11] suggest that targeted collection of additional examples or new features may be an effective mitigation strategy in some cases. Others, including Zemel and colleagues [62], Celis and colleagues [23], Edwards and Storkey [21], Agarwal and colleagues [2], and Zafar and colleagues [60,61], have focused on accounting for biases directly in the learning process by making modifications such as introducing costs for departures from equity into the loss function during model training. Equity metrics have also been introduced in the process of model selection [13,51], balancing test set performance in terms of both accuracy and fairness in making the choice of modeling method and associated hyperparameters. Where an existing classifier shows disparate results, Dwork [20] described methods for eliminating biases by learning separate group-specific models on top of the existing classifier, and Hardt [30] likewise describes model-agnostic post-processing steps to mitigate disparities. Even when taking steps to account for and remove bias issues earlier in the pipeline, auditing the resulting predictions for fairness, using tools such as aequitas [48], is necessary to understand both how effective these mitigation strategies have been and detect any residual biases. Our approach in Section 5 focuses on this latter phase of post-hoc bias detection and mitigation.
And, although we directly use labels that reflect future interactions with the criminal justice system, we do not rely on an assumption that these labels provide an unbiased representation of subsequent criminal activity and in fact explore an approach to predictive fairness that seeks to counteract existing base rate disparities that might arise from the sorts of sample and label biases that others have raised as potential concerns when working with criminal justice data. Additionally, in Section 6 we provide further thoughts on detecting and avoiding biases in program implementation, both in this particular case as well as more generally.

PREDICTIVE FAIRNESS

5.1 Measuring Fairness

Much has been written about the competing (and often mutually exclusive) concepts of fairness in machine learning problems [12,27,30,58]. In the context of recidivism prediction, this debate has focused primarily on punitive applications, such as risk scores being used to deny defendants bail or even to assign harsher sentences to individuals with higher risk. In that setting, individuals may be harmed by being predicted to be at higher risk than they in fact are: that is, many of the relevant fairness metrics include some measure of false positives produced by the score. The program we focus on in this work, however, is supportive in nature, aiming to improve long-term outcomes for defendants through diversion programs, tailored social service interventions, and additional consideration of their case history (refer to Section 2.3 for a discussion of the underlying assumptions here). Moreover, because the tailored intervention recommendations will only be acted upon on a subsequent case, the interventions only apply to individuals who the model correctly classifies as high risk. As such, there is minimal risk of individual harms accruing from false positives (while they do represent wasted effort on the part of the R2D2 team, we see relatively few equity considerations in that regard). Instead, the individuals who could be viewed as harmed by an inequitable application of this program are those who might have benefited but were mistakenly classified as unlikely to return: that is, the model's false negatives. In most cases, this would lead us to consider equity metrics that focus on disparities concerned with individuals who may benefit from the assistance but are left out of the program, such as the false omission rate or false negative rate (Figure 2 provides more detail on our framework for choosing predictive fairness metrics). However, the limited scale of the program due to the office's constrained resources poses additional challenges for thinking about equity. Because intervention recommendations can only be prepared for a small fraction of the individuals who will actually be charged with another misdemeanor, any implementation will unavoidably have a large number of false negatives. A focus on false omission rate parity, for instance, is not meaningful for such a small program because the false omission rates will very nearly approximate the underlying prevalence for each group and will not be possible to balance given the limited number of people who can receive assistance. Likewise, in these cases, the false negative rate for each group will be very close to 1; although balancing FNR across groups in these cases is possible, focusing equivalently on recall is easier in practice (for instance, with more meaningful ratios across groups).

Figure 2: A framework for considering potential fairness metrics. Here, FP/GS is the number of false positives divided by the total size of each group of interest (e.g., the subset of individuals of a given race, gender, etc.), FN/GS is the analog with false negatives, FDR is the false discovery rate, FPR the false positive rate, FOR the false omission rate, and FNR the false negative rate. In practice, considering the trade-offs across multiple metrics is often desirable. Note that while focusing on recall in the case of a small assistive program is equivalent to focusing on FNR parity, it may have nicer mathematical properties, such as meaningful ratios.
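A small sketch of the point above, using the metric definitions summarized in the Figure 2 caption: with a list of only 150 people drawn from hundreds of thousands, each group's FNR is necessarily close to 1 and its FOR closely tracks its prevalence, while group-wise recall remains interpretable. Column names are hypothetical.

```python
import pandas as pd

def group_metrics_at_k(df: pd.DataFrame, k: int = 150) -> pd.DataFrame:
    """df: columns ['group', 'score', 'label'] with label = 1 if a new case/booking occurred."""
    selected_idx = df["score"].nlargest(k).index
    df = df.assign(selected=df.index.isin(selected_idx))
    rows = []
    for g, sub in df.groupby("group"):
        positives = (sub["label"] == 1)
        tp = (sub["selected"] & positives).sum()
        fn = (~sub["selected"] & positives).sum()
        rows.append({
            "group": g,
            "prevalence": sub["label"].mean(),
            "recall": tp / max(tp + fn, 1),
            "FNR": fn / max(tp + fn, 1),                      # ~1 when k is tiny relative to need
            "FOR": sub.loc[~sub["selected"], "label"].mean(),  # ~prevalence when k is tiny
        })
    return pd.DataFrame(rows)
```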
Additionally, in the case of limited resources, we see a reasonable interpretation of recall as a fairness metric in itself, noting that it corresponds to what Hardt and colleagues [30] term "equality of opportunity": given that the program cannot serve everyone with need, we may want to at least ensure that the set of people it does serve is representative of the distribution of need across protected classes in the population. To evaluate the predictive fairness of our best-performing model, we looked at the distribution of recall (also known as sensitivity) by race/ethnicity. Figure 3 illustrates the presence of disparities if the model were used to select the 150 highest-risk individuals without consideration of equity. While recall is similar for black and white individuals, hispanic individuals are considerably underrepresented in the top 150 group relative to their actual prevalence.

Mitigating Disparities

While the predictive performance of the model satisfied the goal of efficiency (using precision or positive predictive value as the metric) defined at the outset of the project, the racial disparities found above fell short of satisfying the equally important goal of fairness. In order to remedy this shortcoming, we explored slightly adjusting the score threshold used by the model to select individuals from each race/ethnicity group, in order to better balance recall across the groups. Some authors have argued that using separate thresholds in the interest of balancing predictive equity in itself falls short of fairness by treating individuals with similar risk profiles in different ways [15]. However, concepts of fairness through unawareness have been consistently demonstrated to be misguided [8,10,19,25,38,54], and any process that seeks to balance the dual goals of equity and efficiency will face an inherent trade-off between these objectives, even where it is obscured by the process involved. For instance, when a more equitable but less predictive model is chosen over a more predictive but less equitable one to distribute a benefit, there will always be some individual whose score in the more predictive model would have qualified them for a benefit that they didn't receive as the result of choosing the more equitable model. Though both models may be well-calibrated in a limited sense, data was available to better understand the risk profile of this individual that was ignored in the interest of equity, implicitly making the same trade-off as allowing the threshold to vary by group. For further discussion of this ongoing debate from a legal perspective, see the informative pieces offered by Kroll and colleagues [38] as well as Bent [5] and Huq [32], which highlight several of the competing standards and interpretations of colorblindness, equal protection, disparate treatment, and disparate impact, and their implications for algorithmic decision making. Of particular interest here is the suggestion by Kroll [38] that the Supreme Court's findings in Ricci v DeStefano might prohibit any post-hoc algorithmic adjustments made in the interest of fairness along the lines of protected attributes.
Several others [5,35,36,43], however, disagree with this interpretation, noting that the harm involved in Ricci was in undoing a benefit that had already been awarded, not anything inherent in auditing or improving an algorithm after the fact, so long as those improvements are used to make future decisions rather than to reverse past ones. Other authors [42,53] likewise speak to the potential necessity of differential treatment to avoid or mitigate disparate outcomes for ML-aided decision making. This need may even be more acute in contexts where there may be a compelling social goal of counteracting existing disparities or historical inequities. Huq [32] further discusses the tension between existing legal and technical concepts of fairness, suggesting a need for practical evaluation of algorithms on the basis of their actual long-term impact on disparities. Finally, from a more technical perspective, Hardt and colleagues [30] make a strong case for the ability of post-processing to achieve several definitions of fairness and describe the procedure they propose as shifting the burden of uncertainty from the protected class to the decision maker. Dwork and colleagues [20] likewise explore "decoupling" methods that allow for improving equity by learning group-specific classifiers built on top of existing "black box" algorithms. We could further consider the trade-offs involved with meeting the goal of equity in two ways: (1) One option would be to measure an "additional cost of equity" in terms of programmatic resources. If more resources are available (or could be obtained), the scale of the program could be expanded to serve the 150 highest-risk individuals along with additional high-risk individuals who are underrepresented in this set. (2) If, however, the program has a hard constraint on resources, then there is a more explicit trade-off between equity and efficiency. In this case, some individuals from over-represented groups would, of necessity, be left out in order to serve slightly lower risk individuals from under-represented groups. In either case, we also wanted to consider how adjusting for predictive equity might affect longer-term outcomes, particularly in the presence of underlying disparities in the baseline prevalence across groups. Assuming the program is equally effective across individuals (an assumption that does need to be validated), simply balancing recall (or sensitivity) across groups would aim to improve outcomes proportionally across groups without increasing disparities (as could happen if the model were deployed without consideration of predictive equity), but wouldn't serve to counteract existing disparities. We therefore provided an additional set of options for the City Attorney's Office to consider, balancing recall not equally across groups, but relative to their current rate of having repeated interactions with the criminal justice system.
While both options will focus more resources on groups with higher need, the latter seeks to improve outcomes more rapidly for these groups relative to others, ideally resulting in equal recidivism rates across groups over time. Because recall is monotonically increasing with the depth traversed into a score, we could readily determine thresholds that balance this metric across groups (either equally or relative to prevalence as noted above) using the procedure described in Algorithm 1.

Algorithm 1 (excerpt, prevalence-relative balancing): choose a reference group g_ref against which to normalize prevalences; calculate target ratios for each group relative to this reference, r_g = P_g / P_g_ref; if a desired reference recall value R_g_ref is specified, set k_g = max(n_g,i) such that R_g,i <= r_g x R_g_ref for all g in G; if instead a desired total list size K is specified, initialize x = min(R_g_ref,i) and a small step size s, stepping x upward to reach the desired total size K.

For forward-looking predictions, the within-group list sizes k_g were determined by balancing recall to the specific objective for the most recent complete test set. Considering first options that expand the scale of the program in the interest of recall equity, we looked at how many additional case histories and intervention recommendations the R2D2 staff would need to be able to prepare to include the 150 highest-risk individuals as well as enough individuals from groups under-represented by this set such that either (a) every group had a recall as near to 0.81% (the highest observed in the top 150) as possible, or (b) the ratio between the recall for each group and that for white individuals (0.66%) was equal to the ratio of their prevalences. In the latter case, this required targeting higher values of recall for black (1.04%) and hispanic individuals (0.80%) relative to white individuals, as shown in Figure 4A. In either case, the scale of the program would need to expand by about 50% in order to meet these criteria: to 218 individuals for equalized recall or 228 individuals for recall balanced relative to prevalence. Figure 4B breaks these counts down by race/ethnicity groups. Alternatively, smaller thresholds can be applied to each group to satisfy these criteria on the distribution of recall while maintaining a total program size of 150 individuals. Figure 4B shows how these options break down by group as well. In particular, note that many more hispanic individuals are included in either case than when simply focusing on the 150 highest-risk individuals. When equalizing recall, fewer black individuals are included than even in the top 150 case, while their higher underlying prevalence results in a more similar number being included when balancing recall relative to prevalence. While keeping the scale fixed at 150, we can also consider the explicit trade-offs between equity and efficiency. In this case, we find only a modest decrease in precision is required to achieve more equitable predictions: precision for both recall-balanced options is only 2 percentage points lower than for the 150 highest-risk individuals without accounting for fairness (70.7% vs 72.7%). Although we can explore a variety of options and make explicit the trade-offs inherent to balancing program size and costs, efficiency, and equity, the choice of how to weigh these factors against one another is fundamentally one of policy and judgment.
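A sketch in the spirit of Algorithm 1 (an illustrative reconstruction, not the authors' exact implementation): per-group list sizes are found by stepping a reference recall target upward, with each group's target either equal to the reference or scaled by its prevalence ratio.

```python
import numpy as np
import pandas as pd

def balanced_group_sizes(df: pd.DataFrame, total_k: int = 150,
                         reference: str = "white", proportional: bool = False) -> dict:
    """df: columns ['group', 'score', 'label']; returns per-group list sizes {group: k_g}."""
    prevalence = df.groupby("group")["label"].mean()
    ratios = {g: (prevalence[g] / prevalence[reference]) if proportional else 1.0
              for g in prevalence.index}

    # Pre-sort each group by score; recall is monotone in the list depth k.
    sorted_labels = {g: sub.sort_values("score", ascending=False)["label"].to_numpy()
                     for g, sub in df.groupby("group")}
    positives = {g: labels.sum() for g, labels in sorted_labels.items()}

    def size_for_target(g: str, target_recall: float) -> int:
        cum_recall = np.cumsum(sorted_labels[g]) / max(positives[g], 1)
        hits = np.where(cum_recall >= target_recall)[0]
        return int(hits[0]) + 1 if len(hits) else len(sorted_labels[g])

    # Step a reference-group recall target upward until the per-group sizes fill the budget.
    r_ref, step = 0.0, 1e-4
    sizes = {g: 0 for g in sorted_labels}
    while sum(sizes.values()) < total_k and r_ref <= 1.0:
        r_ref += step
        sizes = {g: size_for_target(g, ratios[g] * r_ref) for g in sorted_labels}
    return sizes
```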
In practice, this involved a series of detailed conversations between the data science team at the University of Chicago and the policy makers at the LA City Attorney's Office about how to understand the meaning of each metric, possible limitations of the data, available resources, and goals both for R2D2 generally and this project specifically. Not only has this process greatly helped us refine our understanding of operational predictive fairness for policy problems, but we believe it will yield better and more equitable outcomes for Los Angeles. DISCUSSION As of this writing, the City Attorney's Office is implementing the system internally and exploring deploying these predictive models in their current workflow. While the model could be deployed as a "black box" process that periodically generates predictions based on the current state of their data, this sort of implementation runs the medium-term risk of degraded performance (in terms of both precision and fairness metrics) as patterns in the data change with changing laws and social context. Instead, an effective implementation will require ongoing evaluation of both the performance and fairness of the model's predictions over time, revisiting the model training and selection process as needed to ensure it continues to reflect changes in the underlying relationships. From a measurement perspective, one simplifying feature of the program discussed here is its reactive nature: while social service intervention recommendations would be prepared for the set of individuals selected by the model, interventions will only take place in response to a subsequent case involving these individuals. As a result, the relevant pre-intervention outcomes for all individuals can in fact be measured, allowing for ongoing assessment of both model performance and equity. In many programs, however, this may not be the case. Where interventions are seeking to prevent the adverse outcome the model is working to predict, it may be difficult or impossible to measure true and false positives without an understanding of the counterfactual of what would have happened in the absence of the intervention. For instance, among a cohort of unemployed individuals who receive assistance through a job training program and subsequently find employment, it would be impossible to say who would have found a job without the help of the program, inhibiting the accurate measurement of recall (along with many other potential metrics) as a means to assess performance or fairness. Data scientists and policy makers working in such contexts will need to carefully consider a strategy for ongoing measurement and feedback depending on the practical and ethical considerations relevant to their specific context, potentially drawing on methods from program evaluation and causal inference. Despite the challenges, continuing to assess and improve both efficiency and equity over time is a critical element of any predictive system that will be deployed to an ongoing application. Finally, we should comment on the interaction between predictive fairness and fairness in outcomes. Although our focus here has been on the machine learning aspects of a project and considerations around fairness in the decision of who will receive the benefits given limited resources, this work cannot be divorced from broader questions of fairness in the context of the overall program implementation. 
Here, the inclusion of scenarios that incorporate disparities across racial/ethnic groups in the underlying prevalence of a subsequent interaction with the criminal justice system in the decision making process represents one step in moving beyond a simplistic view of predictive equity. However, as programs such as the one described here are implemented, equity needs to be considered not only at the level of the machine learning pipeline, but in the context of programmatic outcomes as well. Ensuring fairness in decisions made with the aide of predictive models is an element of this broader goal of fairness in outcomes, but is far from sufficient to ensure it. In order to do so, programs need to assess the potential for differential impact of their interventions across protected groups and feed this understanding back into both their decision making about who receives interventions and, importantly, into the design of the interventions themselves to ensure they are best serving vulnerable populations. The appropriate concept of fairness, both in decision making and implementation, is highly dependent on the nature of the program in question. The supportive nature of the social service intervention plans and implementation details of the program described here led us to focus on balancing recall in the predictive outputs of the model we developed, but this decision would be less appropriate for measuring fairness in other settings. Our hope is that the framework in Figure 2 will help machine learning practitioners and other stakeholders arrive at the appropriate concept of predictive fairness in their specific context. Likewise, as discussed above, there are ethical implications of how predictive scores such as those developed in this case study are used and interpreted. The potential for selection and label biases in the training data mean it would be highly inappropriate to interpret the resulting scores as any reflection of the underlying criminality of the individuals about whom predictions are made, let alone take any actions that reflect such an interpretation. A related concern might involve the possibility of stigma or stereotyping associated with being identified as high risk for a future arrest. Similar issues have been described in the context of educational programs aimed at predicting students at risk of dropping out [22] and seem particularly salient in the criminal justice context as well. Structurally, two factors may help reduce these risks here: First, that the intervention here only involves acting on social service plans should an individual in fact be involved in another case rather than proactively reaching out to these individuals and alerting them that they have been flagged as at risk. And, second, that the intervention plans reflect what the City Attorney's Office ideally would prepare for every case (time and resources permitting) rather than a specific program developed for these high-risk individuals that might garner some stigma. Nevertheless, this concern only further highlights the fact that carefully monitoring for actual improvement in outcomes and potential unintended consequences such as these is a vital aspect of the implementation of any program intending to assist vulnerable populations. This case study with the Los Angeles City Attorney's Office is in many ways a work in progress. 
We have learned a great deal from the collaboration about how to approach and understand predictive equity and the trade-offs involved in implementing a public policy program. Our hope is that these lessons and insights will prove informative to others working to balance the dual goals of equity and efficiency in the application of machine learning to other problems facing government agencies. The methods and analyses described here are most directly applicable to other resource-constrained benefit allocation problems. Such problems, of course, are found in many public policy settings: allocating food or housing subsidies, giving additional tutoring to students, identifying long-term unemployed individuals for a job training program, or distributing healthcare workers across rural communities in a developing nation. With some modification, a similar approach certainly seems applicable to other settings (for instance, where the intervention is punitive such as with inspections for hazardous waste violations or fraud detection) so long as a single equity metric can be identified which increases or decreases monotonically with a score cut-off. Exploring the trade-offs between equity, efficiency, and effectiveness across other contexts and applications to understand the best general approaches to balancing these goals is an ongoing research interest for us. Similarly, additional research is needed to understand how to extend this work to contexts where there is a less clearly-defined choice of fairness metric (for instance, where there are appreciable costs to disparities in false positives and false negatives) or the relevant metric is not monotonically increasing or decreasing with the prediction threshold (e.g., false discovery rate). While some recently-developed methods provide considerable flexibility for optimizing for a wide variety of fairness metrics in classification (see, for instance, [23] for both a good example in itself and overview of other methods), a number of practical challenges remain to be addressed such as adapting these methods to the common challenge of allocating limited resources and associated non-convex "top k" optimization problem this implies.
2020-01-23T09:11:23.958Z
2020-01-24T00:00:00.000
{ "year": 2020, "sha1": "06e041dd7fb880e672103174aaa84ad0e44cbd50", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2001.09233", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cb0cab29ad521a204f2e5fd738c2bb82780808c1", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Computer Science", "Psychology" ] }
33258404
pes2o/s2orc
v3-fos-license
Disadvantaged Parents' Engagement with a National Secondhand Smoke in the Home Mass Media Campaign: A Qualitative Study

Mass media campaigns can be effective in tobacco control but may widen health inequalities if they fail to engage disadvantaged smokers. This qualitative study explored how parents with young children living in disadvantaged circumstances engaged with a national campaign which aimed to raise awareness of the importance of smokefree homes. Individual semi-structured interviews were carried out with 17 parents before and after the Scottish 2014 "Right Outside" mass media campaign. A conceptual framework exploring meaningful exposure (recall and understanding), motivational responses (protecting children from secondhand smoke (SHS)) and opportunities to act (barriers) was used to thematically analyse the findings. Campaign recall and engagement, and motivation to protect children, were high. Parents identified with the dramatized scenario and visual impact of SHS harm to children in the TV advertisement. Some reported changed smoking practices. However, supervising young children in limited accommodation when caring alone constrained opportunities to smoke outside. Instead, parents described actions other than smoking outside that they had taken or were planning to take to create smokefree homes. Mass media campaigns using emotive, real-life circumstances can be effective in engaging parents about SHS. However, the behavioural impact may be limited because of difficult home environments and circumstances.

Introduction

Substantial international evidence on the negative health effects of exposure to secondhand smoke (SHS) exists, particularly for children, for whom these effects include reduced lung function, asthma, glue ear, and bronchitis [1,2]. Reducing children's SHS exposure is, therefore, an important aim of tobacco control. SHS exposure reduced in adults and children following the implementation of comprehensive smokefree public places policies in countries such as Scotland (in 2006) and England (in 2007) [3][4][5]. However, smokefree policies do not encompass private homes, where children spend most of their time. Further, the decline in SHS exposure has been greatest in socioeconomically advantaged groups, thereby increasing social inequalities in children's SHS exposure [5]. In Scotland, 33% of the most socioeconomically disadvantaged children are exposed to SHS in homes compared to 6% of the most advantaged children [6]. This reflects higher parental smoking rates and fewer smoking restrictions in disadvantaged homes [7]. While disadvantaged parents attempt to reduce children's SHS exposure by restricting smoking to one room and/or to when children are not present [8][9][10][11], only creating and maintaining a smokefree home effectively protects children [12]. However, a recent systematic review of the barriers, motivators, and enablers to smokefree homes concluded that these include: knowledge, awareness and risk perception; agency and personal skills/attributes; wider community norms and personal moral responsibilities; social relationships and influence of others; perceived benefits, preferences and priorities; addiction and habit; and practicalities [13]. While some of these apply to all study settings and populations, the increasing introduction of smokefree legislation has shown that disadvantaged parents can face several barriers to creating smokefree homes, including limited access to safe outside space, more permissive smoking norms and limited awareness/acceptance
of SHS messages [8][9][10][11]. In 2014, Scotland became the first country to set a national target to reduce the number of children exposed to SHS in the home, from 12% to 6% by 2020 [14].The announcement of this target was supported by the launch of a national mass media campaign "Right Outside" which aimed to increase awareness about the importance of a smokefree home to reduce children's SHS exposure.Mass media campaigns, when designed appropriately, can be effective as part of tobacco control programmes [15].For example, smoking cessation mass media campaigns have increased awareness of the harms of smoking, changed smoking attitudes and norms, increased quitting intentions and quit attempts, and reduced adult smoking prevalence [16,17].SHS mass media campaigns have not been evaluated as extensively or as comprehensively as those on cessation [18].However, an international review of more than 30 SHS mass media campaigns suggested that the more successful campaigns used television, relied on "testimonials" or personal stories, focused on the health impact of SHS on children, and elicited negative emotions while not demeaning smokers [18], and a recent UK study indicated a tentative and temporary effect of secondhand smoke campaigns on smokefree homes [19].TV advertisements containing graphic health harm messages were also found to be the most effective in increasing motivation for creating smokefree homes in male smokers and non-smokers in China, India, and Russia [20]. Concerns have been raised that mass media campaigns may inadvertently increase health disparities due to an unequal uptake of the messages by different socioeconomic status (SES) groups, and inequalities in social resources to support quitting smoking [21,22] or the ability to smoke outside the home [11].While evidence on the equity impact of SHS mass media campaigns is limited, and a recent large UK survey study found no evidence of participants' socioeconomic status affecting the likelihood of them creating a smokefree home after secondhand smoke mass media campaigns [19], a review of cessation campaigns found that disadvantaged populations were more likely to respond to negative imagery or testimonials about the negative effects of smoking [23].Advertisements which stir emotional reflections about one's own behaviour or life may be particularly persuasive, but the emotional reaction has also been found to depend on the extent to which individuals relate to the characters and/or story portrayed in both cessation [24] and SHS campaigns [18]. 
The impact of a mass media campaign is usually measured by levels of recall and/or changes in awareness, attitudes, or behaviour [21].Niederdeppe and colleagues [21] have identified three different areas of relevance when exploring the effect in different SES groups: meaningful exposure, motivational response to messages, and the opportunity to act on them.Meaningful exposure relates to the comprehension and recall of the message(s).Smokers in disadvantaged circumstances may have lower exposure as they may have less access to media (e.g., Internet), or use or attend to media differently.The motivational response to the message(s) relates to whether the campaign messages increase motivation to seek further information, advice and/or support to implement the message by, for example, phoning quit lines or exploring websites.Opportunities to act acknowledges that people from different SES groups may be constrained in responding to the message(s) communicated, as lower SES groups have fewer social (e.g., social capital, networks, workplace cultures) and structural (e.g., healthcare access) resources to support behaviour change even when motivation to change is high [25].This analytical framework is particularly relevant to assess disadvantaged parents' responses to a smokefree homes campaign, as previous research has shown that opportunities to act might be constrained by their limited domestic and social circumstances [26,27].Concerns have also been expressed about the unintended consequences that tobacco control interventions, including mass media, can have such as increasing guilt and stigma, particularly among disadvantaged mothers [28,29].Disadvantaged parents of young children thus represent a target group for smokefree mass media campaigns, but may experience particular barriers in responding to the messages.However, as far as we are aware, no studies have used qualitative methods to explore how parents who live in disadvantaged circumstances engage with and respond to mass media campaign messages on SHS in the home.This is surprising as effective targeting of specific populations requires an in-depth understanding of current awareness, reasoning around current protective measures and barriers to responding to smokefree home messages [18]. 
The Scottish Government's "Right Outside" media campaign aimed to increase awareness of SHS and the importance of smokefree homes to protect child health.The key messages were children's greater vulnerability to SHS and the ineffectiveness of many parents' existing attempts to protect children by restricting smoking to one room and/or dispersing the smoke through an open window.The campaign had several elements including community events, radio advertisements, posters and a website (http://www.rightoutside.org).The main element of the campaign was a 40-s TV advertisement which aired up to 10 times a day from 26 March to 15 June 2014 on three terrestrial TV channels.The advert showed a mother tucking her six-year old son into bed, shutting his bedroom door and the kitchen door before lighting a cigarette.She opens the window and when her partner enters she gestures to him to shut the door quickly, and waves her hand to disperse the smoke.She then puts the cigarette out and closes the window.A split screen shows on one side her watching TV with her partner, brushing her teeth and going to bed, while on the other side tobacco smoke gradually darkens her son's lungs as shown on his white t-shirt while he sleeps.A voice-over explains that "Kids breathe faster than adults.When you smoke indoors, your secondhand smoke lingers in the air.You can't see it or smell it but it's there.No matter what you do the harmful chemicals move from room to room for up to five hours, and because your child breathes faster than you, they breathe more of those harmful chemicals.You can choose whether your child breathes secondhand smoke or clean air.For your kids' sake, don't smoke indoors.Take it right outside". In this study we first undertook interviews with disadvantaged parents prior to the media campaign and found that the challenging and changing living circumstances and the increasing mobility of children in their first few years were key barriers to them creating smokefree homes [30].Our previously published paper on these findings focussed on the disadvantaged mothers who were interviewed and their reported attempts to protect their children from both SHS and becoming smokers.We found that these attempts were motivated by the perceived future health and financial burdens these entail [30].In the context of several intersecting dimensions of disadvantage (unemployment, low income, alcohol/drug abuse, and domestic abuse), the imperative to be and to be seen to be a good mother was also key in shaping smoking practices in the home.While some recognised that the strategies that they used to protect their children from SHS (e.g., opening windows, moving to another room) were not wholly effective, they were perceived to be the best that they could do within their constrained circumstances.In this paper we draw on the second wave of interviews with a sub-sample of these parents (including fathers) which took place after the media campaign and explore how they engaged with, and responded to, this mass media campaign which encouraged them to smoke "right outside", given their challenging circumstances, drawing on an adapted version of Niederdeppe et al.'s framework [21] outlined previously. 
Participants Twenty-two mothers and three fathers were interviewed individually pre-campaign in November 2013-January 2014.The post-campaign follow-up interviews with seventeen of these parents took place in May-August 2014.Eight of the original parents could not be contacted or did not turn up for the interview.Fourteen mothers and three fathers aged 22-47 years were interviewed.Parents had one to four children of whom at least one was under three years old.Fourteen were single parents, all were long-term unemployed, and all but one parent lived in a flat without direct access to outside space.All parents smoked apart from four mothers who had quit smoking a few weeks prior to the first interview, two of whom had relapsed by the second interview. The parents were purposively recruited in the first pre-campaign phase of the study from Early Years' Centres in five Edinburgh communities; three disadvantaged and two of mixed SES.Centres were contacted by the interviewer (NR-D) to inform staff about the study.Parents who smoked and had children aged 1-3 years were informed by their key worker first, and then by the interviewer, that their voluntary participation would entail an interview at a place and time of their choice about where, when, and why they smoked.Parents are referred to centres by social services when considered vulnerable and in need of support, usually because of issues related to mental health, drug or alcohol dependency, and/or family breakdown.Participants were informed verbally and in writing in the first phase of the study, that the interview would explore smoking practices around young children in the home.In the second phase, participants re-consented to be interviewed about any changes in their smoking practices and their views on anything they had seen on SHS in the media.Participants were advised before both interviews that they could withdraw from the study at any time and decline to answer any question.Written consent was obtained.Ethical approval was granted by the University of Edinburgh Centre for Population Health Ethics Committee.Participants received a £15 voucher per interview to recognise their participation in the study.Participants' names have been changed to pseudonyms to protect their anonymity. Data Collection Methods Interviews took place in private rooms in the centres, lasted 25-75 min (average length 45 min), and were digitally recorded with participants' permission.In the first interview, parents were asked about their understandings of SHS, smoking restrictions in their home, interactions with others around these, and any changes over time.Accounts of home smoking restrictions were prompted using floor plans, where parents drew a floor plan of their home and indicated where they smoked during pregnancy, after their child was born, and in the period since then.This method was developed in a previous smoking in the home study [10]. 
In the follow-up interviews, which are the focus of this paper, the interview guide (see supplementary materials) covered questions regarding any changes since the first interview to parents' smoking practices and understandings about SHS.If home smoking restrictions had changed they were asked to illustrate the change(s) in another floor plan.They were then asked if they had heard or seen anything about secondhand smoking or smoking in the home since the first interview and, if so, what they remembered.Participants who had seen the TV advert were asked their views about the messages, and if and how they could implement them.At the end of the interview, after being asked about their recall and the impact of the advert, all participants were shown the TV advert on an iPad.Those who had previously seen it were asked about any additional responses to it, and those who had not seen it were asked about their responses to it. Data Analysis The analysis focused on participants' perspectives on the campaign in the second post-campaign interviews.The digital interview recordings were transcribed, and the transcripts read and reread by both authors alongside the field notes of the first author who conducted the interviews.Emerging themes and issues were identified by the first author according to Braun and Clarke's approach to thematic analysis, a flexible and iterative method for identifying, analysing, and reporting patterns as themes within qualitative data [31].The themes were verified by the second author and the final comparative analysis involved both authors.After a full thematic analysis of participants' accounts, the authors analysed participants' accounts of their responses to the mass media campaign using Niederdeppe et al.'s [21] framework of meaningful exposure, motivational response, and opportunity to act.We have adapted the motivational response category of the framework to also encompass increased changes in awareness and/or smoking practices.The quotations from the interviews included in the results section use pseudonyms. Meaningful Exposure All parents reported daily TV (and other media) use, however many watched non-terrestrial channels rather than the three terrestrial channels that the advert aired on.Despite this, 13 of the 17 parents had seen and mentioned the TV advert when asked to recall anything about secondhand smoking that they had seen in the media in recent months, and two mothers who had seen the adverts quoted the campaign messages unprompted.One parent recalled hearing the radio advert, another had seen a campaign poster in the local pharmacy, but parents had not seen the community events or the website.Demonstrating their recall and understanding of the messages portrayed in the TV advert, participants summarised these in their own words: "I think it's quite good, it's getting the message across.It's not just smoking away from your kid, but it lingers as well, when you're smoking round them anyway.It's just as bad, smoking away from them.Because it's still travelling round the house."Cara "Just to take it away from your bairns (children).Don't make them breathe your smoke.And, I agree with it.I do actually agree with it; don't let your bairns breathe your smoke."Louise Recall and comprehension of the messages about lingering smoke and child vulnerability were high both for parents who had seen the advert before and for the four parents who watched the advert for the first time in the interview. 
Motivational Response Most parents had explained that they attempted to protect their children from SHS by restricting smoking to one room and by opening a window in the pre-campaign interviews.Many, therefore, identified with the advert's portrayal of a mother who smoked by her kitchen window to protect her child.For instance, Tania, who had recently quit smoking, described how the advert appealed to her because it "felt real" and reaffirmed her decision to quit smoking."I thought, that's actually quite good...( . . . ) "cause I thought, that's actually more like it" cause it showed the actual child ( . . . ) I just remember it was more realistic ( . . . ) It was more about smoking within close proximity, and the effects it has when you think that closing the window is enough.( . . . ) If I was still smoking, I'd have seen that and went, "oh god", you know what I mean.You think it's enough protection but it's not." Mothers frequently referred to the boy's lungs "going black" (Louise) in the advert, demonstrating its visual and emotional impact of the depiction of a situation close to their own.They appeared to identify with the mother and to see their own child(ren) in the boy."It's definitely had an effect on me.I get guilty when I hear it, you know, ( . . .), it makes it even more penetrable in your brain, you think it's me."Michelle Indeed, one mother who viewed the advert for the first time in the interview was reduced to tears and vowed never to smoke in her home again. All were aware that the advert aimed to inform parents about the risk that their smoking in the home posed to their children's health, responses to which varied from acceptance and guilt, to resistance.Parents accepted that the ideal strategy would be to only smoke outside, and reported feelings of guilt about their smoking in the home given the potential impact on their children's health; guilt that watching the advert appeared to reinforce.Most mothers and one father talked about their emotional responses to the advert: "it gets me every time" as one mother said.For another mother, the guilt provoked by the advert was partly credited with reinforcing her wish to stop smoking: "But the advert's horrible.It makes you feel ten times worse.I don't know seeing the kids lungs is just like that...because they're starting to go...do they no go black?( . . . ) That's horrible.That's definitely another reason to stop."Keira For others, expressions of guilt were tempered by the statement that they were doing their best. "It does make me feel guilty and bad, but I do my best to keep it away from her, so . . ." Louise Guilt was also tempered by references to the smoking practices of others perceived to be less responsible.For example, Tara described her friend's smoking practices and her own attempts to intervene. "Her youngest is a baby coming up for a year, and she smokes in front of her, sorry, from day one, she was in a Moses basket and she's sitting in the living room having a cigarette and the baby's in the Moses basket.And I'm like, what are you doing?You know, and I came up with my baby and she was like . . .I was like, are you wanting to kill your baby? ( . . . ) you have to be cruel to be kind, and then of course when my baby was up there (...) and she went to light up, and I was (makes a "no" noise).That's the first time . . .like I say, I don't tell people what to do in their own house, but I'm like . . 
.you might want to kill your baby, but you ain't killing mine, and the smoke in this house is enough to kill the both of them." Commenting on the guilt the advert provoked, Michelle reasoned it was beneficial and justified as it might change parents' smoking practices. "I think it's a good thing to guilt trip us into stopping, oh, definitely.I mean show us the horrors, you know, shouldn't be doing it and especially with your kids.See those mothers that walk about with fags in their mouths pushing the prams, oh . . ." Michelle includes herself in those who 'should' feel guilty, but then reverts to, and distinguishes herself from, the smoking mother with a pram as a signifier of an irresponsible mother.Admitting some, but resisting full, blame characterises such discourses.Others, like Sharon, claimed the advert had changed their attitudes to smoking in their home and its effect on their children. "I think it's more just like, you don't know until you see things like that.You don't, so your demeanour don't change until you see something like that.Know what I mean, and then you do see it and it does kind of make you go, oh, I've been letting my bairn do that, I've been doing that with my bairn.So it's not very nice, but I think that gets the point across very well.You understand that, it's explaining enough to you, look just don't smoke in your house at all, basically, it's travelling." Three mothers claimed the campaign had altered where they smoked and/or increased their motivation to stop smoking.One of these mothers had misunderstood the smokefree home message and said she had started smoking in other rooms rather than the kitchen where she used to smoke: "They say you don't smoke in the same room as your kid...well, or in the kid's bedroom but it does linger for five hours or so.So, I try to go in rooms that she doesn't go in (now)."Keira In a clear display of resistance to the advert's messages Monica and Lisa, both of whom had seen the advert before, crossed their arms and hardly looked at the screen when the advert played, shaking their heads.Refuting the messages in the campaign, Monica expressed her disbelief and disapproval: "I don't know.Like, they're still saying, like, take it outdoors, but the smoke's obviously still going to be on your clothes, so they've still got to . . . it doesn't make any difference what you do really, eh.So it's weird.( . . . ) Oh, I know 'breathe faster', but if the smoke's just lingering in there and it's only a tiny wee bit, how can they be breathing in more, do you know what I mean, if you can't see it or can't smell it . . .like, if you can't see it or can't smell it, how do the people actually know it's there, do you know what I mean.It's weird.Just mad." Similarly, Lisa refuted the message that a child at some distance in another room or in a well-ventilated space like her own home would be at risk: "And really I think that's just pure scaremongering for mothers.I do know it affects them if they're in the same room and stuff, but if you have like kind of ventilation in your house then I don't believe that that actually happens.( . . . ) I don't think that's right at all, that's why I think it's just scare mongering.Like my windows are open 24/7 a day every day of the week, like all my windows, the only window that gets shut is my daughter's at night!But if they're in the same area as you then obviously it's going to affect them, or if they're walking past you." 
Opportunity to Act Smokefree homes were described as desirable, but impossible, for most parents given their current circumstances, which entailed supervising a very young and mobile child with little support or access to outside space, exposing them to other risks. Their current strategies to protect children from SHS were not seen as wholly effective but the best they could do. For example, Louise expressed her desire to smoke outside to protect her asthmatic daughter from SHS but as a single parent she felt her options were constrained: "I do agree with it, but there are some people . . . Like, I'm a single mum with two kids and I can't leave them in the house while I nip down the stairs for a fag, so hanging out the window is my only option. I think they've actually hit the nail on the head with the advert. But, the way I see it is, if they want people not to smoke, give everybody a house that's got a front and back garden. That would be the only way for it to be possible for me to smoke and still look after my children would be to have a front and back door house." Mark, a single father, described the effect of the advert on his smoking behaviour as temporary because of his addiction to nicotine and limited accommodation: "It made me think. It really did make me think. And I think it took a wee while for me to have another smoke in the house. But then it just goes out of your head, then you start smoking again. It's the addiction. ( . . . ) I couldn't go outside. I wouldn't leave the house because of (son). If I had a garden, yeah, I'd stand at the back door. Close the door and have a cigarette. But there's no way I can get outside. If I had a proper veranda, then I could go out there and close the door. But because there's nowhere I can go to get outside I can't smoke outside." In a similar vein, Michelle, another single mother, did not refute the advert's message or the portrayal of how many parents smoke, but questioned the target of the campaign. "It's good. It's all aimed I think at people that live in, you know, nice houses with gardens to go outside and have a fag and the majority of people I know don't live in a house with a garden to go outside and have a nice cigarette. I live in a flat and say my neighbour on the sixth floor, she can't go downstairs, she's got to hang out her window as well. ( . . . ) The advert is very, kind of, pigeonholed at a certain type of smoker or type of property a smoker might have. Assumed that she's got a partner and a house with a garden when the majority of single folk on low incomes are smokers, live in flats, they know this but they've got to, kind of, stretch it out and make it sound good." Furthermore, parents' accounts were often resistant to the idea that they were solely responsible for their children's SHS exposure, when they could spend significant amounts of time with other adult family members who smoked such as ex-partners and grandparents with less strict home smoking rules.
Alternative Solutions Rather than going outside to smoke, parents sought other options to reduce their children's SHS exposure.Most said they would like to quit: a wish supported by overlapping concerns about child health (SHS exposure and role-modelling smoking), their own health, and family finances, which were significantly stretched for all.Quitting was perceived to be a great challenge."(Quitting)'s not that easy to do either.I know people that have smoked for years and years, even me I can't just wake up one day and go I'm not going to smoke, because that is not going to work.And I've tried to stop a few times and it just doesn't work."Lisa Further, the exponential rise in the UK in e-cigarette use during the study period was, to some extent, reflected in parents' accounts with some using or planning to start using e-cigarettes instead of cigarettes.For example, Sharon described using e-cigarettes inside her home to protect her son who had recently been diagnosed with asthma, while smoking cigarettes when outside her home. "Well, with the normal fags, obviously that . . .(son) can inhale the smoke, or whatever, and the fumes and stuff.But with the electric one, he can't." Events other than the mass media campaign had also increased some parents' awareness of SHS and changed their smoking practices between the two interviews.The rapidly changing lives described by many parents in the first interview [30] which included frequent house moves, relationship breakdowns, and subsequent increased social isolation, had continued for some.Since the first interview, two participants had moved again, one mother had moved in with her partner, two previously single mothers had new partners, and two mothers were pregnant.One of the children had been diagnosed with asthma and another with leukaemia.All these changes, particularly the latter, were reported to have led to reductions or increases in smoking, and other changes in parents' smoking practices. 
Discussion The findings of this study add to our understanding of how disadvantaged parents respond to mass media campaigns on SHS. The study has several limitations. It involved a small number of purposively recruited parents from five communities in Scotland with a diverse socioeconomic profile, whose views may not be generalizable to the wider population of parents who smoke. While the parents were not shown the media advert in the first interview or told about the imminent media campaign, they may have been more receptive to the advert after participation in the first interview about smoking in the home. Additionally, Scotland has had comprehensive smokefree public places' legislation since 2006, so the findings may not be generalizable to countries with less comprehensive, or more recent, smokefree legislation. Despite these limitations, the findings show that disadvantaged parents engaged with the visual, emotive, and real-life circumstances of parents attempting to protect their children from SHS in the 'Right Outside' campaign. However, despite accepting the campaign's messages and a reported high motivation to protect their children from SHS exposure, parents' opportunities to act on the messages were constrained by their domestic circumstances. Unlike the mother in the advert who had a partner (and possibly a garden), many parents in the study did not have partners, gardens, or safe outdoor spaces and, therefore, could not leave their children unattended to smoke outside. Previous studies have found that disadvantaged smokers' responses to mass media campaigns can differ, with more limited understanding and recall of messages, less motivation, and more limited opportunities to act [20]. This study indicates that a TV advertisement, when designed to reflect the practices of disadvantaged parents who smoke in the home but want to protect their children from SHS, can constructively engage parents, but their limited opportunities to act on the messages remain problematic.
Rather than smoking outside, most parents sought and discussed alternative ways of protecting their children, including quitting, which many thought they would find difficult.Parents in similar disadvantaged circumstances may, therefore, need to be offered alternative options which health and social care professionals need to be committed to and confident in providing.Regardless of parents' circumstances or gender, health and social care professionals should provide parents with SHS advice and cessation support.Parents' attempts to quit, reduce and/or protect their children from their smoking within these challenging circumstances should be supported and not undermined [29].Consideration should, therefore, be given to offering and supporting potential harm reduction options, such as temporary use of e-cigarettes and nicotine replacement therapy (NRT), for parents who feel unable to quit and cannot smoke safely outside, to create smokefree homes.Avoiding health risks has become a moral obligation for parents, particularly mothers [32].The findings suggest that tobacco control's attempts to de-normalise smoking may have become embedded in these parents' view of themselves and other parents.We, therefore, propose that the predominant focus on parents (particularly mothers) who smoke in SHS mass media campaigns [18], should be expanded to encompass fathers and grandparents who smoke indoors, whom mothers can often feel powerless to affect [29].This would also be likely to increase campaign effectiveness and avoid the further stigmatisation of disadvantaged parents who smoke [29]. Conclusions Mass media campaigns like "Right Outside", with visual, emotive real-life circumstances portraying parents attempting to act responsibly to reduce children's exposure to SHS, may be effective in engaging disadvantaged parents with SHS messages.However, the impact on parents smoking practices may be limited given their challenging home environments and circumstances.Alternative options and additional support for parents in disadvantaged circumstances are, therefore, needed to avoid undermining their wishes and actions to protect their children. Supplementary Materials: The following are available online at www.mdpi.com/1660-4601/13/9/901/s1,Protecting Young Children in Disadvantaged Households from Secondhand Smoke: Identifying Barriers and Levers to Smokefree Homes.
2016-09-10T08:43:00.142Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "ffc98b4e9eaa9c0efab6afb0ad110dc035d75d15", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/13/9/901/pdf?version=1473427381", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "91a1a21b36a149cdf68a989866340b6f1dc2f52b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
12254446
pes2o/s2orc
v3-fos-license
Isolated Photons in Deep Inelastic Scattering Photon radiation at large transverse momenta at colliders is a detailed probe of hard interaction dynamics. The isolated photon production cross section in deep inelastic scattering was measured recently by the ZEUS experiment, and found to be considerably larger than theoretical predictions obtained with widely used event generators. To investigate this discrepancy, we perform a dedicated parton-level calculation of this observable, including contributions from fragmentation and large-angle radiation. Our results are in good agreement with all aspects of the experimental measurement. Recently, the ZEUS collaboration at DESY HERA reported a measurement [5] of the inclusive production cross section for isolated photons in deep inelastic scattering (DIS). The normalization of the experimentally determined cross section turned out to exceed the cross section expected from the multi-purpose event generator programs HERWIG [9] and PYTHIA [10] by factors of 7.9 and 2.3, respectively. Even after normalizing the total event rate, none of these programs was able to describe all kinematical dependencies of the measured cross section. Since the same event generator programs are used frequently to estimate photon production backgrounds for new particle searches in other collider environments, it appears to be very important to determine the origin of these large discrepancies. In a subsequent study [11] the ZEUS measurement was analyzed in view of determining the photon distribution in the proton, relevant for electroweak radiative corrections at colliders. By further analyzing the hadronic final state in isolated photon production in DIS, it is possible to define photon-plus-jet cross sections. In [5], the isolated-photon-plus-one-jet cross section was also measured and found in good agreement with the theoretical prediction [12]. This observation renders the discrepancy in the inclusive isolated photon cross section even more intriguing, since the inclusive cross section can in principle be obtained by summing all isolated-photon-plus-$n$-jet cross sections, starting with $n = 0$. To investigate the origin of the discrepancy, we performed a new calculation of the inclusive isolated photon cross section in DIS. Production of photons in deep inelastic scattering is described by the leading order parton-level process $q(p_1) + l(p_2) \to \gamma(p_3) + l(p_4) + q(p_5)$, where $q$ represents a quark or anti-quark, and $l$ a lepton or anti-lepton. The measurable cross section for lepton-proton scattering $\sigma(ep \to e\gamma X)$ is obtained by convoluting the parton-level lepton-quark cross section $\sigma(eq \to e\gamma q)$ with the quark distribution functions in the proton. In the scattering amplitudes for this process, the lepton-quark interaction is mediated by the exchange of a virtual photon, and the final state photon can be emitted off the lepton or the quark. Consequently, one finds three contributions to the cross section, coming from the squared amplitudes for radiation off the quark (QQ) or the lepton (LL), as well as the interference of these amplitudes (QL). These were computed originally as part of the QED radiative corrections to deep inelastic scattering [8], where the final state photon remains unobserved. The QL contribution is odd under charge exchange, such that it contributes with opposite sign to the cross sections with $l = e^-$ and $l = e^+$.
The isolated photon rate in deep inelastic scattering is defined by imposing a number of kinematical cuts on the final state particles. In the ZEUS analysis (which combined three data samples: 38 pb$^{-1}$ $e^+p$ at $\sqrt{s} = 300$ GeV, 68 pb$^{-1}$ $e^+p$ at $\sqrt{s} = 318$ GeV and 16 pb$^{-1}$ $e^-p$ at $\sqrt{s} = 318$ GeV), these were chosen as follows: virtuality of the process, as determined from the outgoing electron, $Q^2 = -(p_4 - p_2)^2 > 35$ GeV$^2$; outgoing electron energy $E_e > 10$ GeV and angle $139.8^\circ < \theta_e < 171.8^\circ$; outgoing photon transverse energy $5~\mathrm{GeV} < E_{T,\gamma} < 10~\mathrm{GeV}$ and rapidity $-0.7 < \eta_\gamma < 0.9$. Photon isolation from hadrons is obtained by requiring the photon to carry at least 90% of the energy found in a cone of radius $R = 1.0$ in the $\eta$-$\phi$ plane around the photon direction. This cone-based isolation procedure is commonly used to define isolated photons produced in a hadronic environment. A minimal amount of hadronic activity inside the cone has to be allowed in order to ensure infrared finiteness of the observable. This cone-based isolation could face theoretical problems only if the cone size is chosen much smaller than unity [4], as often required for new particle searches. Finally, to eliminate contributions from elastic Compton scattering, observation of hadronic tracks in the ZEUS central tracking detector was required. This cut cannot be translated directly into the parton model calculation. We translate this cut as follows: the ZEUS central tracking detector [6] covers in the forward region rapidities $\eta < 2$. Requiring tracks in this region amounts to the current jet being at least partially contained in it. Assuming a current jet radius of one unit in rapidity, this amounts to a cut on the outgoing quark rapidity $\eta_q < 3$, which we apply here. Varying this cut results only in small variations of the resulting cross sections.
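To make the selection concrete, the photon-level cuts and the cone-based isolation requirement can be sketched in a few lines of Python. This is an illustration only, not code used in the analysis: the particle representation, the use of transverse energy in the cone sum, and the inclusion of the photon itself in the cone total are assumptions of the sketch, and the electron and $Q^2$ cuts are omitted.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in the eta-phi plane, with the phi difference wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated_photon(photon, hadrons, cone_radius=1.0, energy_fraction=0.9):
    """Cone-based isolation: the photon must carry at least `energy_fraction`
    of the transverse energy found in a cone of radius `cone_radius` around it.
    `photon` and each entry of `hadrons` are dicts with keys 'et', 'eta', 'phi'."""
    cone_et = photon["et"]  # the photon is counted in the cone total (assumption)
    for h in hadrons:
        if delta_r(photon["eta"], photon["phi"], h["eta"], h["phi"]) < cone_radius:
            cone_et += h["et"]
    return photon["et"] >= energy_fraction * cone_et

def passes_photon_cuts(photon, quark_eta):
    """Photon transverse-energy and rapidity cuts of the ZEUS analysis, plus the
    outgoing-quark rapidity cut used here to mimic the track requirement."""
    return (5.0 < photon["et"] < 10.0
            and -0.7 < photon["eta"] < 0.9
            and quark_eta < 3.0)

# Example usage with made-up kinematics
photon = {"et": 6.2, "eta": 0.1, "phi": 1.0}
hadrons = [{"et": 0.4, "eta": 0.3, "phi": 1.2}, {"et": 3.0, "eta": -2.0, "phi": 0.0}]
print(is_isolated_photon(photon, hadrons), passes_photon_cuts(photon, quark_eta=1.5))
```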
In the QQ contribution, the photon can have two possible origins: the direct radiation off the quark and the fragmentation of a hadronic jet into a photon carrying a large fraction of the jet energy. While the former direct process takes place at an early stage in the process of hadronization and can be calculated in perturbation theory, the fragmentation contribution is primarily due to a long-distance process which cannot be calculated within perturbative methods. The latter is described by the process-independent quark-to-photon fragmentation function [14] $D_{q\to\gamma}(z)$, where $z$ is the momentum fraction carried by the photon. $D_{q\to\gamma}(z)$ must be determined by experimental data. Furthermore, when the photon is radiated somewhat later during the hadronization process, in addition to this genuinely non-perturbative fragmentation process, the emission of a photon collinear to the primary quarks is also taking place and has to be taken into account, giving rise to a collinear singularity. As physical cross sections are necessarily finite, this collinear singularity gets factorized into the fragmentation function defined at some factorization scale $\mu_{F,\gamma}$. The factorization procedure of final state collinear singularities in fragmentation functions used here is of the same type as the procedure used to absorb initial state collinear singularities into the parton distribution functions [15]. In the present calculation, we use the phase space slicing method [16] to handle the collinear quark-photon singularity, as described in detail in [12,17]. As a result, the parton-level cross section $\sigma(eq \to e\gamma q)$ contains a collinear divergence, which is compensated by adding the parton-level cross section for the fragmentation process $\sigma(eq \to eq) \otimes D_{q\to\gamma}(z)$ to it. Since the photon is required to carry at least 90% of the energy of the quark-photon cluster, $D_{q\to\gamma}(z)$ is probed only for $z \geq 0.9$. The only data constraining $D_{q\to\gamma}(z)$ come from final state photon radiation in electron-positron annihilation at LEP (some earlier evidence for the non-vanishing of $D_{q\to\gamma}(z)$ was obtained by the EMC experiment [7] in deep inelastic muon-proton scattering). Using the method described in [17], the ALEPH collaboration has performed [13] a direct measurement of $D_{q\to\gamma}(z)$ from the photon-plus-one-jet rate in $e^+e^-$ using a leading order (LO) theoretical calculation. In the method of [17], logarithms of $\mu_{F,\gamma}$ are not resummed, such that any cross sections computed with the $D_{q\to\gamma}(z)$ of [13] are completely independent of $\mu_{F,\gamma}$. Next-to-leading order (NLO) corrections to the photon-plus-one-jet rate in $e^+e^-$ are known [18], and allow for a determination of $D_{q\to\gamma}(z)$ at NLO [19] from the ALEPH data. Several other parameterizations of photon fragmentation functions were proposed in the literature, based on models for the non-perturbative components [20,21]. These parameterizations incorporate a resummation of logarithms of $\mu_{F,\gamma}$, such that physical cross sections acquire a residual dependence on $\mu_{F,\gamma}$. Advantages and drawbacks of this resummation applied to different observables are discussed in [22]. The BFG parameterizations [21] yield a satisfactory description [22] of the ALEPH data. Furthermore, a measurement of the inclusive photon spectrum in hadronic Z boson decays [23] was made by the OPAL collaboration. The OPAL data are consistent [22] with the ALEPH and BFG parameterizations.
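Since the isolation requirement restricts the fragmentation function to $z \geq 0.9$, the structure of the fragmentation term $\sigma(eq \to eq) \otimes D_{q\to\gamma}(z)$ can be illustrated with a toy numerical sketch. The functional shapes below are placeholders for demonstration only, not the ALEPH or BFG fits; the real calculation convolutes the full differential cross section with the fitted fragmentation function inside the experimental cuts.

```python
def fragmentation_contribution(sigma_hat_eq, d_qgamma, z_min=0.9, n_points=201):
    """Toy estimate of the fragmentation term: a parton-level weight sigma(eq -> eq)
    convoluted with the quark-to-photon fragmentation function D_{q->gamma}(z),
    with the isolation requirement restricting z >= z_min.
    Simple trapezoidal rule over z in [z_min, 1]."""
    dz = (1.0 - z_min) / (n_points - 1)
    total = 0.0
    for i in range(n_points):
        z = z_min + i * dz
        weight = 0.5 if i in (0, n_points - 1) else 1.0
        total += weight * sigma_hat_eq(z) * d_qgamma(z)
    return total * dz

# Placeholder shapes, for illustration only; NOT the ALEPH or BFG fits.
toy_sigma_hat = lambda z: 1.0        # flat parton-level weight (arbitrary units)
toy_d_qgamma = lambda z: 0.01 / z    # toy fall-off of the fragmentation function

print(fragmentation_contribution(toy_sigma_hat, toy_d_qgamma))
```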
In our calculation, we use the ALEPH leading order parameterization [13] as default, and the BFG (type I) parameterization [21], evaluated for $\mu^2_{F,\gamma} = Q^2$, for comparison. The factorization scale $\mu^2_F$ for the parton distributions is $Q^2$ for the QQ subprocess. In the LL subprocess (which is the only subprocess included in [11]), the final state photon is radiated off the lepton. Consequently, the momentum of the final state lepton cannot be used to determine the invariant four-momentum transfer between the lepton and the quark, which is in this subprocess given by the momentum transfer to the quark line, $Q^2_{LL} = -(p_5 - p_1)^2$. This quantity is unconstrained by the kinematical cuts, and the squared matrix element for the LL subprocess contains an explicit $1/Q^2_{LL}$ pole. The track requirement, implemented through a cut on the outgoing quark rapidity, enforces a minimum $Q^2_{LL}$, thus avoiding a singularity in the subprocess cross section. Some care has to be taken in the choice of factorization scale $\mu^2_F$ in the LL subprocess. In a leading order parton model calculation, $\mu^2_F$ should ideally be taken to be the invariant four-momentum transfer to the quark, i.e. $Q^2_{LL}$ for the LL subprocess. Even applying the quark rapidity cut, $Q^2_{LL}$ can still reach values of order 1 GeV$^2$ and below, where the parton model description loses its meaning. Because of the cuts, this kinematical region yields only a small contribution to the cross section. To account for it in the parton model framework, we introduce a minimal factorization scale $\mu_{F,\mathrm{min}} = 1$ GeV, and choose for the LL subprocess $\mu_F = \max(\mu_{F,\mathrm{min}}, Q_{LL})$, and for the QL interference subprocess $\mu_F = \max(\mu_{F,\mathrm{min}}, (Q_{LL} + Q_{QQ})/2)$. This fixed factorization scale is an approximation to more elaborate procedures to extend the parton model to low virtualities [24], but sufficient in the present context. This procedure for the scale setting in the LL and QL subprocesses is similar to what is done in the related process of electroweak gauge boson production in electron-proton collisions [25]. The major difference to [25] is that the cross section for isolated photon production in DIS vanishes for $Q^2_{QQ,LL} \to 0$, while being non-vanishing for vector boson production. Consequently, in [25] the calculation of deep inelastic gauge boson production had to be supplemented by photoproduction of gauge bosons at $Q^2 = 0$, with a proper matching of both contributions at a low scale. This is not necessary in our case. For the numerical evaluation of the cross sections, we use the CTEQ6L leading order parameterization [26] of parton distributions. Using the ZEUS cuts and the ZEUS composition of the data sample at different energies and with electrons and positrons, we obtain a theoretical prediction for the isolated photon cross section in DIS of 5.39 pb, to be compared to the experimental value of $5.64 \pm 0.58\,\mathrm{(stat.)}\,^{+0.47}_{-0.72}\,\mathrm{(syst.)}$ pb. The total cross section is therefore well reproduced by our calculation.

TABLE I. Values for the isolated photon cross section in deep inelastic scattering using the cuts of the ZEUS analysis [5]. The theory prediction is the weighted average of electron and positron induced cross sections at different energies as analyzed by the experiment. By default, the ALEPH parameterization of the quark-to-photon fragmentation function [13] is used, with results obtained using the BFG parameterization [21] listed for comparison.
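Before turning to the breakdown of this result, the factorization-scale prescription described above can be condensed into a short sketch. The function name and interface are hypothetical and purely illustrative; scales are in GeV.

```python
def factorization_scale(subprocess, q_ll, q_qq, mu_f_min=1.0):
    """Factorization scale (in GeV) for the three subprocesses, following the
    prescription described above: Q for the QQ subprocess, while the LL and QL
    choices are protected against the low-virtuality region by mu_f_min = 1 GeV."""
    if subprocess == "QQ":
        return q_qq
    if subprocess == "LL":
        return max(mu_f_min, q_ll)
    if subprocess == "QL":
        return max(mu_f_min, 0.5 * (q_ll + q_qq))
    raise ValueError(f"unknown subprocess: {subprocess}")

print(factorization_scale("LL", q_ll=0.4, q_qq=7.0))  # -> 1.0 (floored at mu_f_min)
```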
We also computed the individual contributions to this cross section, which we list in Table I. It can be seen that the difference among the different beam energies and $e^+/e^-$ induced cross sections is about 6%, thus justifying their combination into a single data sample. By decomposing the observed cross section into the QQ, LL and QL contributions, we find that the QQ contribution yields only 53% of the cross section, although the experimental cuts were designed to enhance this contribution relative to the others. Especially, by requiring the final state lepton and the photon to be found in different parts of the detector, any small-angle radiation off the lepton is suppressed, thus leaving only the (kinematically disfavored) large-angle radiation in the LL contribution. The still substantial magnitude of the LL contribution can be understood by the larger magnitude of the electric charge of the lepton compared with the quark. As expected, the QL contribution is very small. Finally, we observe that using the BFG (type I) parameterization [21] for $D_{q\to\gamma}(z)$ instead of the ALEPH parameterization [13] enhances the theoretical prediction only insignificantly, by two per cent. To investigate the dependence of the isolated photon cross section on the event kinematics, the ZEUS collaboration also measured differential distributions in $\eta_\gamma$, $E_{T,\gamma}$ and $Q^2$. Comparing the shapes of these distributions with normalized predictions from HERWIG [9] and PYTHIA [10], it was found that none of these programs could describe all distributions: both reproduced only the shape of the $E_{T,\gamma}$-distribution correctly, HERWIG predicted a too soft $Q^2$-distribution, while PYTHIA yielded an incorrect $\eta_\gamma$-distribution. The approach suggested in [11], containing only the LL subprocess, was found to yield a reasonable description of the $E_{T,\gamma}$-distribution, but failed on the $\eta_\gamma$-distribution [27]. Using our leading order calculation, we obtain differential cross sections in $\eta_\gamma$, $E_{T,\gamma}$ and $Q^2$, which are shown in Figures 1, 2 and 3. Comparison with the ZEUS data shows agreement for all three distributions in both shape and normalization. It should be noted that ZEUS does not provide a differential distribution in $Q^2$, but just normalized event counts binned in this variable. Especially the $\eta_\gamma$-distribution, Figure 1, gives important insight into the discrepancies observed between the data and predictions from PYTHIA and HERWIG. In this distribution, the shapes of the QQ and LL contributions are considerably different. Comparing with the distributions obtained in [5] from the event generator programs, it can be seen that the shape of the QQ contribution resembles the PYTHIA prediction, while the shape of the LL contribution resembles the HERWIG prediction. This observation suggests that each program accounts for only one of the subprocesses appropriately: PYTHIA only for QQ and HERWIG only for LL. The lack of photon radiation off quarks in HERWIG was already observed by the H1 collaboration [28] in the study of photoproduction of isolated photons. The importance of both subprocesses for the shape of the $\eta_\gamma$-distribution shows clearly that the isolated photon cross section in DIS cannot be described by the LL subprocess [11] only.
In this letter, we investigated the production of isolated photons in deep inelastic scattering in view of a recent ZEUS measurement of this observable. We found that photon radiation off quarks and leptons contributes about equal amounts to this observable, although radiation off leptons is restricted to large angles by the kinematical cuts. Since the photon isolation criterion admits some amount of hadronic activity around the photon direction, small angle radiation off quarks is kinematically allowed, and inherently contains a contribution from the non-perturbative quark-to-photon fragmentation function. Both these effects (large-angle radiation and photon fragmentation) are included in our fixed-order parton model calculation, while they are usually not accounted for in standard event generator programs. While the ZEUS collaboration could not describe their data with event generator predictions, we found that our calculation is in very good agreement with the ZEUS data both in normalization and in shape.
2017-04-06T18:01:16.704Z
2006-01-10T00:00:00.000
{ "year": 2006, "sha1": "15962193be7428dcde8581b27cf15375d1f63c6d", "oa_license": null, "oa_url": "https://arxiv.org/pdf/hep-ph/0601073", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "15962193be7428dcde8581b27cf15375d1f63c6d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
51725015
pes2o/s2orc
v3-fos-license
Esophageal Retained Lithium Battery in Children Younger than 6 Years Objectives Disk battery esophageal retention in children younger than 6 years represents an increasing endoscopic emergency, associated with a relevant risk of life-threatening late complications. Surgical removal after a failed endoscopic approach is rarely reported in the literature. We describe our experience in this scenario. Methods Two asymptomatic female patients aged 26 and 29 months presented within 4 hours after a witnessed ingestion of a 2-cm, 3-V lithium battery (CR2032) retained in the cervical esophagus. Both patients underwent a prolonged unsuccessful emergent endoscopic removal with a flexible instrument performed by an adult gastroenterologist. Both batteries, fused with the esophageal wall, were extracted through a longitudinal left cervical esophagotomy combined with minimal resection of necrotic tissues, and the esophagus was repaired over a 12F feeding tube. Results Patients were extubated after 12 and 72 hours, respectively. A contrast study was performed after 20 and 13 days, respectively, before resuming oral feeding. At endoscopy, the first patient developed a 3-cm-long severe esophageal stenosis (35th day), followed by an asymptomatic tracheoesophageal fistula (60th day), which was conservatively treated. After spontaneous resolution of the tracheoesophageal fistula, the esophageal stenosis progressed, partially responsive to esophageal stenting. Short esophagectomy is under evaluation. The second patient developed an asymptomatic limited stenosis, not requiring dilatation. Conclusions The emergent management of lithium battery ingestion needs a structured, timely multidisciplinary approach in the emergency department, an experienced pediatric endoscopist, and the simultaneous engagement of pediatric surgical expertise, even in patients who do not show bleeding, to reduce the esophageal exposure time to the high-voltage current released by batteries, which represents the main factor conditioning tissue damage and prognosis. Lithium battery (LB) esophageal retention in children younger than 6 years represents an increasing social and endoscopic emergency, associated with a relevant risk of life-threatening complications. [1][2][3] To date, 59 deaths in children younger than 6 years who underwent battery ingestion have been reported worldwide since 1977, mainly related to LBs. 4 The causal mechanism of death in at least 36 children was massive bleeding from an aortoesophageal fistula 5 or fistulae with other major vessels of the mediastinum, 6 followed by esophageal perforation (10 cases) or tracheoesophageal fistula (TEF; 9 cases). In the same period, 231 cases with severe esophageal or airway injury have been reported. 7 The severity of injury depends on battery type, size, voltage, location, and duration of close contact with the mucosa. The most important lesion mechanism consists of the electrical generation of caustic hydroxide ions at the negative pole, in proportion to the battery voltage. Tanaka et al, 8 in a canine animal model, demonstrated that sodium hydroxide is produced much more rapidly with an LB (3 V) than with other button cells because the amount of alkali produced in tissue is proportional to the electric current produced, and the same amount of current is produced more rapidly with the higher-voltage lithium cell. The recent increased use of the more powerful LBs has increased the risk of significant tissue damage, which can occur within just 2 hours of their lodgment in the esophagus in small children.
9 In this dramatic scenario, endoscopic removal of an LB retained in the esophagus of small children represents a frequently successful endoscopic emergency to be performed as soon as possible, under safety conditions. The role of surgery after ingestion of an LB is mainly reported as an emergent attempt in patients presenting with massive bleeding [10][11][12] or for the treatment of complications such as esophageal perforation, 13 TEF, 14,15 or esophageal stenosis, whereas successful surgical removal of batteries after a failed endoscopic approach is not reported in the literature. We report 2 cases of surgical approach to LBs retained in the esophagus after a failed prolonged attempt of endoscopic removal, describing postoperative management and outcomes. METHODS The medical records of 2 consecutive female patients aged 26 and 29 months, respectively, who presented to our department from October 2015 to October 2016 within 4 hours after a witnessed ingestion of a 2-cm, 3-V LB (CR2032) retained in the cervical esophagus were reviewed. Both patients underwent a prolonged unsuccessful emergent endoscopic removal with a flexible instrument, followed by a successful extraction through a longitudinal left cervical esophagotomy combined with minimal resection of necrotic tissues. The postoperative management and outcomes are reported. Case 1 A 26-month-old female patient was sent to our department from a surrounding hospital 2½ hours after a witnessed ingestion of a 2-cm, 3-V LB (CR2032), which had been replaced in the television remote controller by her father 4 days earlier. The patient underwent a cervicothoracic x-ray assessment, which revealed the presence of a large button battery with a "halo sign" retained in the cervical esophagus. Before sending the patient to our hospital, where pediatric endoscopic and surgical expertise is present, the doctors of the accepting emergency department (ED) waited for the blood tests to return from the laboratory. A total of 4 hours passed between the ingestion of the LB and the delivery of the baby. In our hospital during the night, with the baby intubated, an emergent endoscopy with an 8-mm flexible instrument was performed by the adult gastroenterologist on call, in the operating room, in the presence of the pediatric surgeon on duty. An unsuccessful endoscopic removal of the LB using different endoscopic retrieval instruments was attempted for 1 hour, but the battery was literally fused with the necrotic esophageal wall at its negative pole (Fig. 1A); thus, any further attempt was considered at risk. At this time, using a left lower cervical approach, a 3-cm-long longitudinal esophagotomy was performed where transmural full-thickness esophageal necrosis was found, at the level of the left common carotid artery, which was adherent to necrotic tissues but apparently not involved (Fig. 1B). After retrieving the intact battery, the necrotic esophageal wall was partially removed and the contralateral esophageal mucosa was inspected, evidencing 360-degree mucosal burns. The esophageal wall was repaired over a 12F polyurethane feeding tube and a Penrose drain was left in the periesophageal space. The patient was transferred, still intubated, to the pediatric intensive care unit, where therapy with proton-pump inhibitors and antibiotics was started. The patient was extubated the afternoon after surgery, after a contrast-enhanced computed tomographic scan demonstrated the absence of evolution of the vascular involvement.
A swallow contrast study was performed after 20 days, before considering resuming oral feeding, returning the absence of contrast extravasation but evidencing an esophageal stenosis at the level of the esophagotomy. Further endoscopic dilatations with dilators of increasing size followed, until the stricture could accommodate a 16F feeding tube. At the time of writing, the patient has a stricture 0.5 cm in length and she is able to take a creamy diet, supplemented by gastric feeding through the tube. Because of the level of the stricture, just below the pharyngoesophageal junction, further attempts of conservative stricture management have been scheduled, avoiding surgery. Case 2 A 24-month-old girl came to our hospital 2 hours after a witnessed ingestion of a foreign body taken from a basket of used batteries. Thoracoabdominal x-rays evidenced the presence of a lower cervical foreign body with a "halo sign," compatible with a 2-cm, 3-V LB (CR2032) (Fig. 2A). Within 1 hour, the patient underwent an unsuccessful attempt of endoscopic removal with a flexible instrument, performed by the adult gastroenterologist on call, in the operating room under general anesthesia. At this time, with a left lower cervical approach, a 2-cm-long longitudinal esophagotomy was performed on the esophageal wall, which was not transmurally necrotic, at the level of the left common carotid artery, which was apparently not involved (Fig. 2B). After retrieving the intact battery, the esophageal mucosa was inspected, evidencing mucosal edema. The esophageal wall was repaired over a 12F polyurethane feeding tube and a Penrose drain was left in the periesophageal space. The patient was transferred, still intubated, to the pediatric intensive care unit, where therapy with proton-pump inhibitors and antibiotics was started. On the first postoperative day, a contrast-enhanced computed tomographic scan excluded any progression of tissue damage. The patient was extubated after 72 hours. A contrast study, performed 13 days after surgery (Fig. 2C), excluded esophageal perforation or TEF, allowing for a gradual introduction of liquid oral feeding. Endoscopic controls were performed at 1 and 4 months postoperatively, returning an asymptomatic limited stenosis, not requiring dilatation (Fig. 2D). One year after surgery, the patient is asymptomatic, on a full diet. DISCUSSION In most cases, battery ingestion results in no or only minor sequelae for the health of affected children. However, despite a moderate decrease in the overall rate of ingestion in the last decade (from 12 to <10.5 cases per million population), in the same period, a linear increase of major or fatal outcomes has been registered (from 0.4% to 1% of patients). 17 These data derive from the increased use of large-diameter button batteries, especially 20-mm diameter high-voltage LBs, which have been demonstrated to severely damage the esophageal wall within a very limited time of contact with the mucosa (<2 hours from ingestion). If we look at monocentric 11 or large epidemiological series, 4 the percentage of fatal cases is around 20% to 25% of the entire cohort of patients at risk for severe effects. These data indicate that an essential effort has to be made to rapidly identify and adequately treat the limited number of children at high risk of esophageal, vascular, or airway damage. Children younger than 6 years who ingested high-voltage button batteries of at least 20 mm are particularly at risk, mainly in case of unwitnessed ingestion with battery permanence in the esophagus lasting more than 2 hours.
18,19 Our 2 patients presented with many of the previously cited risk factors: small age, high-voltage large-diameter LB, and prolonged contact time with esophageal mucosa. According with the European Society of Gastrointestinal Endoscopy-European Society for Paediatric Gastroenterology Hepatology and Nutrition guidelines, 20 emergent removal (<2 hours) of the retained disk battery (DB) is the goal of the acute phase after the radiologic highlighting of the battery position. This goal can be accomplished through different methods, such as endoscopic removal, balloon extraction with fluoroscopy, and esophageal bougienage, with the endoscopic approach to be preferred for its safety and completeness of derivable information, being the only one method that, under direct vision, can return the extent of mucosal damage. Whenever possible, the intervention of an endoscopist with adequate pediatric expertise coming from different subspecialties (pediatric surgeon, otolaryngologist, or gastroenterologist, depending on the center policy) should be obtained. Regarding the type of endoscopic instrument to prefer, the pediatric literature do not show any significant differences among rigid and flexible endoscopies in terms of the efficacy of esophageal foreign body retrieval in children, 21,22 but any of the articles available in literature refers only to the specific situation of DB esophageal retention. In our 2 cases, a prolonged attempt of endoscopic removal was performed by experienced adult gastroenterologists on call with flexible instruments that failed to detach the DB from the esophageal mucosa. On the base of subsequent surgical finding of batteries fused with esophageal wall, we could only speculate that, in these specific cases, a rigid endoscope would not have added any advantage in terms of effectiveness; rather, it would have increased the risk of esophageal perforation. Success rate of endoscopic approach has been reported as high as 98% to 100% of cases of foreign bodies retained in the esophagus of small children, 22 with very few cases requiring a surgical removal, 21 but to the best of our knowledge, this event has never been described for DB presenting without massive or sentinel bleeding, such as in our 2 cases. Some authors, on the basis of the high risk of mortality, agree to advocate a contemporary emergent cardiothoracovascular surgical approach performed by the general (pediatric) surgeon or cardiovascular (pediatric) surgeon, possibly combined with intraoperative endoscopy, in cases of documented DB ingestion retained in the esophagus presenting at the ED with clinical signs of vascular fistula (sentinel or massive bleeding). 23, 24 We agree not only that, under these life-threatening circumstances, the presence of a surgeon in the management protocol is recommended, but also that an emergent surgical approach is a priority over the endoscopic removal of the retained DB. The protocols used by the aforementioned authors differ when patients present at the ED without a sign of vascular fistula, such as in our 2 cases. Brumbaugh et al 23 do not take into consideration a surgical presence during the endoscopic maneuver, unless for immediate rigid esophagoscopy where significant esophageal edema makes flexible endoscopy battery removal impossible. 
Barabino et al,24 in case of DB incarcerated in the esophageal wall with endoscopic finding of severe and deep ulceration of the esophageal wall, suggest a surgical cervical or thoracotomic approach combined with the endoscopic removal of the DB, thus preventing uncontrollable fatal bleeding. In the 2 reported patients from our experience, both presenting without a sign of bleeding, the presence of pediatric surgeons assisting live the endoscopic maneuver made a direct multidisciplinary evaluation of the mucosal damage possible, suggesting in a shared way not to proceed further with endoscopic attempt and to immediately convert to a surgical successful approach. The latter certainly resulted in a partial resection of necrotic esophageal wall which ultimately contributed, together with the tissue damage induced by the LB, to the following esophageal stenosis. However, the decision to proceed surgically was at that point the only effective method to prevent the progression of necrosis to the surrounding vascular structures. We agree with Barabino et al that, under these circumstances at high risk of severe short-term or midterm complications, a surgical presence in the operating room during endoscopy is essential to offer the child the maximal chance of safe management. What is stated is particularly sensible in children at major risk (age <6 years, large-diameter high-voltage DB, battery persistence in the esophagus lasting >2 hours), for whom a specific management protocol is proposed (Fig. 3). Epidemiological data and our experience put the spotlight on 2 essential aspects of the urgent management of these cases in the ED: 1. The absolute need to promptly send the patient to the operating room to perform the endoscopic removal attempt, to be carried out as soon as possible. For this purpose, Russell and colleagues 25 clearly demonstrated that, in their level I trauma center, the activation of an immediate full trauma team response (trauma I triage protocol) is useful to reduce the risk of complications in patients at risk. A trauma I triage protocol entails immediate notification of the pediatric trauma team, the anesthesia team, the radiology technician, and the operating room charge nurse of the arrival of trauma patients to the ED. This allows for immediate response of the pediatric trauma surgeon and/or endoscopist and immediate confirmation of the presence and site of battery by imaging. Availability of an operating room for potential operative intervention is also secured. 2. The opportunity to guarantee the patient the assistance of a multidisciplinary team that can manage urgently both the endoscopic priority and any surgical requirement. We are well aware of the limitations imposed by an approach requiring the simultaneous presence of endoscopic and surgical expertise, particularly in trauma centers other than level I. However, the duty to prevent severe damages in selected pediatric cases and the possibility of a surgical procedure after endoscopy should be well known. This knowledge should direct the protocols of ED when facing a child with documented DB ingestion belonging to categories at high risk to rapidly address the patient to the more adequate level I trauma referral center, previously identified, reachable within a reasonable time frame of no more than 1 hour, with which a transfer agreement was found. 
Any attempt at endoscopic or fluoroscopic removal of large DB retained in the cervical esophagus should absolutely be avoided if there is no adequate pediatric surgical expertise in the hospital, also in case without sentinel or massive bleeding. In conclusion, the emergent management of LB ingestion needs a structured timely multidisciplinary approach beginning from the ED, an endoscopist with pediatric experience, and a simultaneous engagement of pediatric surgical expertise, to reduce esophageal exposure time to high-voltage current released by batteries, which represents the main factor conditioning tissue damage and prognosis.
2018-08-06T13:47:41.938Z
2018-07-25T00:00:00.000
{ "year": 2018, "sha1": "c8f679f062c6f9af2b6d5498559b0fb4ab8497a8", "oa_license": "CCBYNCND", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8162217", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "fe5401af655240597e2d5c48671b00fac887732c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237531218
pes2o/s2orc
v3-fos-license
Neural Machine Translation using Recurrent Neural Network In this era of globalization, it is quite likely to come across people or community who do not share the same language for communication as us. To acknowledge the problems caused by this, we have machine translation systems being developed. Developers of several reputed organizations like Google LLC, have been working to bring algorithms to support machine translations using machine learning algorithms like Artificial Neural Network (ANN) in order to facilitate machine translation. Several Neural Machine Translations have been developed in this regard, but Recurrent Neural Network (RNN), on the other hand, has not grown much in this field. In our work, we have tried to bring RNN in the field of machine translations, in order to acknowledge the benefits of RNN over ANN. The results show how RNN is able to perform machine translations with proper accuracy. languagehas for quite some time been a fantasy of mankind. Most populations around the globe find acquiring and understanding foreign languages highly difficult because of the obvious geographical reasons, which is why speech to speech translation system in today's world is a great boon. Speech translation is basically a way to convert the language spoken by a certain individual into another preferable language. Google has its own algorithm that uses Artificial Neural Network (ANN) in order to facilitate machine translation using Neural Machine Translation (NMT), called GNMT (Google NMT). Now, Recursive Neural Network (RNN) can model sequence of data, like time series, so that each sample can be assumed to be dependent on the previous ones, unlike Artificial Neural Network. Also, RNN can even be used with convolutional layers to extend the effective pixel neighborhood. Thus, RNN is preferred comparatively more these days. There exists a great build-up of materials that is needed to be interpreted and analyzed for organizations, instruction, trade, the travel industry and so forth. Innovative help similar to machines' help for this is required. Neural Network is a kind of artificial intelligence that deals with imitating the way a human brain works. A neural network functions after establishing connections between the elements that are involved in the processing. This connection acts as the neurons in living animals. These connections and imitation of animal neuron are particularly effective for predicting all the events when the networks have quite vast database of prior examples or classification, to draw on. Recursive Neural Network is a kind of neural network that is designed and deployed by applying the same set of weights in a recursive manner over a structured input, to come up with a structured prediction on a random input structure or come up with a scalar prediction on the structured input. This is carried out by letting the machine traverse the given structure in a systematic topological order, where the first step comes first and second step comes later, like in a directed graph if there exists an edge xy then x comes before y in the ordering. Artificial Neural Networks, also called connectionist systems, are the computing systems which can perform tasks by simply learning from other examples, most of the time even without programming with the specific steps for the tasks. It is based on the collection of connected nodes which try to replicate animal neurons in the biological brain. 
The artificial neurons are capable of receiving signals from a source or other artificial neurons and then process it, and transmit the signal to the neurons connected to it. Recursive Neural Network (RNN) can model sequence of data, like time series, so that each sample can be assumed to be dependent on the previous ones, unlike Artificial Neural Network (ANN). Also, RNN can even be used with convolutional layers to extend the effective pixel neighborhood. There is a constant backpropagation or backtracking within memory cells in RNN, so it possesses the capability to bridge very long lags in time. Recurrent neural networks are capable of handling noises as well as continuous values. [2] Unlike finite state automata or hidden Markov models, recurrent neural networks require no a priori choice of a finite number of states, as it has an ability to deal with unlimited number of states. This project aims to create a system that is able to facilitate speech translation among various languages using a three-stage model. These three main modules include Speech Recognition, Machine Translation and Speech Synthesis use different tools and concepts for their specific purpose. Google APIs have been used to convert text to speech and speech to text, while the translation is carried out using Recurrent Neural Network (RNN). The use of RNN for helping in translation is elaborated and further the unique part of speech synthesis is explained in detail. In addition, system architecture has been provided along with the services and communication protocols for connecting the client to the main speech to speech translation servers. The research aims to deal with the underlying pipelined engineering of automated speech acknowledgment, machine translation framework and speech synthesis or content to speech which principally depends on lexical data and disregarding the other rich data which is available in speech and spoken talk, for example, commotion and human articulations. The foundation for the system is that, it focuses on the rich context free of the actual dictated phrases, along with having an understanding of and corresponding with variety of cultural people making the data transfer better, and more efficiency in communication. This project aims to help in understanding the need for avoiding dividing the task of speech to speech translation into various stages, and also in increasing of the inference speed whilst cultivating the speech of the individual with less errors in recognition and translation. II. RELATED WORK There have been several researches in the field of Recurrent Neural Networks and Machine Translations. Research workers have come up with several methods and approaches in to bring forth the techniques that can really be helpful for machine translations. Several parameters have also been checked by them which can be useful for future development of machine translation systems. The research by Shadiev et. al. [1] discusses the effectiveness of STR application on students learning performance, during and after the collaborative learning activities on the online synchronous cyber classrooms. The application has been implemented on the open university students, using STR. The usage of STR was implemented among students with cognitive environment, collaborative academic activities, non-native speakers and students. Dataset used in this paper is Windows Speech Recognition in the Microsoft Operating System, IBM ViaVoice software. 
The paper has used its dataset to be the Windows Speech Recognition in the Microsoft Operating System IBM ViaVoice software. Most experimental students perceived that STR was useful for individual presentations and for essays writing. Students who were exposed to the STR were willing to use STR system for learning in the future. One of the most common concerns reported in relation to online learning literature is the poor audio quality due to restricted internet bandwidth availability. The above-mentioned problems can be solved by adopting some assistive media-to-text recognition technologies, such as writing-to-text, image-to-text, diagram-to-text, text-to-speech, speech-to-text, and handwriting-to-text. The work of Soltau et. al. [2] deals with usage of deep bi-directional LSTM RNNs with CTC loss. Neural Speech Recognizer model has a deep LSTM RNN architecture built by stacking multiple LSTM layers. Since the bidirectional RNN models have better accuracy and their application is offline speech recognition on the system, they have used two LSTM layers at each depth -one operating in the forward and another operating in the backward direction in time over the input sequence. Vocabulary corpus has been used as dataset here, that has 296 videos from 13 categories, with each video averaging 5 minutes in length. The total test set duration is roughly 25 hours and 250,000 words. The output from the CTC layer, essentially making the CTC word model an end-to-end all-neural speech recognition model. The bi-directional LSTM CTC word models are capable of accurate speech recognition with no language model or decoding involved. The error rate calculation disadvantages the CTC spoken word model as the references are in written domain, but the output of the model is in spoken domain, creating artificial errors like "three" vs "3". The entire speech recognizer becomes a single neural network. According to Xiong et. al. [3], the use of various convolutional and LSTM acoustic model architectures, combined with a novel spatial smoothing method and lattice-free MMI acoustic training, multiple recurrent neural network language modeling approaches, and a systematic use of system combination. They have used CNN, LSTM, Spatial smoothing, speaker adaptive modelling, lattice-free sequence training. The 4-gram language model for decoding was trained on the available CTS transcripts from the DARPA EARS program: Switchboard (3M words), BBN Switchboard-2 transcripts (850k), Fisher (21M), English CallHome (200k), and the University of Washington conversational Web corpus (191M). The only exception is the VGG+ResNet system, which combines acoustic senone posteriors from the VGG and ResNet networks. While this yields our single best acoustic model, only the individual VGG and ResNet models are used in the overall system combination. International Journal of Engineering and Advanced Technology (IJEAT) ISSN: 2249 -8958, Volume-9 Issue-4, April 2020 In the included model, N-best output from all systems are combined confusion network construction generates new possible hypotheses not contained in the original N-best lists the machine errors are substantially the same as human ones, with one large exception: confusions between backchannel words and hesitations. The paper presented by Shang et. al. 
[4] discusses a general encoder-decoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The dataset consists of a corpus of roughly 4.4 million pairs of conversations from Weibo. They use an encoder-decoder based neural network to generate a response in STC. They have also empirically verified that the proposed method can yield performance better than traditional retrieval-based and translation-based methods. Widely accepted evaluation methods in translation do not apply. It is also not reasonable to evaluate with Perplexity, a generally used measurement in statistical language modeling, because the naturalness of response and the relatedness to post cannot be well evaluated. Natural language conversation is one of the most challenging artificial intelligence problems, which involves language understanding, reasoning, and the utilization of common-sense knowledge. Automatic evaluation of response generation is still an open problem. Similar to the previous works, the research work that has been carried out by Battenberg et. al. [5] discusses empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. Simplifying speech recognition pipeline so that decoding can be expressed purely as neural network operations. On simplifying speech recognition pipeline so that decoding can be expressed purely as neural network operations. It has been done on Hub5'00 dataset. The choice of the encoder plays a crucial role in optimizing the performance of three models: CTC, RNN-transducer and attention-based seq-seq model. In attempt to train RNN-Transducer models with the streaming constraint, and in reducing computation in encoder layers, it is found that CTC and attention models still have strengths that we aim to leverage in the future work with RNN-Transducers. In the work of Bahdanau et. al. [6] they have proposed two types of models. The first one is an RNN Encoder-Decoder and another is RNNsearch. The general model proposed for neural machine translation often belongs to a family of encoder-decoders and encode a source sentence into a fixed-length vector from which a decoder generates a translation. Dataset used in this project is Europarl (61M words), news commentary (5.5M), UN (421M) and two crawled corpora of 90M and 272.5M words respectively, totaling 850M words. In this paper they have two types of models. The first one is an RNN Encoder-Decoder and another is RNNsearch. The experiment revealed that the proposed RNNsearch outperforms the conventional encoder-decoder model (RNNencdec) significantly, regardless of the sentence length and that it is much more robust to the length of a source sentence. One of challenges left for the future is to better handle unknown, or rare words. This will be required for the model to be more widely used and to match the performance of current state-of-the-art machine translation systems in all contexts. Luong et. al. [7] in their work, have developed a system that examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. They demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. 
Dataset used in this is WMT'14 training data consisting of 4.5M sentences pairs (116M English words, 110M German words). Their various attention-based models are classified into two broad categories, global and local. These classes differ in terms of whether the "attention" is placed on all source positions or on only a few source positions. For the English to German translation direction, our ensemble model has established new state-of-the-art results for both WMT'14 and WMT'15, outperforming existing best systems, backed by NMT models and n-gram LM re-rankers, by more than 1.0 BLEU Some functions can be compared on various alignment functions and shed light on which functions are best for which attentional models. It has been discussed by Cho et. al. [8], that the encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. They show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. The project has been worked out on the dataset Europarl (61M words), news commentary (5.5M), UN (421M), and two crawled corpora of 90M and 780M words respectively. The contribution by the RNN Encoder-Decoder is rather orthogonal to the existing approach of using neural networks in the SMT system, so that we can improve further the performance by using, for instance, the RNN Encoder-Decoder and the neural net language model together. Apply the proposed architecture to other applications such as speech transcription. On the contrary, the work of Cho et. al. [9] shows that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. The project is based on the widely used and implemented dataset, which has been used for testing several models over the years, called Europarl (61M words), news commentary (5.5M), UN (421M) and two crawled corpora of 90M and 780M words respectively. A binary convolutional neural network whose weights are recursively applied to the input sequence until it outputs a single fixed-length vector. The performance of the neural machine translation suffers significantly from the length of sentences. However, both models are able to generate correct translations very well. Need to explore different neural architectures, especially for the decoder. Despite the radical difference in the architecture between RNN and grConv which were used as an encoder, both models suffer from the curse of sentence length. The research carried out by Sennrich et. al. [10], discusses the suitability of different word segmentation techniques, including simple character n -gram models and a segmentation based on the byte pair encoding compression algorithm. Neural Machine Translation using Recurrent Neural Network The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks. The dataset that has been used in this project is WMT 2015 set consists of 4.2 million sentence pairs, or approx. 100 million tokens. The neural machine translation systems are capable of open-vocabulary translation by representing rare and unseen words as a sequence of subword units. There is further potential in bilingually informed segmentation algorithms to create more align-able subword units, although the segmentation algorithm cannot rely on the target text at runtime. III. 
PROPOSED SYSTEM Artificial Neural Networks have been widely accepted as the basis of Neural Machine Translation these days, whereas Recurrent Neural Networks are not yet commonly integrated into neural machine translation systems. Although ANN comes with several benefits, including its ability to generalize and to learn and model non-linear and complex relationships, RNN has some clear advantages over ANN: unlike ANN, RNN can model sequences of data, for instance time series, which supports the assumption that each sample is dependent on the previous samples. There are several Android apps and web apps that translate sentences and paragraphs into different languages, but none of them speaks on your behalf in a different language. Even Google Translator simply takes text and translates it into text. Our project will listen to what you say, translate it into different languages, and speak for you when required. To do this we use the gTTS (Google Text-to-Speech, a Python library and CLI tool) API and the Google Speech Recognition API along with our RNN model for translation (into different languages). Also, most Web, Android, or iOS applications like Google Translator use Neural Machine Translators based on ANN (Artificial Neural Network) for the translation process, but we will be using RNN (Recurrent Neural Network) in our project. The benefit of using RNN in our project is that it allows us to work on sequential data, i.e., words and sentences in a text that depend on the previous context as well. This cannot be achieved using ANN. Fig. 1. Flowchart of the system. Fig. 1 shows the flow between the several modules implemented in the proposed system. A. Dataset The system translates from English to French, so the dataset that has been used contains English and French sentences. The dataset is taken from an open GitHub source under the username 'susanli2016', in the data folder of the project titled 'NLP-with-Python', in the files 'small_vocab_en' and 'small_vocab_fr'. Statistically, the English corpus consists of 1,823,250 English words, of which 227 are unique English words. In the French corpus, there are 1,961,295 French words, of which 355 are unique French words. B. Modules The system is broadly divided into three distinct modules: one where the input speech is recognized and interpreted, a Machine Translation module, which takes the output of the first module as input and performs the main translation, and a final module where new speech is synthesized from the output of the Translation module. The first module is Speech Recognition. This module deals with the conversion of spoken data into the corresponding text. The text generated after interpretation of the spoken data becomes the output of this module and is fed to the next module, which translates the text from the source language into the target language. The working of this module is carried out using the Google Speech Recognition API. Python libraries such as the widely used SpeechRecognition library and the PyAudio library have been installed for this module. PyAudio is a Python support library that provides access to the microphone.
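To make the input/output plumbing of this module concrete, the following is a minimal sketch, not taken from the paper, of how the SpeechRecognition and PyAudio libraries can be wired to the Google Web Speech API, with gTTS handling the synthesis step mentioned above. The function names, language codes, and the translate() placeholder (which stands in for the RNN model described in the following sections) are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed names and language codes): speech in, speech out.
import speech_recognition as sr
from gtts import gTTS

def recognize_from_microphone() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:                  # PyAudio provides microphone access
        recognizer.adjust_for_ambient_noise(source)  # suppress background noise
        audio = recognizer.listen(source)
    # Google Web Speech API; requires an internet connection
    return recognizer.recognize_google(audio, language="en-US")

def translate(text: str) -> str:
    # Placeholder for the RNN translation model described later in the paper.
    return text

def speak(text: str, lang: str = "fr", out_path: str = "translation.mp3") -> None:
    gTTS(text=text, lang=lang).save(out_path)        # Google Text-to-Speech

if __name__ == "__main__":
    english_text = recognize_from_microphone()
    speak(translate(english_text))
```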
The SpeechRecognition library has several inbuilt functions which are used to identify background noises, such as whispering, people talking in the background, loud footsteps, or construction noises, and to cancel those ambient noises in order to focus on the foreground data, which is basically the user-recorded speech. Finally, the filtered speech is converted into text. The second module of the system is the Machine Translation module. This module deals with the translation of the text generated by the previous module into the target language. Before feeding the Neural Networks with the data output from the speech recognition module, the obtained data is preprocessed. For preprocessing, the first task carried out is tokenization of the words for unique identification within a sentence. After tokenization, the sentences are padded so that all sentences become of the same length. The Machine Translation module of the system is entirely based on the Recurrent Neural Network, which is used for the translation itself. The output of the second module is taken as input for the third module, which deals with the synthesis of sentences. The last module is the Speech Synthesis module. This is the final module, where the text, which has been translated using the Recurrent Neural Network models after interpreting the speech in the source language, is converted back into speech in the destination language. This is the part of the system where the machine speaks, after translation has been carried out from the speech in the source language to the text generated in the destination language. The Google Text to Speech API has been used to implement this module of the system. The API provided by Google Text to Speech supports the conversion of a text in a given language into speech in the same language. V. EXPERIMENTAL RESULTS The system has been built with several RNN models, namely the basic Recurrent Neural Network model, the Recurrent Neural Network with embedding, the bidirectional Recurrent Neural Network, and the Recurrent Neural Network with embedded encoder and decoder. For visualizing the tabulated data, the Python library matplotlib has been used for multiple line plots. Matplotlib is an open source Python library which is widely used for data visualization in order to get better insight into and analysis of the data. The system has been built on Google Colab, which provides cloud services to execute code, along with hardware support including GPUs. Table I shows a comparison of the accuracy percentages achieved at every epoch for all of the models used in the system. Fig. 2. Plot of accuracy vs. epoch number for models. The plot in Fig. 2 shows accuracy versus epoch number, illustrating how the accuracy increases with every iteration for the four models. From the plot it can be clearly seen that the Recurrent Neural Network with embedding has the highest accuracy as the iterations increase, the bidirectional Recurrent Neural Network gives a good amount of accuracy over time and is more or less constant in nature, and in the same way, the basic Recurrent Neural Network and the Recurrent Neural Network with encoder and decoder also give roughly constant accuracy, but comparatively lower than the other two models. The final system combines the top two models giving the highest accuracy, in order to boost the overall accuracy of the system.
Fig. 3 shows that for the new RNN model created by merging two of the previous RNN models, namely the bidirectional RNN and the RNN with embedding, the accuracy is almost one hundred percent. VI. CONCLUSION The system built combines the Recurrent Neural Network models giving the highest accuracies within ten epochs. It has been observed that taking multiple models at a time resulted in a better model giving higher accuracy for the system as a whole. It has also been observed that, as individual models, the Recurrent Neural Network with embedding gives the maximum accuracy with further iterations, followed by the bidirectional Recurrent Neural Network. The least performing individual models are the basic Recurrent Neural Network and the Recurrent Neural Network with encoder and decoder. The models must be chosen carefully, and when combining them, care should be taken that the system does not run into overfitting issues and other overheads, as these will lead to worse accuracy and an inefficient system for machine translation using neural networks. According to the results observed from the experiments, the combined bidirectional Recurrent Neural Network with embedding gives a high accuracy compared with the individual Recurrent Neural Network models. Single Recurrent Neural Network models, such as the basic RNN, the RNN with embedding alone, the bidirectional RNN alone, and the RNN with encoder and decoder, give comparatively lower accuracy. The work can be extended to a larger dataset consisting of an even bigger text corpus, so that the system becomes suitable for translating every possible sentence instead of just a few sentences. Further, the possible combinations of models can be broadened to other RNN models as well, considering all possible combinations among them, so that the system can be made even more accurate.
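As a concrete illustration of the pipeline evaluated above, the snippet below sketches, in Keras, the preprocessing (tokenization and padding) and a combined embedding plus bidirectional GRU sequence model of the kind the experiments favor. It is not the authors' code: the toy sentence pair, the layer sizes, the optimizer, and the other hyperparameters are assumptions.

```python
# Minimal sketch with assumed hyperparameters; not the authors' implementation.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, GRU, TimeDistributed, Dense

def tokenize(sentences):
    tok = Tokenizer()
    tok.fit_on_texts(sentences)
    return tok.texts_to_sequences(sentences), tok

# Toy stand-ins for the small_vocab_en / small_vocab_fr corpora
english = ["new jersey is sometimes quiet during autumn"]
french = ["new jersey est parfois calme pendant l automne"]

en_seq, en_tok = tokenize(english)
fr_seq, fr_tok = tokenize(french)
max_len = max(len(s) for s in fr_seq)
X = pad_sequences(en_seq, maxlen=max_len, padding="post")
Y = pad_sequences(fr_seq, maxlen=max_len, padding="post")
Y = Y.reshape(*Y.shape, 1)  # one target token id per time step

model = Sequential([
    Embedding(input_dim=len(en_tok.word_index) + 1, output_dim=128,
              input_length=max_len),                      # word embedding
    Bidirectional(GRU(128, return_sequences=True)),       # bidirectional recurrence
    TimeDistributed(Dense(len(fr_tok.word_index) + 1, activation="softmax")),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(X, Y, batch_size=1, epochs=10)                  # ten epochs, as in the results
```

Padding the English and French sequences to the same length lets a single softmax over the French vocabulary be applied at every time step, which is the simple sequence-labelling formulation that the basic models above lend themselves to.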
2020-05-21T00:15:30.115Z
2020-04-10T00:00:00.000
{ "year": 2020, "sha1": "18993aef0650694aaee4be608dac786a7fb829af", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijeat.d7637.049420", "oa_status": "GOLD", "pdf_src": "ElsevierPush", "pdf_hash": "409cac779d77e255a884225961f51eb7387e5c60", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
259064439
pes2o/s2orc
v3-fos-license
A QTL on chromosome IV explains a natural variation of QR.pap final position in Caenorhabditis elegans In Caenorhabditis elegans , the QR neuroblast and its progeny migrate from the posterior to the anterior part of the animal during the L1 stage. We previously showed that the final position of QR.pa daughters varies among C. elegans wild isolates, with CB4932 displaying a particularly anterior QR.pap position (Dubois et al., 2021). Here, we study the genetic basis of the variation between isolates CB4932 and JU1242. We show that JU1242 alleles behave in a mostly dominant fashion. Using a Bulk Segregant Analysis, we detect a quantitative trait locus (QTL) region on chromosome IV. This QTL was confirmed using reciprocal chromosome IV introgressions. The parental genotypes CB4932 (dark blue) and JU1242 (dark orange) were phenotyped independently three times. The 40 RILs with the most anterior QR.pap position (light blue) were pooled for sequencing, as well as the 40 with the most posterior position (light orange). Dots and error bars represent the mean ± c.i. (95%); n=20 per line. A histogram of the phenotypic distribution is shown on the right. (F) Bulk Segregant Analysis: JU1242 allele frequency in the Posterior pool (orange) and the Anterior Pool (Blue). Thin horizontal black lines represent the threshold for a significant difference between log-odd ratios at p = 0.001. The JU1242 allele frequency is fixed in the Posterior Pool between 4.5 Mb to 6 Mb, revealing a QTL associated to the variation in QR.pap final position. (G) Validation of the chromosome IV effect on QR.pap final position. Near isogenic Lines were generated using successive backcrosses to exchange chromosome IV between CB4932 and JU1242. The chromosomes from CB4932 are represented in Blue and those from JU1242 in Orange. n>40 scored animals per line. Dots and error bars represent the mean ± s.d. Letters (a-c) represent groups of genotypes with a similar QR.pap mean (Dunn's test with Bonferroni adjustment post-hoc comparison). Description CB4932 and JU1242 exhibit a strong difference in QR.pap final position. During the L1 stage, the QR neuroblast migrates at a long range while undergoing three rounds of division (Sulston and Horvitz, 1977). Its granddaughter QR.pa stops migration upon expressing mig-1, encoding a Wnt receptor, and divides (Mentink et al., 2014). Thereafter, its QR.paa daughter migrates a short distance posteriorly and ventrally, whereas QR.pap migrates anteriorly and dorsally. The mean position of QR.paa and QR.pap, collectively called QR.pax, is used to quantify the end position of QR.pa migration (Ch'ng et al., 2003;Harris et al., 1996;Mentink et al., 2014;Whangbo and Kenyon, 1999). Using a set of 40 isolates, we previously showed that C. elegans wild isolates present variation in QR.pax final position (Dubois et al., 2021). We focused on two genetically close (Cook et al., 2017) isolates, JU1242 and CB4932, which present a strong difference in QR.paa and QR.pap final position ( Figure 1A,B). This difference is mainly due to QR.pap position being far more anterior in CB4932 than in JU1242 (mean comparison: 4.16 vs 5.54, W=207.5 , p-value < 10 -12 ). QR.paa is also significantly more anterior in CB4932, albeit with a smaller difference (6.25 vs 6.89, W=432, p-value < 10 -8 ) (Figure 1B, Ext. Data Table 1). F1 hybrids indicate that JU1242 alleles act in a mostly dominant fashion for QR.pap position. 
In order to investigate the genetic basis of the variation in QR.pap final position between JU1242 and CB4932, we performed laboratory crosses. We first tested the phenotypes of F1 heterozygous animals. To do so, old, sperm-depleted, hermaphrodites of JU1242 and CB4932 were crossed to males of the opposite genotype. We phenotyped F1 hermaphrodites, as determined by the presence of the hermaphrodite-specific neurons (HSN). The mean final position of QR.pap in heterozygotes is closer to that of JU1242 homozygotes ( Figure 1C, Ext. Data Table 2), and is significantly different from both parents in the case of the cross with CB4932 males but not in the reverse cross (5.02 vs 5.33, Z=1.97, p-value = 0.15). Overall, this suggests that the CB4932 allele at the main QTL on chromosome IV is recessive or weakly semidominant for the QR.pap phenotype. One main QTL on chromosome IV underlies the variation in QR.pap final position. We then used a Bulk Segregant Analysis (BSA) approach combined with whole genome sequencing. The main purpose of this method is to find genomic regions associated with a difference of phenotype between two parental genotypes. This method was first developed in plants (Michelmore et al., 1991) and is now commonly used in association with whole-genome sequencing in a broad range of organisms including arthropods (Kurlovs et al., 2019), yeasts (Birkeland et al., 2010;Parts et al., 2011) and nematodes (Doitsidou et al., 2010;Frézal et al., 2018). To this end, we generated 200 Recombinant Inbred Lines (RILs) by crossing CB4932 and JU1242. We singled the progeny at each generation until the fifth generation ( Figure 1D). We then measured the final position of QR.paa and QR.pap in the RILs from the 6 th generation. The two parental genotypes were phenotyped three times during the scoring of the RILs (Figure 1E, Ext. Data Table 3). The two parental lines and two pools of 40 RILs with the most contrasted phenotypes ('Anterior Pool' and 'Posterior Pool') ( Figure 1E) were whole-genome sequenced and the JU1242 allele frequency plotted for each pool (Figure 1F). A highly significant QTL peak was found on chromosome IV. Indeed, the allele frequency of JU1242 was found to be fixed between 4.5 Mb and 6 Mb in the Posterior pool, and the CB4932 allele frequency in the Anterior pool ( Figure 1F). Chromosome IV introgressions confirm the QTL. To validate the QTL on chromosome IV, we introgressed this chromosome from JU1242 into the CB4932 background and reciprocally. The introgression of the full or part (at least from 2.03 Mb to 14.6 MB) of JU1242 chromosome IV in the CB4932 background leads to a posterior position of QR.pap (Figure 1G, Ext. Data Table 4). Indeed, the final positions of QR.pap in strains JU4243, JU4244 and JU4246 are not significantly different from that of JU1242 (p-value = 1). Reciprocally, the chromosome IV of CB4932 into the JU1242 background leads to an anterior position of QR.pap, recapitulating the phenotype in CB4932 (Figure 1G, Ext. Data Table 4). Conclusions. Using a Bulk Segregant Analysis technique, we demonstrated that we were able to detect the genetic basis of the final positioning of a migrating cell. We found a QTL explaining the difference of QR.pap position between two wild isolates. This QTL was confirmed by the exchange of the chromosome IV of the two strains by introgression and appears to account for most of the difference in QR.pax position between CB4932 and JU1242. 
The next step would be to find the candidate allele associated with this difference and understand more closely the mechanism and evolution of final positioning of this migrating cell. Overall, this study establishes the quantitative genetic basis of a substantial QR.pap displacement in a wild isolate. Methods Caenorhabditis elegans strains and culture. Caenorhabditis elegans strains were cultured at 20°C on 55 mm diameter Petri dishes filled with NGM, and fed on Escherichia coli OP50 according to the standard procedures (Brenner, 1974). We used in this study two wild isolates: JU1242 isolated in Santeuil in October 2007 (Andersen et al., 2012) and CB4932 isolated in Taunton, Great Britain, in January 1991 (Grewal and Richardson, 1991). The Near Isogenic lines JU4243 (mfIR126), JU4244 (mfIR127), JU4246 (mfIR129) and JU4249 (mfIR132) were generated during this study ( Table 1). QR.paa and QR.pap final position measurements. The quasi-invariance of C. elegans cell lineage and development allows for identification of cells, including the QR.paa and QR.pap neurons (Sulston and Horvitz, 1977; Figure 1A). To score their position, cultures were roughly synchronized by washing away larvae and adults on plates containing unhatched embryos the day before scoring. The embryos stick to the plate and this procedure allows to obtain late L1 larvae the following morning. For scoring, the larvae were mounted on 3% agar pads with 1 mM sodium azide and observed with a 100x objective using Nomarski optics, as described (Harris et al. 1996, Dubois et al., 2021. We measured the final position of QR.paa and QR.pap relative to the V lateral epidermal seam cells ( Figure 1A). To this end, we scored on each mounted slide those late L1 larvae that were at a stage after the first division and before the second division of the V seam cells. From the anterior to the posterior, the V seam cell nuclei form at this stage a constant pattern that we used to generate a relative scale from 0 to 27 (Dubois et al., 2021). QR.paa and QR.pap have reached their final position before this stage (Sulston and Horvitz, 1977;Harris et al. 1996). Test of dominance. CB4932 and JU1242 hermaphrodites were aged until 4 days after the L4/adult transition. At this time, the sperm stock of hermaphrodites was depleted and animals laid unfertilized oocytes. Males from the same or the different genotype were crossed with these old hermaphrodites, ensuring heterozygote progenies. Only hermaphrodite L1 larvae were scored, as assessed by the presence of the HSN neuron, specific to hermaphrodites. The four crosses were performed and phenotyped at the same time. The pairwise comparisons between crosses were performed with the Dunn's test and Bonferroni adjustment to account for multiple testing. Construction of Recombinant Inbred Lines. The parental lines CB4932 and JU1242 were crossed using four males and three hermaphrodites per plate in each direction of the cross (eight plates per direction). 64 F1 individuals were singled and genotyped after laying with the primer pair par1delF-R (a 139bp deletion in par-1 of JU1242, Ext. Data Table 5). From 19 heterozygous F1 animals, 308 F2 animals were singled. For each line, a single worm was transferred at each generation until the F5 generation, in which most of the genome is assumed to be homozygous. We phenotyped 200 Recombinant Inbred Lines from the 6 th generation by measuring QR.paa and QR.pap final position in 20 animals per line. 
The parental phenotypes were scored three times independently during this step. DNA extraction and sequencing. After phenotyping (Ext. Data Table 3; Dubois et al. 2021), the Anterior and Posterior pools of 40 RILs each were assembled; lines outside these two groups were not included in the pools. The RILs were then discarded. Genomic DNA of the two parents and the two pools was extracted (Qiagen Puregene Core® kit A) and sequenced by Illumina with 2x150bp paired-end reads and 20 million reads in total, representing a mean coverage of 20x. Library preparation and whole genome sequencing were performed by the Eurofins Genomics company. The reads are available at NCBI SRA under BioProject PRJNA956481. Genomic analysis of the parental strains. The genomic analysis was performed using the first seven steps of the mapping-by-sequencing pipeline from Besnard et al., 2017. Briefly, reads were mapped to the C. elegans reference N2 genome (WS274 genome version ftp://ftp.wormbase.org/pub/wormbase/releases/WS274/species/c_elegans/PRJNA13758/c_elegans.PRJNA13758.WS274.genomic.fa.gz) with bowtie2 (version 2.3.5.1, Langmead and Salzberg, 2012) and the --sensitive preset. The read-group information was added and the duplicated reads were marked and removed by Picard (Version 2.21.3, Broad Institute). The file was then indexed with Samtools (Version 1.9, Li et al., 2009) and filtered with BQSR by bootstrapping a first call made with the HaplotypeCaller function from the GATK tool suite (Version 4.1.4.0, Auwera et al., 2013). Then, a single .vcf file containing the different SNPs of the two parental genotypes was created. Bad quality (QUAL<20) and heterozygous SNPs were filtered out with the VariantFiltration and SelectVariant functions from the GATK tool suite. Large insertions, inversions and deletions were detected with Pindel (version 0.2.5b9, Ye et al., 2009). We annotated SNPs with the variant predictor software SNPeff (Version 4.3t, Cingolani et al., 2012). Bulk Segregant Analysis. The genomes of the Anterior pool and the Posterior pool were mapped to the reference genome as previously described, and variant calling was performed using the GATK tool suite, giving the list of variants of JU1242 and CB4932 as predefined positions. We selected only SNPs with a coverage higher than ten in each pool. At each position, we calculated the JU1242 allele frequency by dividing the number of reads of the alternative allele by the total number of reads. If variants were from CB4932, we calculated the JU1242 allele frequency as 1-(readVar/readTot). We then used a sliding window approach in order to smooth the allele frequencies, with a window of 200 bp and a step of 1 bp. To test whether the allele frequencies of the two pools differed, we calculated the log-odds ratio at each position as previously described in Frézal et al., 2018. Briefly, we calculated the log-odds ratio as ln((m1*n)/((1 - m1)*n)) - ln((m2*n)/((1 - m2)*n)), where m1 and m2 are the JU1242 allele frequencies in the Posterior pool and the Anterior pool, respectively, and n is the number of RILs in each pool, so that m*n and (1 - m)*n are the JU1242 and CB4932 allele counts. As the number of RILs in each pool is the same (n = 40), we added 0.000001 to each count to avoid infinite values. To define the threshold of significance (p=0.001), we simulated the log-odds ratio for one million draws from a binomial distribution for the two pools. Generation and Phenotyping of Near Isogenic Lines. The chromosome IV of JU1242 was introgressed into the CB4932 background and reciprocally. To do so, we first crossed CB4932 hermaphrodites with JU1242 males. Then, we backcrossed males from the F1 progeny with CB4932 hermaphrodites. We singled the F2 progenies and let them self for one generation.
If the chromosome IV was from JU1242, we then backcrossed the F3 progeny two more times with CB4932 males and let the F5 self. If the chromosome IV at the F2 was still from CB4932, we backcrossed it with JU1242 males. The male progeny were then crossed with JU1242 hermaphrodites to introgress the chromosome IV of CB4932 into the JU1242 background. During the process of selfing, the six chromosomes were followed (Ext. Data Table 6) with the primers described in Ext. Data Table 5. The pairwise comparisons of the phenotypes between lines were performed with Dunn's test and Bonferroni adjustment to account for multiple testing.
JU4244 (mfIR127): Introgression of JU1242 chrIV into CB4932 background (see Ext. Data Table 6 for genotyping data). This study.
JU4246 (mfIR129): Introgression of JU1242 chrIV into CB4932 background (see Ext. Data Table 6 for genotyping data). This study.
JU4249 (mfIR132): Introgression of CB4932 chrIV into JU1242 background (see Ext. Data Table 6 for genotyping data). This study.
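The pool comparison described in the Bulk Segregant Analysis paragraph can be summarized numerically. The sketch below is not the authors' pipeline; the exact form of the statistic, the null allele frequency of 0.5 used in the threshold simulation, and the variable names are assumptions based on the description above (allele counts obtained as frequencies multiplied by the 40 RILs per pool, a small constant added to avoid infinite values, and a p = 0.001 threshold simulated from binomial draws).

```python
# Illustrative sketch of the BSA log-odds comparison; assumptions noted above.
import numpy as np

N_RILS = 40      # RILs per pool
EPS = 0.000001   # avoids log(0) when one allele is absent from a pool

def log_odds(m1, m2):
    """Difference in log odds of the JU1242 allele between the Posterior (m1)
    and Anterior (m2) pools, computed on allele counts (frequency x N_RILS)."""
    a1, b1 = m1 * N_RILS + EPS, (1 - m1) * N_RILS + EPS
    a2, b2 = m2 * N_RILS + EPS, (1 - m2) * N_RILS + EPS
    return np.log(a1 / b1) - np.log(a2 / b2)

def significance_threshold(p=0.001, n_draws=1_000_000, null_freq=0.5, seed=0):
    """Null distribution of the statistic from binomial draws for both pools;
    null_freq = 0.5 is an assumed expectation for an unlinked marker."""
    rng = np.random.default_rng(seed)
    m1 = rng.binomial(N_RILS, null_freq, n_draws) / N_RILS
    m2 = rng.binomial(N_RILS, null_freq, n_draws) / N_RILS
    return float(np.quantile(np.abs(log_odds(m1, m2)), 1 - p))

# Example: a marker fixed for JU1242 in the Posterior pool, intermediate in the Anterior pool
print(log_odds(1.0, 0.45), significance_threshold())
```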
2023-06-05T05:05:27.453Z
2023-05-19T00:00:00.000
{ "year": 2023, "sha1": "ca5db39d8ddbbc2cdded94b11295fd6ad5ca4968", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ca5db39d8ddbbc2cdded94b11295fd6ad5ca4968", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
237253863
pes2o/s2orc
v3-fos-license
Applications of 3D Printing Technology in Orthopedic Treatment <jats:p /> Three-dimensional (3D) printing technology, also known as additive manufacturing (AM) or rapid prototyping (RP), is a special technique which could fabricate 3D models using computer-assisted design (CAD). It was firstly developed by a Japanese doctor forty years ago and initially used in manufacturing and industry [1]. During recent decades, with the development of manufacturing technology and materials science, 3D printing has also been used in some medical fields such as dentistry, maxillofacial surgery, and neurosurgery [2]. Application of 3D printing in orthopedics is also increasingly popular, mainly including preoperative planning, surgical guides, personalized implants, and customized prostheses [3,4]. Individualized surgical treatment could be easily and accurately formulated under the aid of 3D printing and reduce the operation time and postoperative complications [5][6][7]. Depending on its unique advantages, 3D printing will lead a surgeon to precision medicine and provide patients with better treatment effects at lower cost [8,9]. At present, the Chinese government, enterprises, universities, and institutes have invested a lot of resources in related research including printing technology, raw materials, and clinical applications and have made important progress. For example, our center uses 3D printing technology to manufacture implants of porous tantalum for clinical surgical treatment; in this special issue, a great majority of the submissions come from China, which report their latest developments in 3D printing. As the editorial team, we pay attention to some recent progressive research in 3D printing technology for orthopedic treatment. Below is a summary of these accepted articles. The study by L. Kong et al. reported a set of articular spacer solutions using 3D printing technology in revision surgery for periprosthetic joint infection (PJI) after total knee arthroplasty. They compared the treatment effects between 3D printing spacer and static spacer in a retrospective study and stated that the 3D printing spacer group had less bone loss, less intraoperative blood loss, and greater knee function than the static spacer group. This technique effectively provides a new method to make accurate and personalized spacers in PJI and lower the rates of reinfection and complications. The paper by Y. Du et al. evaluated the stability of the acetabular cup with different types of bone defects in total hip arthroplasty for developmental dysplasia of the hip (DDH) using the finite element analysis (FEA) model. The authors found that the diameter of the femoral ceramic head had no significant impact on the stability of the acetabular cup. When the uncoverage rates of the cup were less than 24.5%, the stability of the cup was satisfactory even without the use of screws. However, when the uncoverage rates were more than 24.5%, it was necessary to apply screws to improve the primary stabilization of the cup. Although their study is just based on the FEA model instead of clinical application, the results are still beneficial to the subsequent clinical study. L. Yuan et al. retrospectively analyzed the bony resection accuracy during total knee arthroplasty (TKA) with patientspecific instrumentation (PSI) produced by 3D printing technology. They conducted full-length computed tomography (CT) for every patient and drafted detailed preoperative plans including the bony resection thickness. 
PSI was manu-factured based on the CT data and operation plan. Each bone resected in the operation was also measured with CT to reconstruct the three-dimensional radiographs. The bone resection thickness was compared between the preoperative plan and intraoperative data to assess the resection accuracy in different bone sites. The results of this study show that PSI had a generally good accuracy during the femur and tibia bone resection in TKA. X. Liu et al. evaluated the application of mixed reality (MixR) technology during transforaminal percutaneous endoscopic discectomy (TPED), and optical see-through head-mounted displays (OST-HMDs) were used to assist operation. They compared the difference of clinical effects between conventional TPED and MixR-assisted TPED and found that mixed reality (MixR) technology could significantly reduce the operation time and radiation exposure during the total operation procedure. This technology may be a powerful auxiliary tool for TPED but would increase the eye fatigue because of the application of OST-HMDs. J. Kim et al. investigated whether the postcuring process could influence the dimensional accuracy and seating of 3D printing dental prostheses. A study stone model was designed and fabricated to verify this hypothesis. Results showed that the postcuring process significantly affected the fit and dimensional precision of 3D printing polymeric prostheses. They suggested that seating on the stone model was a better choice for minimizing the deformity of the dental prosthesis and reducing adverse effects during the postcuring process. F. Gu et al. designed a three-dimensional printed patientcustomized guiding template (3DGT) to increase the efficacy and safety of unicompartmental knee arthroplasty (UKA). Personalized guiding template could provide helpful assistance in several procedures of operation planning, intraoperative positioning, and osteotomy. This study concluded that 3DGT could shorten operation time, reduce surgical trauma, and promote recovery. P. Honigmann et al. presented the first inhospital 3D printed scaphoid prosthesis using polyetheretherketone (PEEK) biomaterial via fused filament fabrication (FFF), one of the 3D printing technologies. The surface of this medical grade PEEK prosthesis did not show "FFF stair-stepping" phenomenon, which was usually common in the industrial grade scaphoid prosthesis. The biocompatible and implantable polymers such as PEEK applied in 3D printing could offer great potential in the treatment of complex joint damage in the hospital environment. M. Keller et al. reviewed the latest practical application of 3D printing in hand surgery and introduced the most common printing techniques and some materials. They provided a useful overview of the 3D printing technology applied in numerous aspects such as surgical guides, personalized implants for bone defects, customized splints, and preoperative plan. The authors hold the opinion that orthopedics, especially hand surgery, will benefit from 3D printing in the near future. L. Cheng et al. retrospectively investigated the utilization and feasibility of 3D printing technology for core decompression in patients with osteonecrosis of the femoral head (ONFH). The operation process went well and consumed less time than traditional methods with the aid of personalized guide plates and reduced the usage of intraoperative X-ray fluoroscopy. 
The results indicated that 3D printing had several advantages of improving efficiency, being more convenient, and accurate positioning. C. Zhang et al. revealed the efficacy of arthroscopy in treating bone cysts of the foot and ankle combined with 3D printing individualized guides. Better VAS score and AOFAS score and less intraoperative bleeding were displayed in patients with the assistance of 3D printing. It is concluded that 3D printing could significantly help surgeons to fast and smoothly establish a portal in arthroscopic ankle surgery. J. Fu et al. reconstructed the acetabular bone defect in a swine model to evaluate the bone ingrowth, biomechanics, and matching degree of the 3D printed porous prosthesis. Based on the results, the authors found that the 3D printed porous augments showed great porosity and pore size and had magnificent stiffness and elastic modulus. The anatomical matching extent was excellent, which could enhance the stability of the porous prosthesis. Although this study was conducted in minipigs, it displayed the great potential of 3D printed porous augment in the treatment of clinical severe acetabular bone defects. Y. Mao et al. compared the clinical effects of 3D printed patient-specific instrumentation (PSI) with conventional surgical techniques in medial open wedge high tibial osteotomy (MOWHTO). The results of this prospective comparative study showed that 3D printed PSI had significantly lower correction errors in terms of mFTA and mMPTA and demanded shorter duration and less radiation exposure. They concluded that 3D printing technique could be recommended as an effective assistant for MOWHTO in the treatment of varus because of its accuracy and effectiveness. W. Peng et al. reported an entirely anatomically conforming pelvic prosthesis for pelvic reconstruction. Pelvic tumor is a complex disease due to the vascular invasion of tumor issue, and most of the patients suffering from pelvic tumor undergo the surgery of tumor resection and hemipelvic replacement. The authors showed that 3D-printed prosthesis was of value for patients with complex pelvic tumors. Conflicts of Interest The editors declare that there are no conflicts of interest regarding the publication of this special issue.
2021-08-22T05:36:39.327Z
2021-08-03T00:00:00.000
{ "year": 2021, "sha1": "b68631577c27b45044ae5a396f713834794cfe5d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2021/9892456.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b68631577c27b45044ae5a396f713834794cfe5d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
239795315
pes2o/s2orc
v3-fos-license
TREATMENT MANNERS, GLYCEMIC CONTROL, AND C-REACTIVE PROTEIN IN PATIENTS RECEIVING ANTI-DIABETIC OR ANTI-DIABETIC WITH ANTIHYPERTENSIVE DRUGS IN BASRA Objective: This study aimed to investigate the association between treatment manners, glycemic control, and C-reactive protein (CRP) serum level in patients receiving anti-diabetic drugs (ADM) alone or ADM together with antihypertensive (AHT) drugs in Basra. Methods: Patients receiving ADM or ADM with AHT drugs, not suffering from complications, were recruited from Al-Mawanee General Hospital in Basra. Socioeconomic characteristics, blood pressure (BP), and treatment manners were recorded. Blood samples were obtained to measure glycated hemoglobin (HbA1c), lipid profile, and high-sensitive CRP (hs-CRP). Results: A total of 26 men and 50 women were included. Lower mean HbA1c was found in patients receiving ADM with AHT drugs compared with those on ADM drugs only (p=0.0013). Lower mean systolic BP (p<0.0001) and diastolic BP (p=0.0078) were found in patients receiving ADM drugs only compared with those receiving ADM with AHT drugs. Lower mean hs-CRP was found in women receiving ADM with AHT drugs compared with those on ADM drugs only. Treatment manners had no effect on mean hs-CRP in men or in women receiving ADM with AHT drugs; however, hs-CRP correlated directly with HbA1c (p=0.002) and triglycerides (p=0.009) and inversely with high-density lipoprotein cholesterol (p=0.011) in women receiving ADM drugs only. Conclusion: A high level of hs-CRP is associated with poor glycemic control and dyslipidemia, and consequently with increased cardiovascular risk. Given its value as a risk predictor, hs-CRP should be included in the routine monitoring of type 2 diabetic patients. INTRODUCTION In Iraq, a high prevalence of diabetes and hypertension has been documented. Diabetes and hypertension are the two main risk factors in the development of ischemic heart disease, cardiac hypertrophy, and cardiac failure. Cardiovascular disease is the most common cause of mortality in the world. Previous studies have demonstrated that persons with diabetes [1] and hypertension [2,3] have higher levels of C-reactive protein (CRP) compared with individuals without these conditions in the general population. An increased risk of cardiovascular disease has also been associated with an increased level of CRP [4,5]. CRP synthesis and secretion occur mainly in hepatic cells [6]. CRP production is regulated by the action of several activated cytokines such as interleukin-6 (IL-6), IL-1, and tumor necrosis factor-alpha [7]. CRP is a marker of systemic inflammation in the blood [8]. The normal plasma level of CRP in a healthy population without evidence of acute inflammation is 2 mg/L or less [9]. Circulating CRP rises rapidly, by as much as 3000-fold, in response to inflammation, infection, or acute tissue injury, and drops rapidly once the inflammation or injury resolves [10]. Many studies have focused on the association of chronic elevation of CRP with an increased risk of cardiovascular disease and atherosclerosis [11][12][13][14]. If CRP is involved in the pathophysiology of cardiovascular disease, it could be expected that lower CRP levels would reduce the development of the disease and its complications. CRP promotes atherosclerosis by various mechanisms; for example, CRP directly increases the generation of reactive oxygen species (ROS) by monocytes and neutrophils [15,16].
ROS have been implicated in the initiation and progression of atherosclerosis [17]. Furthermore, CRP increases the expression of adhesion molecules [18] and has been implicated in the destabilization of atherosclerotic plaques [19]. Moreover, CRP can mediate the uptake of low-density lipoprotein into macrophages to form foam cells [20]. The aim of this study was to investigate the association between drug treatments, glycemic control, and serum level of CRP in Iraqi patients receiving anti-diabetic (ADM) drugs or ADM with antihypertensive (AHT) drugs. METHODS This study was conducted during the period from February to May 2018, and the patients were selected during their visit to the Diabetes, Endocrine and Metabolism Center in Al-Mawanee General Hospital in Basra. The Institutional Ethical Committee approved the study, and informed consent was obtained from the subjects. Patients receiving ADM drugs or ADM with AHT drugs, not suffering from complications, were recruited. A total of 76 diabetic patients aged between 42 and 67 years were included in this study; 50 patients were female and 26 were male. Forty-two patients were using ADM with AHT drugs, of whom 30 were female and 12 were male. The other 34 patients were using ADM drugs only. Patients were excluded from the study if they were type 1 diabetic patients or if they had any cognitive problems. Socioeconomic characteristics, blood pressure (BP), and treatment plans were recorded. Fasting blood samples were obtained to measure glycated hemoglobin (HbA1c), lipid profile, and high-sensitive CRP (hs-CRP). HbA1c up to 7% reflected adequate glycemic control, whereas HbA1c >7% reflected poor glycemic control, as recommended by the American Diabetes Association guideline [21]. Hypertension was defined as a systolic BP >140 mmHg or diastolic BP >90 mmHg, or current use of AHT drug treatment [22]. Laboratory measurements were performed using a Cobas Integra 400 analyzer. Serum lipids (cholesterol, triglycerides, and high-density lipoprotein cholesterol [HDL-C]) were assayed using automated enzymatic methods (Dimension Vista 1500T Intelligent Lab System from Siemens Company, Germany) at the biochemistry laboratory. Statistical analysis Statistical analysis was performed with GraphPad Prism software (version 7.0, GraphPad Software, Inc., San Diego, CA). Descriptive statistics, such as mean ± standard deviation, were calculated for all estimated parameters. Comparison between two means was performed using the unpaired Student t-test for normally distributed parameters. Associations between variables were examined using Pearson's correlation coefficients. All p values <0.05 were considered significantly different. RESULTS A total of 26 men and 50 women were recruited. Lower mean HbA1c was found in patients receiving ADM with AHT drugs compared with those on ADM drugs only (p=0.0013). Lower mean systolic BP (p<0.0001) and diastolic BP (p=0.0078) were found in patients receiving ADM drugs only compared with patients receiving ADM with AHT drugs. Lower mean hs-CRP was found in women receiving ADM with AHT drugs compared with those on ADM drugs only (Table 1). Furthermore, the main AHT drugs used by the patients involved in this study were angiotensin receptor blockers (losartan and candesartan), angiotensin-converting enzyme inhibitors (captopril and enalapril), calcium channel blockers (CCBs) (amlodipine and diltiazem), and the β-blocker carvedilol. The main ADM drugs used by the patients involved in this study were glibenclamide, glimepiride, metformin, and insulin.
Treatment manner had no effect on mean hs-CRP in men; however, there was a significant direct correlation of hs-CRP with HbA1c (p=0.002) and triglycerides (p=0.009), and an inverse correlation with HDL-C (p=0.011), in women receiving ADM drugs only. Furthermore, the treatment manner had no effect on mean hs-CRP in men and women receiving ADM with AHT drugs (Table 2). DISCUSSION This study was designed to investigate the association between drug treatments, glycemic control, and serum level of CRP in Iraqi patients receiving ADM drugs or ADM with AHT drugs. Diabetes mellitus is associated with numerous complications. Hyperglycemia, increased BP, dyslipidemia, oxidative stress, and inflammation are all characteristics of type 2 diabetes mellitus and are implicated in the development of vascular complications [23,24], so that control of diabetes leads to a decreased risk of these complications. Most of the diabetic patients involved in our study were uncontrolled regardless of which ADM drug treatment was used. This study revealed that a lower mean HbA1c was found in patients receiving ADM with AHT drugs compared with those on ADM drugs only. Few studies have examined the combined effects of ADM with AHT drugs on the HbA1c level. A previous study revealed that the adverse effect of some AHT drugs on blood glucose homeostasis may influence their cardiovascular protective role. Different classes of AHT drugs have different effects on blood glucose homeostasis. Some AHT drugs, such as angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, some CCBs such as azelnidipine and manidipine, and some β-blockers such as carvedilol and nebivolol, have been shown to have advantageous effects on glucose metabolism. Conversely, diuretics and other β-blockers have an unfavorable effect on glucose metabolism [25]. Lower mean hs-CRP was found in women receiving ADM with AHT drugs compared with those on ADM drugs only. Treatment manners had no effect on mean hs-CRP in men; however, there was a significant direct correlation of hs-CRP with HbA1c and triglycerides, and an inverse correlation with HDL-C, in women receiving ADM drugs only. Furthermore, the treatment manners had no effect on mean hs-CRP in men and women receiving ADM with AHT drugs. CONCLUSION These results indicate that high levels of hs-CRP are associated with poor glycemic control and dyslipidemia, and consequently with increased cardiovascular risk.
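As a rough illustration of the statistical analysis described in the Methods above (unpaired Student t-test between treatment groups and Pearson correlations of hs-CRP with HbA1c, triglycerides, and HDL-C), the following Python sketch uses SciPy on hypothetical placeholder arrays; none of the values correspond to the study data, and GraphPad Prism was the software actually used.

```python
# Illustrative sketch of the group comparison and correlation analysis
# described above; the arrays below are hypothetical placeholders,
# not data from the study.
import numpy as np
from scipy import stats

# Hypothetical HbA1c (%) for the two treatment groups
hba1c_adm_only = np.array([8.9, 9.4, 10.1, 8.2, 9.8, 10.5])
hba1c_adm_aht  = np.array([7.6, 8.1, 7.9, 8.4, 7.2, 8.0])

# Unpaired Student t-test (assumes approximately normal distributions)
t_stat, p_val = stats.ttest_ind(hba1c_adm_only, hba1c_adm_aht)
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")

# Pearson correlations of hs-CRP with metabolic parameters
hscrp = np.array([2.1, 4.5, 3.2, 6.8, 1.9, 5.4])
hba1c = np.array([7.1, 9.2, 8.0, 10.4, 6.9, 9.8])
hdl_c = np.array([1.3, 0.9, 1.1, 0.8, 1.4, 0.9])

r_hba1c, p_hba1c = stats.pearsonr(hscrp, hba1c)   # expected direct correlation
r_hdl,   p_hdl   = stats.pearsonr(hscrp, hdl_c)   # expected inverse correlation
print(f"hs-CRP vs HbA1c: r = {r_hba1c:.2f} (p = {p_hba1c:.3f})")
print(f"hs-CRP vs HDL-C: r = {r_hdl:.2f} (p = {p_hdl:.3f})")
```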
2021-10-26T17:12:09.691Z
2018-11-07T00:00:00.000
{ "year": 2018, "sha1": "242aecdbddfb384732591fb5df63636e4dc6f5ee", "oa_license": null, "oa_url": "https://doi.org/10.22159/ajpcr.2018.v11i11.29588", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0a84a0c8bacd1cdd880f30d7a48ad1374b25d869", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
12221539
pes2o/s2orc
v3-fos-license
Entanglement spectrum and boundary theories with projected entangled-pair states In many physical scenarios, close relations between the bulk properties of quantum systems and theories associated to their boundaries have been observed. In this work, we provide an exact duality mapping between the bulk of a quantum spin system and its boundary using Projected Entangled Pair States (PEPS). This duality associates to every region a Hamiltonian on its boundary, in such a way that the entanglement spectrum of the bulk corresponds to the excitation spectrum of the boundary Hamiltonian. We study various specific models, like a deformed AKLT [1], an Ising-type [2], and Kitaev's toric code [3], both in finite ladders and infinite square lattices. In the latter case, some of those models display quantum phase transitions. We find that a gapped bulk phase with local order corresponds to a boundary Hamiltonian with local interactions, whereas critical behavior in the bulk is reflected on a diverging interaction length of the boundary Hamiltonian. Furthermore, topologically ordered states yield non-local Hamiltonians. As our duality also associates a boundary operator to any operator in the bulk, it in fact provides a full holographic framework for the study of quantum many-body systems via their boundary. I. INTRODUCTION It has long been speculated that the boundary plays a very significant role in establishing the physical properties of a quantum field theory. This idea has been very fruitful in clarifying the physics of the fractional quantum Hall effect, and is also the origin of the holographic principle in black hole physics. An explicit manifestation of this fact is the so-called area law. The area law states that for ground (thermal) states of lattice systems with short-range interactions, the entropy (quantum mutual information) of the reduced density operator ρ A , corresponding to a region A, is proportional to the surface of that region, rather than to the volume, at least for gapped systems [4][5][6][7]. Criticality may reflect itself by the appearance of multiplicative and/or linear logarithmic corrections to the area law [8,9]. Apart from the deep physical significance of this law, it has important implications regarding the possibility of simulating many-body quantum systems using tensor network (TN) states [10][11][12][13]. For instance, it has been shown [14] that any state of a quantum spin system fulfilling the area law in one spatial dimension (including logarithmic violations) can be efficiently represented by a matrix product state (MPS) [15,16], the simplest version of a TN. Very recently, another remarkable discovery has been made with relation to the area law [17]. It has been shown that for certain models in two spatial dimensions, the reduced density matrix of a region A has a very peculiar spectrum, which is called the "entanglement spectrum": by taking the logarithm of the eigenvalues of ρ A , one obtains a spectrum that resembles very much the one of a 1-dimensional critical theory (i.e. as prescribed by conformal field theory). This has been established for dif-ferent systems as diverse as gapped fractional quantum Hall states [17] or spin-1/2 quantum magnets [18]. Interestingly, the correlation length in the bulk of the ground state can be naturally interpreted as a thermal length in one dimension [18]. This is all very suggestive for the fact that the reduced density matrix is the thermal state of a 1-dimensional theory. 
However, there is a clear mismatch in dimensions: the Hilbert space associated to ρ A has two spatial dimensions, while the 1-dimensional theory obviously has only 1. Intuitively, this is clear as all relevant degrees of freedom of ρ A should be located around the boundary of region A. The main question addressed in this paper is to explicitly identify the degrees of freedom on which this 1-dimensional Hamiltonian acts. We show that projected entangled-pair states (PEPS) [19] give a very natural answer to that question. The degrees of freedom of the 1-dimensional theory correspond to the virtual particles which appear in the valence bond description of PEPS, and that "live" at the boundary of region A [19,20]. More specifically, PEPS are built by considering a set of virtual particles at each node of the lattice, which are then projected out to obtain the state of the physical spins. As we show, the boundary Hamiltonian can be thought of as acting on the virtual particles that live at the boundary of region A. Furthermore, we will present evidence that, for gapped systems, such a boundary Hamiltonian is quasi-local (i.e. contains only short-range interactions) in terms of those (localized) virtual particles. As a quantum phase transition is approached, the range of the interactions increases. Finally, we will show that the interactions lose their local character for the case of quantum systems exhibiting topological order. We will also show how operators in the bulk can be mapped to operators on the boundary. The fact that the boundary Hamiltonian is quasi-local has important implications for the theory of PEPS which go well beyond those of the area law. While PEPS are expected to accurately represent well the low energy sector of local Hamiltonians in arbitrary dimensions [21], it has not been proven that one can use them to determine expectation values in an efficient and accurate way. For that, one has to contract a set of tensors, a task which could in principle require exponential time in the size of the lattice. In order to circumvent this problem, a method was introduced [19] which successively approximates the boundary of a growing region by a matrix product density operator, which is exactly the density matrix of local virtual particles discussed before. It is not clear a priori to which extent that density matrix can be approximated by a MPS; more specifically, the bond dimension of that MPS could in principle grow exponentially with the size of the system if a prescribed accuracy is to be reached, which would lead to an exponential scaling of the computational effort. However, that MPS does nothing but approximate the boundary density operator ρ A for different regions A. In case such an operator can be written as a thermal state of a quasilocal Hamiltonian, it immediately follows that in order to approximate it by a MPS one just needs a bond dimension that scales polynomially with the lattice size [21], and thus that expectation values of PEPS can be efficiently determined. A. Model We consider a PEPS, |Ψ , of an N v × N h spin lattice in two spatial dimensions. Note that one can always find a finite-range interaction Hamiltonian for which |Ψ is a ground state [2]. We will assume that we have open (periodic) boundary conditions in the horizontal (vertical) direction: the spins are regularly placed on a cylinder and the state |Ψ is translationally invariant along the vertical direction [see Fig. (1)]. 
All spins have total spin S, except perhaps at the boundaries where we may choose a different spin in order to lift degeneracies related to the open boundary conditions. We will be interested in the reduced density operator, ρ , corresponding to the spins lying in the first columns; that is, when we trace all the spins from column + 1 to N h . More specifically, the effective Hamiltonian, H , corresponding to those spins, is defined through ρ = exp(−H )/Z , with Z a normalization constant. We will be interested not only in the entanglement spectrum [17], but also in the specific form of H and its interaction length, as we will define below. In order to simplify the notation, it is convenient to label the spin indices of each column with a single vector. We define I n = (i 1,n , i 2,n , . . . , i Nv,n ), where i k,n = −S/2, −S/2 + 1, . . . , S/2 for n = 2, . . . , N h − 1 (for n = 1 or n = N h we may have different spin S). Thus, we can write For a PEPS we can write Here Λ n = (α 1,n , α 2,n , . . . , α Nv,n ), where α k,n = 1, 2, . . . , D with D the so-called bond dimension. Each of the B I 's can be expressed in terms of a single tensor, where for each value of i, α, α , i α,α is a D × D matrix, with elements A i α,α ;β,β (the indices α and β correspond to the virtual particles entangled along the horizontal and vertical directions, respectively [19]; see Fig. 1). For the first (left) and last (right) column we define L I and R I similarly in terms of the D × D matricesl i α , andr i α : Thus, the tensorsÂ,l, andr (for which explicit expressions will be given later on) completely characterize the state |Ψ , which is obtained by "tiling" them on the surface of the cylinder. The first has rank 5, whereas the other two have rank 4. Here we have taken all the tensors A equal, but they can be chosen to be different if the appropriate symmetries are not present. B. Boundary density operator We now want to express the reduced density operator ρ in terms of the original tensors. In order to do that, we block all the spins that are in the first columns, and those in the last N h − , and definê where we have collected all the indices I 1 , . . . , I in I a and the rest in I b . With this notation, the state |Ψ can be considered as a two-leg ladder, ieN h = 2, andˆ = 1, where ρˆ is the density operator corresponding to a single leg. Thus, we have 1, It is convenient to consider the space where the vectors L I and R I act as a Hilbert space, and use the bra/ket notation there as well. That space, that we call virtual space, is the one corresponding to the ancillas that build the PEPS in the valence bond construction [19]. They are associated to the boundary between the -th and the + 1st columns of the original spins. The dimension is thus D Nv (see Fig. 1). In order to avoid confusion with the space of the spins, we have used |v) to denote vectors on that space. We can define the (unnormalized) joint state for the first columns and the virtual space, |Ψ L , and similarly for the last columns, |Ψ R , as with and |Λ) the canonical orthonormal basis in the corresponding virtual spaces. The state |Ψ can then be straightforwardly defined in terms of those two states. 
The corresponding reduced density operators for both virtual spaces are In terms of those operators, it is very simple to show that where |Γ) is an orthonormal basis of the range of σ L , σ T L is the transpose of σ L in the basis |Λ), and where we have defined an orthonormal set (in the spin space) Now, defining an isometric operator that transforms the virtual onto the spin space U = Γ |χ Γ (Γ|, we have The isometry U can also be used to map any operator acting on the bulk onto the virtual spin space; note that this map is an isometry and hence not injective, i.e. a boundary operators might correspond to many different bulk operators. This is of course a necessity, as U is responsible for mapping a 2-dimensional theory to a 1dimensional one. C. Boundary Hamiltonian The previous equation shows that ρ is directly related to the density operators corresponding to the virtual space of the ancillary spins that build the PEPS. In particular, if we have σ T L = σ R =: σ b (eg., when we have the appropriate symmetries as in the specific cases analyzed below), then ρ = U σ 2 b U † . The reduced density operator ρ is thus directly related to that of the virtual spins along the boundary. Since U is isometric it conserves the spectrum and thus the entanglement spectrum of ρ will coincide with that of σ 2 b . By writing σ 2 b = exp(−H b ), we obtain an effective one-dimensional Hamiltonian for the virtual spins at the boundary of the two regions whose spectrum coincides with the entanglement spectrum of ρ . We will be interested to see to what extent H b is a local Hamiltonian for the boundary (virtual) space. We can always write H b as a sum of terms where different spin operators. For instance, for D = 2, we can take the Pauli operators σ α (α = x, y, z) acting on different spins, and the identity operator on the rest. We group those terms into sums h n , where each h n contains all terms with interaction range n, i.e., for which the longest contiguous block of identity operators has length N v − n. For instance, h 0 contains only one term, which is a constant; h 1 contains all terms where only one Pauli operator appears; and h Nv contains all terms where no identity operator appears. We define which expresses the strength of all the terms in the Hamiltonian with interaction length equal to n. A fast decrease of d n with n indicates that the effective Hamiltonian describing the virtual boundary is quasi-local. In the examples we examine below this is the case as long as we do not have a quantum phase transition. In such a case, the length of the effective Hamiltonian interaction increases. D. Implications for PEPS In case σ b can be written in terms of a local boundary Hamiltonian one can draw important consequences for the theory of PEPS. In particular, it implies that the PEPS can be efficiently contracted, and correlation functions can be efficiently determined. The reason can be understood as follows. Let us consider again the cylindrical geometry (Fig.1), and let us assume that we want to determine any correlation function along the vertical direction, eg at the lattice points ( , 1) and ( , x). It is very easy to show that such a quantity can be expressed in terms of σ L and σ R . If we are able to write these two operators as Matrix Product Operators (MPO), ie as where the M are D × D matrices, then the correlation function can be determined with an effort that scales as N v (D ) 6 . 
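Returning to the decomposition of H_b by interaction range introduced above, the following Python sketch makes it concrete for a small boundary Hamiltonian: it expands the operator in Pauli strings and groups the weights by range, with the range of a string taken (as above, for periodic boundary conditions) as N_v minus the longest cyclic run of identities. Here the printed d_n is the sum of squared Pauli coefficients at range n, which is one natural reading of Eq. (14); the paper's normalization convention may differ.

```python
# Sketch: expand a boundary Hamiltonian H_b (2^N x 2^N matrix) in Pauli
# strings and accumulate the squared coefficients d_n by interaction range.
import numpy as np
from itertools import product
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def interaction_range(label, N):
    """N minus the longest cyclic run of identities in the Pauli string."""
    if set(label) == {'I'}:
        return 0
    doubled = label + label                       # handle periodicity
    best = max((len(run) for run in
                ''.join(c if c == 'I' else ' ' for c in doubled).split()),
               default=0)
    return N - min(best, N)

def range_weights(H, N):
    d = np.zeros(N + 1)
    for label in product('IXYZ', repeat=N):
        P = reduce(np.kron, [paulis[c] for c in label])
        coeff = np.trace(P @ H).real / 2**N       # Hilbert-Schmidt coefficient
        d[interaction_range(''.join(label), N)] += coeff**2
    return d

# Toy example: nearest-neighbour Heisenberg chain on N = 4 sites (periodic)
N = 4
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N):
    for S in (X, Y, Z):
        ops = [I] * N; ops[i] = S; ops[(i + 1) % N] = S
        H += 0.25 * reduce(np.kron, ops)
print(range_weights(H, N))   # weight concentrated at range 2 (nearest neighbour)
```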
It has been shown by Hastings [21] that if an operator can be written as exp(−H b /2), where H b is quasilocal, then it can be efficiently represented by an MPO; that is, the bond dimension D only scales polynomially with N v . Thus, we have that the time required to determine correlation functions only scales polynomially with N v . Later on, when we examine various examples, we will use MPO to represent σ b . In that case, we can directly check if we obtain a good approximation by using a MPO just by simply observing how much errors increase when we decrease the bond dimension D . We will see that the error increases when we approach a quantum phase transition. Furthermore, whenever σ b can be well approximated by a MPO, we can use the knowledge gained in the context of MPS [15,16] to observe the appearance of a quantum phase transition in the original PEPS. For that, we just have to recall that the correlation length, ξ, is related to the two largest (in magnitude) eigenvalues, λ 1,2 , of the matrix i M i,i ; ξ = 1/ log(|λ 1 /λ 2 |). For |λ 1 | = |λ 2 |, the correlation length diverges indicating the presence of a quantum phase transition. E. Qualitative discussion In order to better understand the structure of σ b , let us first consider a 1D spin chain. Even though the boundary of the chain, when cut into two parts, has zero dimensions, it will help us to understand the 2D systems. We take N v = 1 so that the PEPS reduces to a MPS. We can use the theory of MPS [15,16] to analyze the properties of the completely positive map (CPM) E (the matrices A i of the MPS are the Kraus operators of the CPM). In the limit N h → ∞, σ b is nothing but the fixed point of such a CPM. For gapped systems, E has a unique fixed point, and thus σ b is unique. For gapless systems, E becomes block diagonal (and thus there are several fixed points), the correlation length diverges, and we can write where B is the number blocks which coincides with the degeneracy of the eigenvalue of E corresponding to the maximum magnitude. In such case, the weights p n depend on the tensors l and r which are chosen at the boundaries. For critical systems, one typically finds that D increases as a polynomial in N v such that one obtains logarithmic corrections to the area law [9,22]. The 2D geometry considered here reduces to the 1D case if we take the limit N h → ∞ by keeping N v finite. According to the discussion above, we expect to have a unique σ b if we deal with a gapped system. As we will illustrate below with some specific examples, this operator can be written in terms of a local Hamiltonian H b of the boundary virtual space which is quasilocal. As we approach a phase transition, the gap closes and the correlation length diverges. In some cases, the boundary density operator can be written as a direct sum (16), eventually leading to the loss of locality in the boundary Hamiltonian. III. NUMERICAL METHODS In order to determine σ b we make heavy use of the fact that |Ψ is a PEPS. We have followed three different complementary numerical approaches that we briefly describe here. A. Iterative procedure First of all, for sufficiently small values of N v (typically N v ≤ 12) we can perform exact numerical calculations and determine σ L,R according to (10). The main idea is to start from the left and find first σ L for = 1 by contracting the tensors l i appropriately. Then, we can proceed for = 2 by contracting the tensors A i corresponding to the second column. 
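For the simplest case N_v = 1 discussed in the qualitative discussion above, where the column contraction reduces to a completely positive map with the MPS matrices A_i as Kraus operators, both the fixed point σ_b and the correlation length ξ = 1/log|λ_1/λ_2| follow from the spectrum of the transfer matrix. The NumPy sketch below does this for arbitrary (here random) matrices; it is a generic illustration, not the code used for the models studied in the following sections.

```python
# Sketch: fixed point of the channel E(sigma) = sum_i A_i sigma A_i^dag and
# the correlation length from the two dominant transfer-matrix eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
D, d = 3, 2                                  # bond and physical dimensions
A = rng.normal(size=(d, D, D))               # toy MPS / Kraus matrices A_i

# Transfer matrix T_{(a,b),(c,e)} = sum_i A_i[a,c] * conj(A_i)[b,e]
T = np.einsum('iac,ibe->abce', A, A.conj()).reshape(D * D, D * D)

vals, vecs = np.linalg.eig(T)
order = np.argsort(-np.abs(vals))
lam1, lam2 = vals[order[0]], vals[order[1]]

# Correlation length from the ratio of the two largest eigenvalues
xi = 1.0 / np.log(np.abs(lam1 / lam2))
print("correlation length xi =", xi)

# Fixed point of E: dominant right eigenvector reshaped into a D x D matrix
sigma = vecs[:, order[0]].reshape(D, D)
sigma = (sigma + sigma.conj().T) / 2         # Hermitize
sigma /= np.trace(sigma)                     # normalize
print("fixed-point boundary operator:\n", np.round(sigma, 3))
```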
In this vein, and as long as N v is sufficiently small, we can determine σ L,R for all values of ℓ and N h . B. Exact contractions and finite size scaling The second (exact) method is a variant applicable to larger values of N v (typically up to N v = 20) but restricted to a finite width in the horizontal direction. It consists in exactly contracting the internal indices of two adjacent blocks of size N v /2 × N h . These two blocks are then contracted together in a second step. Although limited by the size 2^{N_v+2N_h} of the half-block (which has to fit in the computer RAM), this approach can still handle systems of size 20 × 2 or 16 × 8 and be supplemented by a finite size scaling analysis. C. Truncation method Finally, to take the N h → ∞ limit we can use the methods introduced in [19] to approximate the column operators. The main idea is to represent those operators by tensor networks with the structure of a MPS. We contract one column after another, finding the optimal MPS after each contraction variationally. In particular, since we will consider translationally invariant states, we can choose the matrices of the corresponding MPS all equal, which simplifies the procedure. We can even approach the limit N v , N h → ∞ as follows (see also [12,23]): (i) we start out with ℓ = 1, and contract the second column, obtaining another tensor network with the same MPS structure, but with increased bond dimensions. (ii) We continue adding columns, up to some ℓ = r, where we start running out of resources. At that point, we have a tensor network with the MPS structure representing σ L . Let us denote by C^n_{α,β} the basic tensor of that network, where n = 1, . . . , D^2 and α, β = 1, . . . , D^{2r} (n denotes the index in the horizontal direction). (iii) When the bond indices α, β grow larger than some predetermined value, say D c ≤ D^{2r}, we start approximating the tensor network by one with bond dimension D c as follows. We first construct the tensor K_{α,α';β,β'} = \sum_n C^n_{α,β} \bar{C}^n_{α',β'}. Later on we will always deal with the case in which K is Hermitian (when considered as a matrix); if this is not the case, one can always choose a gauge where it is symmetric [16]. We determine the eigenvector, X_{β,β'}, corresponding to the maximum eigenvalue of K, diagonalize X, consider the D c largest eigenvalues, and build a projector onto the corresponding eigenspace. We then truncate the indices α and β by projecting onto that subspace. (iv) We continue in the same vein until the truncated tensor structure converges, which corresponds to the limit N h → ∞. (v) We can do the same with σ R by going from right to left. For the examples studied below, σ L = σ R =: σ b = σ_b^T, and thus we just have to carry out this procedure once. IV. NUMERICAL RESULTS FOR AKLT MODELS We now investigate some particular cases. We concentrate on the AKLT model [1,24], whose ground state, |Ψ⟩, can be exactly described by a PEPS with bond dimension D = 2, as shown in Figs. 2 and 3. The spins in the first and last columns have S = 3/2, whereas the rest have S = 2. The AKLT Hamiltonian is given by a sum of projectors onto the subspace of maximum total spin across each nearest-neighbor pair of spins, H = \sum_{\langle n,m \rangle} P^{(s)}_{n,m}, where P^{(s)}_{n,m} is the projector onto the symmetric subspace of spins n and m. This Hamiltonian is su(2)-symmetric and translationally invariant. This invariance is inherited by the virtual ancillas, and thus σ b and H b will also be. These symmetries can be used in the numerical procedures.
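As an aside on the truncation method of Sec. III C, the bond-truncation step (iii) can be sketched in a few lines of NumPy: the snippet below builds K from a generic tensor C^n_{α,β}, takes the dominant eigenvector X of K, keeps the D_c eigenvectors of X with the largest eigenvalues, and projects both bond indices accordingly. It is a minimal dense-matrix illustration with arbitrary dimensions, not the optimized variational code used for the results reported below.

```python
# Minimal sketch of the bond-truncation step (iii): given the MPS-like
# tensor C[n, a, b], build K, find its dominant eigenvector X, and project
# the bond indices onto the D_c leading eigenvectors of X.
import numpy as np

def truncate_bond(C, D_c):
    n_dim, D_bond, _ = C.shape
    # K_{(a,a'),(b,b')} = sum_n C[n,a,b] * conj(C[n,a',b'])
    K = np.einsum('nab,ncd->acbd', C, C.conj())
    K = K.reshape(D_bond * D_bond, D_bond * D_bond)

    # Dominant eigenvector of K, reshaped into the matrix X_{b,b'}
    vals, vecs = np.linalg.eig(K)
    X = vecs[:, np.argmax(np.abs(vals))].reshape(D_bond, D_bond)
    X = (X + X.conj().T) / 2          # enforce Hermiticity (gauge choice)

    # Keep the D_c eigenvectors of X with the largest |eigenvalue|
    w, V = np.linalg.eigh(X)
    keep = np.argsort(np.abs(w))[::-1][:D_c]
    P = V[:, keep]                    # isometry: D_bond -> D_c

    # Truncate both bond indices of C
    return np.einsum('ai,nab,bj->nij', P.conj(), C, P)

# Toy usage with random data
rng = np.random.default_rng(0)
C = rng.normal(size=(4, 8, 8))
C_trunc = truncate_bond(C, D_c=4)
print(C_trunc.shape)   # (4, 4, 4)
```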
Note that if H b has this symmetries and has short-range interactions, then since the ancillas have spin 1/2 (as D = 2), it will be generically critical. The lattice is bipartite. It is convenient to apply the operator exp(iπS y /2) to every spin on the B sublattice: this unitary operator does not change the properties of ρ but slightly simplifies the description of the PEPS. Thus, we can write the AKLT Hamiltonian as in (17) if the spin n is in the A or B sublattice, resp. We will study finite N h -legs ladders, as well as infinite square lattices. We will start out in the next subsection with the simplest case of N h = 2. Note that for this particular case the subsystem we consider when we trace one of the legs is a spin chain itself, so that density operator ρ =1 already describes a 1-dimensional system and thus the physical spins already represent the boundary. In such a case, we do not need to resort to the PEPS formalism but we can also study other model Hamiltonians besides the AKLT one. For example, we will consider the su(2)-symmetric Heisenberg ladder Hamiltonian of S = 1/2 [ Fig. 2(a)] where the exchange couplings J n,m are parametrized by some angle θ, i.e. J leg = cos θ (J rung = sin θ) for nearestneighbor sites n and m on the legs (rungs) of the ladder. Although the ground state has no simple PEPS representation, it can be obtained numerically by standard Lanczos exact diagonalization techniques on finite clusters of up to 14 × 2 sites [18]. Similarly to the AKLT 2-leg ladder [ Fig. 2(b)], it possesses a finite magnetic correlation length ξ which diverges when θ → 0 (decoupled chain limit). The opposite limit θ = π/2 (θ = −π/2) corresponds to decoupled singlet (triplet) rungs (strictly speaking, with zero correlation length). For infinite systems, we will also be interested in the behavior of H b along a quantum phase transition. To this aim, we will also consider a distorted version of the AKLT model, and define a family of Hamiltonians (19) where Q n (∆) = e −8∆S 2 z,n . Note that the Hamiltonian is translationally invariant and has u(1) symmetry. As ∆ increases, it penalizes (nematic) states with S z = 0, and thus the spins tend to take their maximum value of S 2 z . As we will show, there exists a critical value of ∆ where a quantum phase transition occurs. A. 2-leg ladders : comparison between AKLT and Heisenberg models Let us start out with the su(2)-symmetric ∆ = 0 AKLT model in a two-leg ladder configuration, where ρ corresponds to state of one of the legs; that is, we take N h = 2, = 1, and all spins have S = 3/2 as shown in Fig. 2(b). The Hamiltonian is gapped [1,24], and the ground state is a PEPS with bond dimension D = 2. The tensors corresponding to the two legs, l and r, coincide and are given by r m α1,α2,α3 = s m |α 1 , α 2 , α 3 , where α i = ±1/2, and |s m is the state in the symmetric subspace of the three spin 1/2 with S z |s m = m|s m , m = −3/2, −1/2, 1/2, 3/2. We first examine the entanglement spectrum of H b computed on a 16×2 ladder. It is shown on Fig. 4(b) as a function of the momentum along the legs, making use of translation symmetry (the vertical direction is periodic) enabling to block-diagonalize the reduced density matrix in each momentum sector K. Note that it is also easy to implement the conservation of the z-component S z of the total spin so that each eigenstate can also be labelled according to its total spin S. 
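The boundary tensor r^m_{α1,α2,α3} = ⟨s_m|α_1, α_2, α_3⟩ defined at the beginning of this subsection can be generated explicitly by symmetrizing three spin-1/2 states, as in the short NumPy sketch below; this is an illustrative construction, and phase and normalization conventions may differ from those adopted in the figures.

```python
# Sketch: build the spin-3/2 AKLT boundary tensor r^m_{a1,a2,a3} = <s_m|a1,a2,a3>
# by projecting three spin-1/2's onto their totally symmetric subspace.
import numpy as np
from itertools import permutations

def symmetric_state(pattern):
    """Normalized symmetrization of a product state of 0/1 (up/down) qubits."""
    n = len(pattern)
    psi = np.zeros(2 ** n)
    for perm in set(permutations(pattern)):
        idx = int(''.join(map(str, perm)), 2)     # basis index of |perm>
        psi[idx] += 1.0
    return psi / np.linalg.norm(psi)

# |s_m> for m = 3/2, 1/2, -1/2, -3/2  (0 = spin up, 1 = spin down)
patterns = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]
s = np.array([symmetric_state(p) for p in patterns])   # shape (4, 8)

# r[m, a1, a2, a3] = <s_m | a1, a2, a3>
r = s.reshape(4, 2, 2, 2)
print(np.round(r[1], 3))   # m = +1/2 component: equal weight on each single flipped spin
```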
The low-energy part of the spectrum clearly reveals zero-energy modes at K = 0 and K = π consistent with conformal field theory of central charge c = 1. It is of interest to compare the 2-leg AKLT results to the ones of the 2-leg S=1/2 Heisenberg ladder (18) sketched in Fig. 2(a) and investigated in Ref. 18. Fig. 4(a) obtained on a 14 × 2 ladder for a typical parameter θ = π/3 shows the entanglement spectrum of ρ which, again, is very similar to that of a single nearest-neighbor Heisenberg chain. As mentioned in Ref. 18, in first approximation, varying the parameter θ (and hence the ladder spin-correlation length) only changes the overall scale of the energy spectrum. Hence, it has been suggested [18] to connect this characteristic energy scale to an effective inverse temperature β eff . The above results strongly suggest that H b is "close" to a one-dimensional nearest-neighbor Heisenberg Hamiltonian. To refine this statement and make it more precise, we perform an expansion in terms of su(2)-symmetric extended-range exchange interactions, where RX stands for the "rest", i.e. (small) multi-spin interactions. The amplitudes A r can be computed from simple trace formulas, requiring the full knowledge of the eigenvectors of H b (i.e. of σ b ). A 0 is fixed by some normalization condition, e.g. tr σ b = 1. AssumingX is normalized as an extensive operator in N v , i.e. 1 Nv tr{X 2 } = 2 Nv , the amplitude R is given by: The coefficients A r and R of 2-leg Heisenberg ladders are plotted in Fig. 5(a) as a function of the parameter θ, both in the Haldane (J rung < 0 i.e. ferromagnetic) and rung singlet phases (J rung > 0 i.e. antiferromagnetic). Generically, we find that H b is not frustrated, i.e. all couplings at odd (even) distances are antiferromagnetic (ferromagnetic), A r > 0 (A r < 0). Clearly, the largest coupling is the nearest-neighbor one (r = 1). Fig. 5(b) shows the relative magnitudes of the couplings at distance r > 1 w.r.t. A 1 . These data suggest that the effective boundary Hamiltonian H b is short range, especially in the strong rung coupling limit (θ → π/2) where |A r /A r | → 0 for r > r. The amplitude A 1 of the nearest-neighbor interaction can be identified to the effective inverse temperature β eff which, therefore, vanishes (diverges) in the strong (vanishing) rung coupling limit. Next, we investigate the functional form of the decay of the amplitudes |A r | with distance. The ratio |A r |/A 1 versus r are plotted (using semi-log scales) in Figs. 6(a,b) for 12 × 2 and 14 × 2 Heisenberg ladders with different values of θ. Similar data for a 20 × 2 AKLT ladder is shown in Fig. 7(a), providing clear evidence of exponential decay of the amplitudes with distance, i.e. |A r | ∼ exp (−r/ξ b ). The Heisenberg ladder data are also consistent with such a behavior (even though finite size corrections are stronger than for the AKLT case, especially when θ → 0 or π). It is not clear however how deep the connection between the emerging length scale ξ b and the 2-leg ladder spin correlation length ξ is. Note that the latter can be related [18] to some effective thermal length associated to the inverse temperature β eff ∝ A 1 . A r Thanks to the PEPS representation of their ground state, AKLT ladders can be (exactly) handled up to larger sizes than their Heisenberg counterparts (typically up to N v = 20) enabling a careful finite size scaling analysis of the boundary Hamiltonian (20). As shown in Fig. 
8(a), we observe a very fast (exponential) convergence of the coefficients A r with the ladder length N v . Hence, one gets at least 7 (3) digits of accuracy for all distances up to r = 5 (r = 7). FIG. 7: (Color on line) AKLT ladders -(a) Ratio |Ar|/A1 plotted using a logarithmic scale as a function of r. Results are approximation-free for finite N h while the N h → ∞ limit is obtained by finite size scaling (see Fig. 8(b)). (b) Comparison with dr+1/d2 (full symbols) computed (see text) on 2-leg and infinitely long (N h = ∞) cylinder. In fact, as pointed out previously, the boundary Hamiltonian H b should not contain only two-body spin interactions. However, the total magnitude of all left-over (multi-body) contributions, R, is remarkably small in the AKLT 2-leg ladder : as shown in Fig. 8(a), R < A 4 . In fact, the full magnitude of all many-body terms extending on r + 1 sites is given by d r+1 and can be compared directly to |A r | (after proper normalization). Fig. 7(b) shows that d r+1 /d 2 and |A r |/A 1 are quite close, even at large distance. Note however that multi-body interactions are significantly larger in the boundary Hamiltonian of the Heisenberg ladder, as shown in Fig 5 (although no accurate finite size scaling analysis can be done in that case). B. N h -leg AKLT ladder Now we consider the AKLT model on an N h -leg ladder configuration; we take = N h /2. The spins in the first and last legs have S = 3/2, and the corresponding tensors coincide with the ones given above. The rest of the spins have S = 2, and the corresponding tensor is A m α1,α2,α3,α4 = s m |α 1 , α 2 , α 3 , α 4 , where α i = ±1/2, and |s m is the state in the symmetric subspace of the four spin 1/2 with S z |s m = m|s m , m = −2, −1, 0, 1, 2 (see Fig. 2(c)). An example of a 4-leg ladder and of a schematic representation of ρ is shown in Fig. 3 Let us now follow the same analysis (20) of the boundary Hamiltonian as we did for the case of 2 legs. The decay with distance of the coefficients A r are reported in Fig. 7(a) for 4-leg, 6-leg and 8-leg AKLT ladders. Clearly, the decay is still exponential with distance for all values of N h studied but the characteristic length scale associated to this decay (directly given by the inverse of the slope of the curve in such a semi-log plot) smoothly increases with N h . A careful finite size scaling is performed in Fig. 8(b) to extract the N h → ∞ limit of all A r (accurate up to r = 7). The extrapolated values are reported in Fig. 7(a) showing that A r also decays exponentially fast with r in an infinitely long cylinder (N h = ∞). The characteristic emerging length scale is estimated to be still very short around 1. Lastly, we compute the Von Neumann entanglement entropy defined by S VN (ρ ) = −tr{ρ ln ρ } with the normalization tr ρ = 1. S VN scales like N v ("area" law) and is bounded by N v ln 2. Fig. 8(c) shows that the entropy converges very quickly with N h to its thermodynamic value which is very close to the maximum value. The entanglement of the two halves of the AKLT cylinder is therefore very strong. C. Thermodynamic limit and phase transitions Now we consider the N v , N h → ∞ for the deformed AKLT model in order to investigate the phase transition. We will compare some of the results with the 2-leg ladder as well. The spins in the first and last legs have S = 3/2, and the rest S = 2. 
The corresponding tensors are defined according to where α i = ±1/2, and |s m is the state in the symmetric subspace of the three (four) spin 1/2 with S z |s m = m|s m , m = −3/2, −1/2, 1/2, 3/2 (m = −2, −1, 0, 1, 2), respectively. We will use the approximate procedure sketched in Section III-C. In particular, for N v larger than the correlation length the obtained tensors C n α,β will be independent of N v . We have considered those tensors (with D c = 50 and 100 iterations), and built σ b and H b out of them. Note that the su(2) symmetry is explicitly broken by a finite ∆ so that it becomes more convenient to use the variable d n of Eq. (14) instead of A r to probe the spatial extent of H b . We recall that (d n ) 1/2 is the mean amplitude of all interactions acting at distance r = n − 1. We have plotted in Fig. 9 all d n , n ≤ N v /2, for N v = 16 as a function of ∆. As ∆ increases, we see that the interaction length of the effective Hamiltonian increases and one sees a long-range interaction appearing. This indicates that we approach a phase transition. For the case of the ladder, the interaction length remain practically constant for the same range of variation of ∆. Similarly to the investigation of the Heisenberg ladder [18], it is interesting to define an effective inverse temperature via the amplitude of the nearest-neighbor interaction, where the pre-factor is introduced conveniently so that β eff = A 1 in the su(2)-symmetric limit ∆ = 0. As seen in the inset of Fig. 9, the inverse temperature of the ladder scales linearly with ∆. For the infinite cylinder, no singularity of β eff is seen at the cross-over between short and long-range interactions. Next, we plot the inverse correlation length as a function of ∆ both for one dimension (i.e. an infinitely long ladder) and for two dimensions (i.e. N v = N h = ∞) in Fig. 10(a), obtained with D c = 150 and 100 iterations (no difference are observed by taking D c = 50 and 50 iterations). Clearly, the divergence of ξ (i.e. ξ −1 = 0) shows the appearance of a phase transition at ∆ = 0.0061 in two dimensions. In contrast, ξ −1 never crosses zero in the case of the ladder (i.e. in one dimension). We have compared ξ with the "emerging" length scale ξ b obtained by fitting the decay of the coefficients of H b as d r+1 /d 2 ∼ exp (−r/ξ b ) on N v = 16 2-leg and infinitely long (i.e. N h = ∞) cylinders. In the two leg ladder, we see that the divergence of the correlation length ξ for ∆ → ∞ results from the interplay between (i) a (moderate) increase of the range ξ b of the Hamiltonian H b and (ii) a linear increase with ∆ of the effective temperature scale β eff , therefore approaching the T eff → 0 limit when ∆ → ∞. This contrasts with the case of two dimensions (N v = N h = ∞) where the divergence of ξ occurs at finite effective temperature when H b becomes "sufficiently" long-range. It is however hazardous to fit the decay of the coefficients of H b to obtain its functional form at the phase transition. Finally, in Fig. 10(b) we have plotted the truncation error made by taking different D c in the limit N h → ∞, and, again around ∆ ≈ 0.006 the error increases. This is consistent with the expectation that as H b contains longer range interaction, the boundary density operator σ b requires a higher bond dimension to be described as a TN state. V. NUMERICAL RESULTS FOR ISING PEPS We now continue by considering the Ising PEPS introduced in [2]. They all have bond dimension D = 2 and exhibite the Z 2 -symmetry of the transverse Ising chain. 
They depend on a single parameter, θ ∈ [0, π/4]. For θ ∼ π/4 one has a state with all the spins pointing in the x direction, whereas for θ ∼ 0 the state is of the GHZ type (a superposition of all spins up and all down). In the thermodynamic limit (N v , N h → ∞), at θ ≈ 0.35 they feature a phase transition, displaying critical behavior, where the correlation functions decay as a power law. Thus, by changing θ we can investigate how the boundary Hamiltonian behaves as one approaches the critical point. As seen in Fig. 11, the entanglement spectrum of the 2-leg ladder is gapped for all θ values and resembles that of an Ising chain (equally spaced levels) with small quantum fluctuations revealed by the small dispersion of the bands. The effective inverse temperature, qualitatively given by the gap (or the spacing between the bands), decreases for increasing θ. The interaction length of the boundary Hamiltonian for the ladder is displayed in Fig. 12(a). The strength of the interactions decays exponentially with the distance for all values of θ. As we increase this angle, one only observes a decrease of the interaction length. Note that, as opposed to the AKLT models studied in the previous sections, d 1 is now nonzero, with a single Pauli operator σ x describing an effective transverse field in H b . Thus, that Hamiltonian is given by a transverse Ising chain in the non-critical region of parameters. We have also plotted the inverse correlation length ξ −1 as a function of θ in Fig. 13 (blue empty dots). While the correlation length increases as θ decreases, it only tends to infinity in the limit θ → 0, as it must be for a GHZ state. No signature of a phase transition is found otherwise. B. Thermodynamic limit and phase transitions We now move to the case of an infinitely long cylinder. As above, to grow the cylinder in the horizontal direction one considers rank-5 tensors, which here take the form A^m_{α1,α2,α3,α4} = a^m(α_1) a^m(α_2) a^m(α_3) a^m(α_4), and uses the same approximation scheme with 100 iterations as before. The parameters d n describing the boundary Hamiltonian H b behave very differently in the ladder and infinite cylinders, as shown in Fig. 12. While for the Ising PEPS ladder H b remains short-ranged with exponential decay of d n vs n, the infinite cylinder shows a transition towards long-range interactions, suggesting the existence of a phase transition. This is very similar to what occurred in the distorted AKLT model. As long as H b remains short-range, the density matrix ρ can be (qualitatively) mapped onto the thermal density matrix of an effective quantum Ising chain (including a "family" of transverse-like fields) and, therefore, no ordering is expected (at finite effective temperature). A phase transition, however, can appear when H b becomes long-ranged, as is the case for an infinitely long cylinder. This is evidenced by the behavior of the correlation lengths computed for the 2-leg and infinitely long (N h = ∞) cylinders and reported in Fig. 13. These correlation lengths are compared to the respective "emerging" length scales ξ b characterizing the decay of √d n with n. In the 2-leg ladder case, ξ b increases quite moderately when θ → 0 (ξ b ∼ 1), so that the divergence of the correlation length ξ in this limit is attributed solely to a vanishing of the effective temperature scale T eff . In contrast, as for the AKLT model, the phase transition in two dimensions occurs at finite (effective) temperature at the point where ξ b → ∞.
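Extracting the emerging length scale ξ_b quoted above amounts to a linear fit of log(d_{r+1}/d_2) versus r; a minimal sketch (with synthetic placeholder data rather than the actual d_n) is:

```python
# Sketch: fit d_{r+1}/d_2 ~ exp(-r/xi_b) to extract the interaction range xi_b
# of the boundary Hamiltonian. The data below are synthetic placeholders.
import numpy as np

r = np.arange(1, 8)
xi_true = 1.3
d_ratio = np.exp(-r / xi_true) * (1 + 0.02 * np.random.default_rng(2).normal(size=r.size))

slope, intercept = np.polyfit(r, np.log(d_ratio), 1)
xi_b = -1.0 / slope
print(f"fitted xi_b = {xi_b:.2f} (true value {xi_true})")
```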
In summary, these results evidence that whenever we approach a phase transition, the interaction length of the boundary Hamiltonian increases. VI. TOPOLOGICAL KITAEV CODE Let us finally consider systems with topological order. We will focus on Kitaev's code state [3]: It can be defined on a square lattice with spin-1 2 systems (qubits) on the vertices, with two types of terms in the Hamiltonian, (where X and Z are Pauli matrices), each of which acts on the four spins adjacent to a plaquette, and where the h X and h Z form a checkerboard pattern (see Fig. 14(a)). The ground state subspace of the code state can be represented by a PEPS with D = 2 [2]; a particularly convenient representation is obtained by taking 2 × 2 blocks of spins across h Z type plaquettes, and jointly describing Ising PEPS the spins in each block by one tensor of the form [26] A i1,2,i2,3,i3,4,i4,1 α1,α2,α3,α4 Here, i x,x+1 denotes the spin located between the bonds α x and α x+1 (numbered clockwise) as shown in Fig. 14(b). It can be checked straightforwardly that the resulting tensor network is an eigenstate of the Hamiltonians of Eq. (25). Excitations of the model correspond to violations of h X -terms (charges) or h Z -terms (fluxes), which always come in pairs [3]. We put the code state on a cylinder of N h × N v tensors (i.e., 2N h × 2N v sites), where we choose boundary conditions This yields a state which is also a ground state of h b Z = Z ⊗2 terms at the boundary, but not of the corresponding X ⊗2 boundary terms; in other words, charges (Pauli Z errors) can condense at the boundaries of the cylinder [27]. The full Hamiltonian-including the h b Z terms at the boundary-has a two-fold degenerate ground state which is topologically protected, and where the logical X and Z operators are a loop of Pauli X's around the cylinder and a string of Pauli Z's between its two ends (where they condense), respectively. To compute ρ , we start by considering the PEPS on the cylinder without the boundary conditions (27), i.e., with open virtual indices at both ends (labelled B and B ). Cutting the cylinder in the middle leaves us with with σ BL , the joint reduced density operator for the virtual spaces at the boundary, B (or B ), and the cut, L (or R). From (26), one can readily infer that the transfer operator for a single tensor is 1 1 ⊗4 + X ⊗4 , and thus, (28) where the two tensor factors correspond to the B (B ) and L (R) boundary, respectively. Imposing the boundary condition |χ θ )(χ θ |, Eq. (27), at B (B ), we find that (up to normalization) ρ ∝ (1 + sin 2 θ) 1 1 ⊗Nv + (2 sin θ) X ⊗Nv , which is the thermal state ρ ∝ exp[−β eff H ] of H = −sign(sin θ) X ⊗Nv at an effective inverse temperature The fact that H acts globally is a signature of the topological order, and comes from the fact that measuring an X loop operator gives a non-trivial outcome (namely sin θ). Note that the entropy S(ρ ) increases by one as 1/β eff goes from zero to infinity. This can be understood as creating an entangled pair of charges |vac + f (β eff ) |c, c * across the cut, thereby additionally entangling the two sides by at most an ebit, and subsequently condensing the charges at the boundaries. 
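The effective inverse temperature quoted above can be made explicit: since X^{⊗N_v} squares to the identity, exp(β X^{⊗N_v}) = cosh(β)·1 + sinh(β)·X^{⊗N_v}, and matching this against ρ_ℓ ∝ (1 + sin²θ)·1 + (2 sinθ)·X^{⊗N_v} gives tanh(β_eff) = 2|sinθ|/(1 + sin²θ). The sketch below verifies this numerically for a small N_v; the closed form is inferred here from the expressions quoted above rather than taken from the original derivation.

```python
# Sketch: for the code state on a cylinder, rho ∝ (1+sin^2 t) I + (2 sin t) X^{⊗Nv}
# is the thermal state of H = -sign(sin t) X^{⊗Nv}; extract beta_eff numerically
# and compare with tanh(beta_eff) = 2|sin t| / (1 + sin^2 t).
import numpy as np
from functools import reduce

Nv, theta = 3, 0.6
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Xall = reduce(np.kron, [X] * Nv)
Id = np.eye(2 ** Nv)

s = np.sin(theta)
rho = (1 + s**2) * Id + 2 * s * Xall
rho /= np.trace(rho)

# Eigenvalues of rho come in two values exp(+beta), exp(-beta) (up to norm),
# since the eigenvalues of X^{⊗Nv} are +/-1.
evals = np.linalg.eigvalsh(rho)
beta_num = 0.5 * np.log(evals.max() / evals.min())
beta_formula = np.arctanh(2 * abs(s) / (1 + s**2))
print(beta_num, beta_formula)   # the two values should agree
```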
Instead of considering σ L , one can also see the topological order by looking at σ BL : It is the zero-temperature state of a completely non-local Hamiltonian X ⊗Nv ⊗ X ⊗Nv which acts simultaneously on both boundaries in a maximally non-local way; this relates to the fact that the expectation values of any two X loop operators around the cylinder are correlated. By imposing boundary conditions at B, one arrives at ρ = sin θ |0)(0| ⊗Nv + cos θ |1)(1| ⊗Nv , which is the thermal state of of the classical Ising Hamiltonian for β → ∞. Thus, for the Ising model, ρ is described by a local Ising Hamiltonian, rather than a completely non-local interaction as for Kitaev's code state. The same holds true for σ BL , which is the ground state of a classical Ising model without field: while it has correlations between the two boundaries, they arise from a local (i.e., few-body) interaction coupling the two boundaries, rather than from terms acting on all sites on both boundaries together. Correspondingly, the long-range correlations in the Ising model can be already detected by measuring local observables, instead of topologically nontrivial loop operators as for Kitaev's code state. VII. CONCLUSIONS AND OUTLOOK In this paper, we have introduced a framework which allows to associate the bulk of a system with its boundary in the spirit of the holographic principle. To this end, we have employed the framework of Projected Entangled Pair States (PEPS) which provide a natural mapping between the bulk and the boundary, where the latter is given by the virtual degrees of freedom of the PEPS. This framework allows to map the state of any region to a Hamiltonian on its boundary, in such a way that the properties of the bulk system, such as entanglement spectrum or correlation length, are reflected in the properties of the Hamiltonian. Since our framework also identifies observables in the bulk with observables on the boundary, it establishes a general holographic principle for quantum lattice systems based on PEPS. In order to elucidate the connection between the bulk system and the boundary Hamiltonian, we have numerically studied the AKLT model and the Ising PEPS. We found that the Hamiltonian is local for systems in a gapped phase with local order, whereas a diverging interaction length of the Hamiltonian is observed when the system approaches a phase transition, and topological order is reflected in a Hamiltonian with fully non-local interactions; thus, the quantum phase of the bulk can be read off the properties of the boundary model. Our holographic mapping between the bulk and the boundary in the PEPS formalism has further implications. In particular, the contraction of PEPS in numerical simulations requires to approximate the boundary operator by one with a smaller bond dimension, which can be done efficiently if the boundary describes the thermal state of a local Hamiltonian, i.e., for non-critical systems. Also, since renormalization in the PEPS formalism requires to discard the degrees of freedom in the bond space with the least weight [25], the duality allows to understand real space renormalization in the bulk as Hamiltonian renormalization on the boundary. Our techniques can also be applied to systems in higher dimensions, and in fact to arbitrary graphs, to relate the boundary of a system with its bulk properties. The mapping applies to arbitrary regions in the lattice, such as simply connected (e.g., square) regions used for instance for the computation of topological entropies. 
Also, relating the bulk to the boundary using the PEPS description can be generalized beyond spin systems by considering fermionic or anyonic PEPS [28], as well as continous PEPS in the case of field theories [29,30]. Finally, when studying edge modes, the one-dimensional system which describes the physical boundary is given by a Matrix Product Operator acting on the virtual boundary state, and thus, the relation between bulk properties and the virtual boundary implies a relation between the properties of the bulk and its edge modes physics.
2011-03-17T14:53:23.000Z
2011-03-17T00:00:00.000
{ "year": 2011, "sha1": "94b6921087fe7cd7af90a4650017d4ea68816fd3", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.83.245134", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "964e7a702f3dae9d85e48cf67dd5a544e1fdb97e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258881434
pes2o/s2orc
v3-fos-license
WM–STGCN: A Novel Spatiotemporal Modeling Method for Parkinsonian Gait Recognition Parkinson’s disease (PD) is a neurodegenerative disorder that causes gait abnormalities. Early and accurate recognition of PD gait is crucial for effective treatment. Recently, deep learning techniques have shown promising results in PD gait analysis. However, most existing methods focus on severity estimation and frozen gait detection, while the recognition of Parkinsonian gait and normal gait from the forward video has not been reported. In this paper, we propose a novel spatiotemporal modeling method for PD gait recognition, named WM–STGCN, which utilizes a Weighted adjacency matrix with virtual connection and Multi-scale temporal convolution in a Spatiotemporal Graph Convolution Network. The weighted matrix enables different intensities to be assigned to different spatial features, including virtual connections, while the multi-scale temporal convolution helps to effectively capture the temporal features at different scales. Moreover, we employ various approaches to augment skeleton data. Experimental results show that our proposed method achieved the best accuracy of 87.1% and an F1 score of 92.85%, outperforming Long short-term memory (LSTM), K-nearest neighbors (KNN), Decision tree, AdaBoost, and ST–GCN models. Our proposed WM–STGCN provides an effective spatiotemporal modeling method for PD gait recognition that outperforms existing methods. It has the potential for clinical application in PD diagnosis and treatment. Introduction With the increase in the aging population, age-related cognitive disorders have become more prevalent in recent years. Parkinson's disease (PD), a common progressive degenerative disease of the central nervous system, is characterized by movement disorders such as muscle stiffness, hand tremor, and slow movement. Early detection of PD is crucial for timely treatment and proper medication. Gait is an important indicator of health status, and the detection of gait abnormalities can serve as an indication to obtain further medical assessment and treatment. Reference [1] observes that analyzing a patient's gait could be utilized as a clinical diagnostic tool to help doctors recognize two dementia subtypes, Alzheimer's disease (AD) and Lewy body disease (LBD). This study distinguished LBD and AD using four key gait features: step time variability, step length variability, step time asymmetry, and swing time asymmetry. Beauchet et al. [2] found that a high mean and coefficient of variation of stride length were characteristic of moderate dementia, while an increased coefficient of variation of stride duration was associated with mild cognitive impairment status. Mirelman A. et al. [3] studied the effect of Parkinson's disease on gait. They highlighted the gait features unique to Parkinson's disease. In the early stages of Parkinson's disease, patients have a slower gait and shorter stride length compared to healthy individuals. These gait changes are common in patients with Parkinson's disease but are not unique, as many diseases reduce gait speed. However, decreased arm swing and smoothness of movement and increased interlimb asymmetry are more specific to Parkinson's disease and are usually the first motor symptoms. Gait stiffness and staggering may also occur in later stages. Clinical gait assessment is a commonly used method for performing gait analysis, which is an assessment performed by a clinician. 
Specifically, the physician needs to observe the patient's walking performance and then give a score based on criteria of the Unified Parkinson's Disease Rating Scale (UPDRS) [4] and Simpson-Angus Scale (SAS) [5]. Moreover, utilizing different types of sensors is a popular method. For example, sensors are embedded in the shoe insoles to measure the pressure of the foot against the ground while walking [6]; inertial measurement units and goniometers are fixed to joints, such as the waist and elbow, to measure the walking speed and acceleration [7]. Moreover, some studies have proposed video-based methods [8][9][10]. For example, reflective markers are attached to diverse locations on the human body. The location and trajectory of the markers are analyzed to provide kinematic information by recording with a digital camera. The Vicon Vantage system [10] requires about 8-14 high-precision cameras to provide accurate 3D motion data for gait analysis. These existing gait analysis methods either require specialist assessment or particular sensors and equipment. It is too costly to deploy such systems. Furthermore, constructing a specific testing environment and training a team to calibrate the system and manage complex data necessitate substantial investment. To solve this issue, a convenient, low-cost, and clinically practical method is needed to recognize Parkinsonian gait. In clinical practice, Parkinson's disease screening, follow-up, regular examination, and evaluation of treatment efficacy can be performed in a way that is easily implemented in a clinical setting and is both feasible and effective for patients. With advancements in computer vision, advanced techniques, such as human pose estimation algorithms, have made remarkable progress. Pose estimation is a process that involves localizing a person's joints in an image or video, and it has been applied to vision-based gait analysis. Previous work on vision-based gait assessment explored the use of the Microsoft Kinect sensor, thus using the 3D joint position provided by the system to analyze Parkinson's disease gait [11,12]. However, due to the technical limitations of the Kinect depth sensor, 3D joint positions can only be accurately extracted when the participant is located between (0.5 and 4.5) meters from the sensor, which limits the scenarios that can be widely used [13,14]. Recently, there has been an upsurge of interest amongst researchers in conducting gait analysis on conventional color video, which eliminates the requirement for depth sensors and enables the analysis of whole walking durations using a solitary camera. The emergence of novel computer vision techniques and machine learning algorithms has enabled more robust and automated analysis of video data captured by consumer-grade devices. In particular, advanced human pose estimation libraries, such as OpenPose, Detectron, and AlphaPose, have demonstrated their proficiency in extracting precise 2D joint pixel coordinates from video recordings [15][16][17]. Prior research has investigated the utilization of 2D joint trajectories to compute domain-specific features for the identification of Parkinsonian gait and dyskinesia rating from color videos, as highlighted in Refs. [18][19][20][21]. Moreover, the study conducted by Lu et al. [22] delved into the utilization of 3D joint trajectories extracted from video for predicting gait scores related to Parkinson's syndrome. Model training in deep learning requires an extensive amount of data. 
However, there are various restrictions on medical sample acquisition: video collection is restricted by laws and patient privacy, while clinicians are not sufficiently motivated to record patients walking data. The lack of data hinders the application of deep learning. An alternative approach to obtaining real data is to generate synthetic data [23,24]. For example, random noise can be added to existing data, thus extending the available real data and training deep learning models [25]. Hence, data augmentation may be a valuable tool to overcome the inaccessibility of real data in the medical field [26]. Moreover, the input data in the spatial domain is skeletal data, which can be represented in graphical form, while convolution functions on the time axis can be used to capture temporal features such as joint dynamics (frequency, velocity). Naturally, the spatiotemporal graph convolutional network (ST-GCN) [27] is a well-suited model, as it leverages the inherent graph structure of human skeletons, providing an efficient mechanism for learning directly from joint trajectories. The advantage is that it is no longer necessary to develop and compute engineered gait features from joint trajectories, as ST-GCN can learn to utilize the most significant aspects of gait patterns directly from joint trajectories. ST-GCNs have been effectively combined with human pose estimation libraries to score Parkinsonian leg agility [28]. However, the use of these models to recognize Parkinsonian gait directly on a forward video remains unexplored. In this paper, we hypothesize that Parkinson's patients have unique gait features that reflect disease-specific cognitive features and underlying pathology. We focus on developing a novel video-based Parkinsonian gait recognition method, using the skeleton and joint location from pose estimation to extract gait features and detect PD gait. The correct identification of brain damage diseases is very useful for clinicians to design appropriate treatment methods. The present work offers major contributions in three aspects: (1) We propose to use a novel spatiotemporal modeling method based on skeleton data to recognize Parkinsonian gait; in addition, we construct a graph neural network to capture the topological properties of the human skeleton; (2) We design the weighted matrix with virtual connections to meet the specific demands in gait skeleton modeling and propose a multi-scale temporal convolution network to improve the temporal aggregation capability; and (3) An experiment on the dataset shows that compared to other machine learning methods, the proposed model achieves superior performance. Related Work This section provides a review of related works from two perspectives: gait patterns analysis and Parkinson's gait analysis using machine learning. Gait Patterns Analysis In the gait analysis domain, two main data modalities are commonly employed: sensorbased and vision-based approaches. The promising performance of sensors has drawn interest in their application to gait analysis. Lou et al. [29] developed an in-shoe wireless plantar pressure measurement system with a flexible pressure sensor embedded to capture plantar pressure distribution for quantitative gait analysis. Camps et al. [30] proposed to detect the freezing of gait in Parkinson's disease patients by using a waist-worn inertial measurement unit (IMU). Seifert et al. [31] used radar micro-Doppler signatures to classify different walking styles. 
Although the sensor-based approach has demonstrated the ability to reflect human kinematics, the need for specific sensors or devices and their requirement to be worn on the human body have limited their convenience in some applications. The vision-based approaches are more convenient and only require cameras for data collection. Prakash et al. [32] utilized an RGB camera to capture joint coordinates from five reflective markers attached to the body during walking, while Seifallahi et al. [33] employed a markerless system using Kinect cameras to capture RGB-D data to detect Alzheimer's disease from gait. Recently, skeleton data have become a popular choice in gait analysis. Some studies have utilized the Microsoft Kinect camera and its camera SDK to generate 3D skeleton data. For example, Nguyen et al. [34] proposed an approach to predict the gait abnormality index by using the joint coordinates of the 3D skeleton as inputs for auto-encoders and then distinguishing abnormal gaits based on reconstruction errors. Elsewhere, Jun et al. [35] proposed a two-recurrent neural network-based autoencoder to extract features from 3D skeleton data for abnormal gait recognition and assessed the performance of discriminative Sensors 2023, 23, 4980 4 of 20 models with these features. In our study, we propose to extract gait features using the skeleton and joint locations obtained from pose estimation. Parkinson's Gait Analysis Using Machine Learning Researchers have experimented with data collected by various sensors for Parkinson's disease gait analysis. Shalin et al. [36] utilized LSTM to detect freezing of gait (FOG) in PD from plantar pressure data. The experiment required participants with PD to wear pressure-sensitive insole sensors while walking a predefined, provoking path. Labeling was then performed, and 16 features were manually extracted. The best FOG detection model had an average sensitivity of 82.1% and an average specificity of 89.5%. However, these particular sensors and devices are too costly to deploy. In addition, they need to be operated on in a specific place under the guidance of a professional doctor. Due to the advances in action recognition [27,[37][38][39][40][41], a growing number of researchers have applied it to gait recognition [42][43][44], and several studies have used video-based methods to automatically analyze dyskinesia symptoms in PD patients. Mandy Lu et al. [21] proposed a novel temporal convolutional neural network model to assess PD severity from gait videos, which extracts the 3D body skeleton of the participant and estimates the MDS-UPDRS score. Li et al. [20] extracted human joint sequences from videos recorded by PD patients and calculated motion features using a pose estimation method. Then, they applied random forest for multiclass classification and assessed clinical scores based on the UPDRS and Unified Dyskinesia Rating Scale (UDysRS) [45]. Sabo et al. [19] proposed the utilization of a spatiotemporal graph convolutional network (ST-GCN) architecture and training procedure to predict clinical scores of Parkinson's disease gait from videos of dementia patients. K. Hu et al. [46] proposed a graph convolutional neural network architecture that represents each video as a directed graph to detect PD frozen gait. The experimental results based on the analysis of over 100 videos collected from 45 patients during clinical evaluation have indicated that the proposed method performs well, achieving an AUC of 0.887. 
Based on our literature survey, although several studies have evaluated gait videos of Parkinsonian patients, their focus has primarily been on estimating Parkinson's severity and detecting frozen gait, while recognizing PD gait versus normal gait from the forward video has yet to be reported. Additionally, traditional engineering solutions have proven insufficient to accurately assess motor function based on videos. To address this limitation, we have developed a novel deep-learning-based framework to extract skeletal sequence features from forward videos of PD patients, with the ultimate goal of recognizing Parkinson's gait. Materials and Methods This section explains our dataset and how the data were preprocessed, and then describes the model. Figure 1 shows our methodology framework. Our method consists of two phases: feature extraction and gait recognition. First, we augmented the videos and then used OpenPose to extract skeleton data; in addition, we augmented the joint coordinate space. Second, the skeleton data were constructed into a spatiotemporal graph and input to WM-STGCN, and the information in both temporal and spatial dimensions was aggregated by the spatiotemporal graph convolution operation to perform Parkinsonian gait recognition. Dataset We collected the normal walking videos in an enclosed room. The wall color was white, with no other colors. The space was 8 m long and 3 m wide, so that the cameras could be positioned. Figure 2 shows the data collection environment. We used two Samsung mobile phones as our recording devices. The video parameters were 1080 × 1920 pixels at 30 Hz. As depicted in Figure 3, the cameras were placed facing the patient's walking direction. Participants wore their own comfortable clothes (recommended wear: pants and a sweatshirt or T-shirt) and walked straight from beginning to end, then turned around and walked back.
During the walk, participants walked at a normal speed, and each sequence lasted approximately 10 to 20 seconds. After that, we processed the data to make sure the content contained only frontal-view walking. Table 1 lists the collected data details. We obtained six videos from YouTube for the Parkinsonian walking data [47][48][49][50][51][52]. To ensure clarity, their resolution was at least 652 × 894 pixels, and the frame rate was 30 fps. The video clips of a Parkinson's patient walking toward the camera without the assistance of others were selected as the data used in our study. Data Augmentation The difficulty in obtaining videos of PD patients walking resulted in a low amount of data. To reduce the class imbalance, we needed to perform data augmentation. Additionally, augmentation can increase the generalization capability of the system. There are two approaches: video augmentation and joint coordinate space augmentation; Figure 4 shows the augmentation pipeline. We first used temporal partition to crop the original videos, then flipped the videos horizontally. After extracting skeleton data, we performed joint coordinate space augmentation by translating the coordinates and adding Gaussian noise. Video Augmentation In the video augmentation field, temporal partition and horizontal flipping are two effective tools to augment video data. We used temporal cropping to implement partition: each video sequence of length l was temporally cropped to a fixed new sequence length k, where k = 90 frames, as shown in Figure 5. This allowed a video sequence to be partitioned with an interval of 20 frames. For horizontal flipping, we flipped the entire video to obtain a new video sequence.
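As an illustration of this video augmentation step, the minimal sketch below crops overlapping 90-frame windows with a 20-frame stride and produces horizontally flipped copies. It assumes the frames are already loaded as a list of OpenCV images; the function names (temporal_partition, augment_video) are illustrative and not taken from the paper's implementation.

```python
import cv2
from typing import List

def temporal_partition(frames: List, k: int = 90, stride: int = 20) -> List[List]:
    """Crop a frame sequence into fixed-length windows of k frames, stepping by `stride`."""
    clips = []
    for start in range(0, len(frames) - k + 1, stride):
        clips.append(frames[start:start + k])
    return clips

def augment_video(frames: List) -> List[List]:
    """Return the temporally cropped clips plus their horizontally flipped copies."""
    clips = temporal_partition(frames)
    flipped = [[cv2.flip(f, 1) for f in clip] for clip in clips]  # 1 = flip around the vertical axis
    return clips + flipped
```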
Joint Coordinate Space Augmentation After we extracted skeleton data from the videos, a natural idea to augment the data is to operate directly on the joint coordinate space. The skeleton data are stored as a dictionary data structure (JSON format files) to allow key and value search to modify the joint values. We performed the coordinate space augmentation in the following two ways: 1. Joint coordinates were translated in the horizontal direction to a new position to allow a change in the viewing angle. As shown in Figure 6a, we set the offset ∆ = (−0.1, 0.15, 0.2), which means we translated the coordinates of the skeleton data by ∆. 2. Gaussian noise was added to the joint coordinates. Figure 6b shows that the addition of appropriate noise perturbs the skeletal data within a certain range, which allows for errors in joint coordinate calculation, for example, interference from the environment such as background color or cloth texture. We set three Gaussian parameter groups for the experiment, ϕ(µ, σ) with µ = 0 and σ = (0.01, 0.05, 0.1).
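The two coordinate-space augmentations just described can be sketched as follows. This is a minimal illustration assuming each sequence's keypoints are stored as an array of (x, y, confidence) triples in normalized coordinates, as in OpenPose JSON output; the helper names and the exact array layout are assumptions, not the paper's code.

```python
import numpy as np

def translate_horizontal(keypoints: np.ndarray, delta: float) -> np.ndarray:
    """Shift all x coordinates by `delta` (e.g. -0.1, 0.15 or 0.2) to mimic a viewpoint change.
    `keypoints` has shape (T, V, 3) holding (x, y, confidence) per joint per frame."""
    out = keypoints.copy()
    out[..., 0] += delta
    return out

def add_gaussian_noise(keypoints: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """Perturb the (x, y) coordinates with zero-mean Gaussian noise of std `sigma`
    (0.01, 0.05 or 0.1 in the experiments); confidence scores are left untouched."""
    rng = np.random.default_rng() if rng is None else rng
    out = keypoints.copy()
    out[..., :2] += rng.normal(0.0, sigma, size=out[..., :2].shape)
    return out
```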
Skeleton Data Extraction The video sequences are processed to extract 2D skeleton features, where each frame is analyzed using OpenPose, owing to its proficient and robust detection capabilities for 2D joint landmarks in upright individuals. We extract 25 landmarks in the OpenPose skeleton format, which encompass 2D coordinate values (x, y) and an associated confidence score c that indicates the level of estimation reliability. The key points roughly correspond to body parts, indexed 0 to 24. To obtain sequential key-point coordinate data for each gait sequence, we performed 2D real-time 25-key-point body estimation on every image using OpenPose. Figure 7 illustrates the resulting skeleton sequence for a typical normal participant. Graph Structure Construction To construct a spatiotemporal graph structure from a sequence comprising N nodes and T frames [27], we employed a pose graph G = (V, E). The node set V = {v_i^t | t = 1, …, T; i = 1, …, N} denotes the joint positions, where v_i^t represents the i-th joint at the t-th frame. The feature vector of v_i^t consists of the two-dimensional coordinates of this joint and the confidence score. The edge set E includes: (a) the intra-skeleton connections, which connect the nodes of each frame according to the connections of human joints; these edges form the spatial edges, and Figure 8a shows that we notate them as E_s = {v_i^t v_j^t | (i, j) ∈ H}, where H is the set of naturally connected human joints. (b) The inter-frame connections that connect the same joints (nodes) in two consecutive frames; these edges form the temporal edges, and Figure 8b shows that we notate them as {v_i^t v_i^(t+1)}.
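To make the graph construction concrete, the sketch below assembles the V × V spatial adjacency matrix from a list of naturally connected joint pairs and stacks the per-frame (x, y, c) features into a T × V × 3 tensor. The edge list shown is only a truncated placeholder; the actual set H follows the 25-joint OpenPose skeleton, which is not reproduced here.

```python
import numpy as np

# Placeholder subset of naturally connected joint pairs (i, j); the real set H
# covers all limbs of the 25-joint OpenPose skeleton.
H = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7)]

def build_spatial_adjacency(num_joints: int = 25, edges=H) -> np.ndarray:
    """Symmetric binary adjacency A for the intra-skeleton (spatial) edges E_s."""
    A = np.zeros((num_joints, num_joints), dtype=np.float32)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

def stack_features(frames_keypoints) -> np.ndarray:
    """Stack per-frame OpenPose keypoints into an array of shape (T, V, 3) = (x, y, c).
    Temporal edges are implicit: joint i in frame t is linked to joint i in frame t+1."""
    return np.stack([np.asarray(kp, dtype=np.float32) for kp in frames_keypoints], axis=0)
```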
WM-STGCN 3.4.1. WM-STGCN Structure Figure 9 shows the proposed WM-STGCN model architecture, which takes a sequence of human joint coordinates extracted from gait videos as input and predicts the gait category. Figure 9a provides an overall depiction of the proposed structure, whereas Figure 9b depicts the spatial module, and Figure 9c shows the temporal module. The whole network comprises N GCN blocks (N = 9), with output channels of 64, 64, 64, 128, 128, 128, 256, 256, and 256, respectively. A global average pooling layer is added to the back end of the network, and the final output is sent to a SoftMax classifier to obtain the ultimate prediction result. To ensure training stability, residual connections are included in each basic block. Each GCN block F comprises a spatial module G and a temporal module T. The spatial module G combines the features of different joints using sparse matrices derived from the adjacency matrix A, as illustrated in Figure 10a. The output of G is subsequently processed by T to extract temporal features. The computations of F can be summarized as follows: F(X) = T(G(X, A)) + X. Figure 10b illustrates the input feature map of the first GCN block, wherein a skeleton feature X ∈ R^(T×V×C) is given as input, where T denotes the temporal length, V represents the number of skeleton joints, and C signifies the number of channels. Notably, the channel number C input to the first GCN block equals 3.
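The residual composition F(X) = T(G(X, A)) + X can be written as a small PyTorch wrapper. The sketch below is illustrative and assumes that SpatialGraphConv and MultiScaleTCN modules (corresponding to G and T, sketched further below) keep the channel count unchanged; the original blocks also change channel width and use a 1 × 1 residual projection, which is omitted here for brevity.

```python
import torch
import torch.nn as nn

class GCNBlock(nn.Module):
    """One WM-STGCN-style block: spatial graph convolution G, temporal module T, residual skip."""
    def __init__(self, spatial_module: nn.Module, temporal_module: nn.Module):
        super().__init__()
        self.g = spatial_module   # operates over the joint dimension using adjacency A
        self.t = temporal_module  # operates over the frame dimension

    def forward(self, x: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, T frames, V joints); A: stack of adjacency submatrices
        return self.t(self.g(x, A)) + x  # F(X) = T(G(X, A)) + X
```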
Spatial Module G: Graph Convolution in the Spatial Domain In the spatial domain, the convolution of the graph on a certain node v_i is defined as follows: f_out(v_i) = Σ_{v_j ∈ B_i} (1/Z) f_in(v_j) w(l_i(v_j)) (2), where f_in and f_out represent the input and output feature maps, respectively; v_i represents a particular node in the spatial dimension; B_i represents the sampling area for the convolution of that node (in this work, B_i is the 1-neighbor set of v_i); Z is the normalizing term, which equals the cardinality of the corresponding subset; and w represents the weight function that provides the weight matrix. We divided the neighborhood B into three subsets of self-connection, physical connection, and virtual connection, and a different label can be assigned to each subset. We discuss the virtual connection in Section 3.4.3. Here, l_i is a mapping function l_i(v_j) → {0, …, K} (K = 3), which maps a node in the neighborhood to its subset label. Figure 11a shows a graph of the input skeleton sequence, where x_1 represents the root node itself (orange), x_2 represents a physically connected node (blue), and x_3 represents a virtually connected node (green). We use node 1 as the root node of this convolutional computation to explain the mapping strategy. Nodes 2, 4, 9 are its sampled neighboring nodes, which form the neighborhood B, where node 9 provides a virtual connection. Accordingly, as shown in Figure 11b, the adjacency matrix is divided into three submatrices A_k, ensuring that A = Σ_k A_k, k = 1, 2, 3. Simplifying and transforming Equation (2), the spatial graph convolution can be implemented using the following: f_out = Σ_k Λ_k^(−1/2) A_k Λ_k^(−1/2) f_in W_k (3), where k in Equation (3) indexes the convolution kernels, whose number is 3 according to the mapping strategy; A_k is an N × N normalized adjacency matrix; Λ_k^(−1/2) is a normalized diagonal matrix; and W_k is a 1 × 1 convolution operation, which represents the weight function in Equation (2). In the spatial domain, the input is represented as G_in ∈ R^(T×V×C_in); upon applying the spatial graph convolution, the resulting output feature map is denoted G_out ∈ R^(T×V×C_out).
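A minimal PyTorch sketch of Equation (3) is given below: each partition's adjacency submatrix is degree-normalized, features are aggregated over joints with an einsum, and a 1 × 1 convolution plays the role of W_k. This is an illustration of the standard ST-GCN-style implementation, not the authors' code; the tensor layout and variable names are assumptions.

```python
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    """f_out = sum_k  Lambda_k^(-1/2) A_k Lambda_k^(-1/2) f_in W_k  over K partitions."""
    def __init__(self, in_channels: int, out_channels: int, num_subsets: int = 3):
        super().__init__()
        self.num_subsets = num_subsets
        # One 1x1 convolution produces the features for all K subsets at once.
        self.conv = nn.Conv2d(in_channels, out_channels * num_subsets, kernel_size=1)

    @staticmethod
    def normalize(A: torch.Tensor) -> torch.Tensor:
        # Symmetric degree normalization of one V x V submatrix.
        deg = A.sum(dim=-1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    def forward(self, x: torch.Tensor, A_subsets: torch.Tensor) -> torch.Tensor:
        # x: (batch, C_in, T, V); A_subsets: (K, V, V) partitioned adjacency submatrices.
        n, _, t, v = x.shape
        feats = self.conv(x).view(n, self.num_subsets, -1, t, v)        # (N, K, C_out, T, V)
        A_norm = torch.stack([self.normalize(a) for a in A_subsets])    # (K, V, V)
        # Aggregate neighbor features within each subset and sum over the K subsets.
        return torch.einsum('nkctv,kvw->nctw', feats, A_norm)
```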
Weighted Adjacency Matrix with Virtual Connection The spatial structure of the skeleton is represented by an artificial, predefined adjacency matrix, which encodes the a priori knowledge of the connections of the human skeleton. However, it cannot generate new connections between non-adjacent joints during training, which means that the learning ability of the graph convolutional network is limited and that such an adjacency matrix is not an optimal choice. To address the above problems, we design a novel adjacency matrix, which has the following two features. Virtual connection: we combined some unique features of Parkinson's gait compared to normal gait (including the small amplitude of arm swing, the fast frequency and small stride length of foot movement, and random little steps) and introduced some virtual connections, i.e., unnaturally connected joints. Weighted adjacency matrix: we multiplied the original adjacency matrix by scalars to obtain a new adjacency matrix, which gives distinct kinds of connections different weights. With these new designs, we make it possible to generate connections between non-adjacent joints and to give different weights to physical connections, virtual connections, and self-connections. We design a new adjacency matrix and obtain a skeletal space structure that is more suitable for describing the Parkinson samples, thus enabling better gait recognition. Specifically, a_ij is a scalar: a_ii = α for the self-connection of each joint, a_ij = β if joints i and j are physically connected, and a_ij = γ if they are virtually connected. If we set the value of a_ii = 0, this indicates that we eliminate the self-connection of each joint. Additionally, we distinguish between physical and virtual dependencies between joints.
The physical dependency, represented by β and depicted as blue solid lines in Figure 12a, captures the intrinsic connection between joints. The virtual dependency, depicted as orange dashed lines in Figure 12a, represents the extrinsic connection between two joints, which is also crucial for gait recognition. We use the parameter γ to model this virtual relationship. For example, although the left hip and left hand are physically disconnected, their relationship is essential in identifying Parkinsonian gait. After adding weights, the graph convolution formula in the spatial dimension is transformed from Equation (3) by replacing each submatrix A_k with its weighted counterpart, in which every entry carries the weight α, β, or γ corresponding to its connection type. Figure 12b shows the process of weight addition, where the adjacency matrix of each layer consists of A_k and the weight a together, k denotes the number of subsets, and the dashed line indicates that the residual convolution operation is required only when C_in is different from C_out. For the experiment, we tested 4 cases: ① α = 1, β = 1, γ = 0; ② α = 1, β = 1, γ = 0.5; ③ α = 0, β = 1, γ = 0.5; ④ α = 0.2, β = 1, γ = 0.5. This means that we tested the performance of the model with self-connection, with a 0.5-weight virtual connection, without self-connection, and with a 0.2-weight self-connection plus a 0.5-weight virtual connection.
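The weighted adjacency with virtual connections can be sketched as below: the three submatrices (self, physical, virtual) are scaled by α, β and γ before being passed to the spatial graph convolution. The list of virtual joint pairs shown is an illustrative placeholder (e.g. a hip-to-wrist link), not the exact set used in the paper.

```python
import numpy as np

def build_weighted_subsets(A_physical: np.ndarray,
                           virtual_pairs,
                           alpha: float = 0.2,
                           beta: float = 1.0,
                           gamma: float = 0.5) -> np.ndarray:
    """Return a stack of shape (3, V, V) of weighted submatrices: self, physical, virtual."""
    v = A_physical.shape[0]
    A_self = alpha * np.eye(v, dtype=np.float32)        # a_ii = alpha
    A_phys = beta * A_physical                           # a_ij = beta on natural bones
    A_virt = np.zeros((v, v), dtype=np.float32)
    for i, j in virtual_pairs:                           # a_ij = gamma on virtual links
        A_virt[i, j] = A_virt[j, i] = gamma
    return np.stack([A_self, A_phys, A_virt])

# Example with a hypothetical virtual left-hip <-> left-wrist link (indices are placeholders):
# A_subsets = build_weighted_subsets(build_spatial_adjacency(), virtual_pairs=[(7, 12)])
```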
Temporal Module T: Graph Convolution in the Temporal Domain G captures the spatial dependencies between adjacent joints, and to model the temporal changes of these features, we employed a multi-scale temporal convolution network (MS-TCN). Unlike many existing works that employ temporal convolution networks with a fixed kernel size k_t × 1 throughout the architecture, we designed an MS-TCN, as shown in Figure 14, to promote flexibility and temporal modeling capability by using multi-group convolution. The adopted multi-scale TCN contains five branches: a 1 × 1 convolution branch, a max-pooling branch, and three temporal convolutions with kernel size 5 and dilations from 1 to 3. Every branch contains a 1 × 1 convolution, which is used to reduce the channel dimension before the expensive 3 × 1 convolution. Additionally, the 1 × 1 convolution introduces additional nonlinearity via a nonlinear activation function, thereby increasing the network's complexity and enabling it to be deeper. The output continues to be fed into the spatial graph convolution of the next block, as shown in Figure 9, and it is fed into the fully connected layer only in the last GCN block. The MS-TCN enhances the vanilla temporal convolution layer's receptive field and improves the temporal aggregation capability. At the same time, it reduces computational cost and parameters through the reduced channel width of each branch.
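A rough PyTorch sketch of the multi-scale temporal module is given below: a 1 × 1 branch, a max-pooling branch, and three dilated temporal convolutions, each preceded by a channel-reducing 1 × 1 convolution, concatenated along the channel axis. The kernel sizes, padding, equal channel split, and final projection are assumptions made for illustration; the paper's exact branch widths, normalization, and activations may differ.

```python
import torch
import torch.nn as nn

class MultiScaleTCN(nn.Module):
    """Five-branch temporal module acting on tensors of shape (N, C, T, V)."""
    def __init__(self, channels: int, kernel_size: int = 5, dilations=(1, 2, 3)):
        super().__init__()
        branch_c = channels // 5  # assumed equal split across the five branches
        def reduce():  # channel-reducing 1x1 convolution followed by a nonlinearity
            return nn.Sequential(nn.Conv2d(channels, branch_c, 1), nn.ReLU(inplace=True))
        self.branches = nn.ModuleList()
        self.branches.append(reduce())  # plain 1x1 branch
        self.branches.append(nn.Sequential(reduce(), nn.MaxPool2d((3, 1), stride=1, padding=(1, 0))))
        for d in dilations:  # dilated temporal convolutions over the frame axis only
            pad = (kernel_size - 1) * d // 2
            self.branches.append(nn.Sequential(
                reduce(),
                nn.Conv2d(branch_c, branch_c, (kernel_size, 1), padding=(pad, 0), dilation=(d, 1))))
        self.project = nn.Conv2d(5 * branch_c, channels, 1)  # back to the block's channel width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```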
Implementation Details We used an NVIDIA GeForce RTX 2080Ti GPU with 12 GB memory and an Intel(R) Core(TM) i9-10900 CPU at 2.80 GHz with 64 GB RAM to build the deep learning framework using PyTorch in a Windows 10 environment. We used CUDA, Cudnn, OpenCV, and other required libraries to train and test the Parkinsonian gait recognition model. The batch size during training and testing was 16. The base learning rate was 0.1. We chose SGD as the optimizer, with steps at [20,30,40,50]. Following data preprocessing, we obtained 160 normal samples and 150 Parkinsonian samples. We split our dataset into a training set and a test set with a ratio of 80% to 20%, respectively. The test set comprised 32 normal samples and 30 Parkinsonian samples. Evaluation Metric In this study, we defined Parkinsonian gait samples as positive and normal gait samples as negative. We utilized the widely accepted counts of True Positive (TP), False Negative (FN), False Positive (FP), and True Negative (TN) to classify samples into these categories. To evaluate the performance of our method, we selected accuracy, precision, sensitivity, specificity, false alarm, miss rate, and F1 score as our evaluation metrics. A higher value for accuracy, precision, sensitivity, specificity, and F1 score indicates better model performance. In contrast, a smaller value for false alarm and miss rate indicates better performance. Accuracy reflects the ability of the model to correctly judge the overall sample, i.e., the ability to correctly classify Parkinsonian samples as positive and normal samples as negative. Precision reflects the ability of the model to correctly predict the positive samples, i.e., how many of the predicted Parkinsonian samples are true Parkinsonian samples. Sensitivity is defined as the proportion of Parkinsonian samples predicted to be Parkinsonian samples to the total number of Parkinsonian samples. Specificity reflects the proportion of normal samples that are predicted as normal samples to the total normal samples. False alarm, also known as the false positive rate or false detection rate, is obtained by calculating the proportion of normal samples predicted as Parkinsonian samples to the total normal samples. Miss rate is obtained by calculating the proportion of Parkinsonian samples that are predicted as normal samples to the total Parkinsonian samples: miss rate = FNR = FN / (TP + FN) (12). Furthermore, the F1 score is widely used in model evaluation. This is the harmonic mean of precision and recall, which can reflect the performance of the model in a balanced way.
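The evaluation metrics above reduce to simple ratios over the confusion-matrix counts. The following short sketch (with Parkinsonian gait treated as the positive class, as in the paper) computes them from TP, FN, FP, and TN; the function name is illustrative.

```python
def gait_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Confusion-matrix metrics with Parkinsonian gait as the positive class."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall / true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0
    false_alarm = fp / (tn + fp) if tn + fp else 0.0   # false positive rate
    miss_rate   = fn / (tp + fn) if tp + fn else 0.0   # FNR, Equation (12)
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return dict(accuracy=accuracy, precision=precision, sensitivity=sensitivity,
                specificity=specificity, false_alarm=false_alarm,
                miss_rate=miss_rate, f1=f1)
```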
Results and Discussion We experimented with different parameters of the Gaussian noise augmentation, with µ = 0 and σ = (0.01, 0.05, 0.1). In Table 2 and Figure 15, the experimental results show that the model had the highest accuracy of 85.48% for σ = 0.1. Although the precision was 4.87% lower compared to the group with σ = 0.01, the sensitivity increased from 60% to 80%, improving the performance of predicting Parkinsonian samples as positive, which was the best of the three experimental groups. Meanwhile, the miss rate was only 20%, which was much lower than the 40% at σ = 0.05. Overall, the model showed the best performance for detecting Parkinson's samples at σ = 0.1. Figure 16 shows the accuracy during training for the several groups of Gaussian noise. For the different weight adjacencies, we tested four cases. When α = 1, β = 1, γ = 0, the matrix is the original one containing only self-connections and physical connections. In Table 3, the experimental results showed that the accuracy reached 72.58%, and the recognition miss rate of Parkinson's gait was 46.67%, the highest among the four groups. When adding γ = 0.5, i.e., virtual connections with a weight of 0.5, we found that although the accuracy decreased slightly from 72.58% to 70.97%, the sensitivity improved and the miss rate dropped. After removing the self-connection, we found that the accuracy increased by 14.51% and the sensitivity increased by 23.33%, while the miss rate decreased from 43.33% to 20%. This indicates that removing the effect of joint self-connection aids the correct recognition of gait. Finally, we achieved the best results with a 0.2 weight on the joint self-connections and a 0.5 weight on the virtual connections, where the accuracy was 87.10%, the sensitivity was 86.67%, and the miss rate was the smallest, at 13.33%. Figures 17a and 17b show the confusion matrix and the loss function, respectively. Through our experiments, our best result showed an accuracy of 87.10%. Table 4 compares the performance with other well-known machine learning models: LSTM, KNN, Decision Tree, AdaBoost, and ST-GCN. In particular, LSTM-layer1 means a one-layer network, layer2 means a two-layer network, and the weak learner in the AdaBoost classifier is 50 decision trees of depth 1. We conducted an analysis to investigate the superior performance of WM-STGCN in comparison to other models based on the following factors. The first factor is the utilization of a weighted adjacency matrix with virtual connections. The weighted adjacency matrix with virtual connections plays a crucial role in WM-STGCN. While an adjacency matrix without weights can be used to represent adjacency information, a weighted adjacency matrix allows for a more sophisticated representation of that information. Moreover, weights can reflect the structure of the graph in a more granular way, for example, by adjusting weights based on the connection types to emphasize relationships with physical connections or virtual connections. Therefore, using a weighted adjacency matrix enables WM-STGCN models to reflect more detailed graph structures and make better predictions. The second factor is the integration of a multi-scale temporal convolutional network.
The multi-scale temporal convolutional network used in this study can enhance the receptive field of the temporal convolution, improve the temporal aggregation ability, and extract features at various time intervals. At the same time, it can reduce the computational cost and parameters by reducing the channel width of each branch. Finally, we use a separately designed data augmentation method for both the raw video and the skeletal data, which also effectively improves the performance of the model. These advantages enable effective recognition of Parkinson's disease from gait data. However, there are also some shortcomings. For example, due to equipment limitations, we focused on RGB color video of the front view, but users cannot be guaranteed to record high-quality video in practice, which will affect the recognition accuracy. At the same time, our model's performance could be further improved by using multi-modal analysis methods, such as adding sensor data. In the future, our WM-STGCN model is expected to be applied to research on gait-related diseases in the elderly, including not only Parkinson's disease but also dementia, stroke, and other related conditions. Conclusions In this paper, we proposed a novel spatiotemporal modeling approach, known as WM-STGCN, which employs a weighted adjacency matrix with virtual connections and multi-scale temporal convolutional networks to recognize Parkinsonian gait from forward walking videos. Our experimental results demonstrated the effectiveness of the proposed method, which outperformed machine learning-based methods such as LSTM, KNN, Decision Tree, AdaBoost, and ST-GCN. This method could provide a promising solution for PD gait recognition, which is crucial for the early and accurate diagnosis of PD. We believe that our method can be further improved by integrating it with other advanced deep learning techniques and can be extended to the fields of healthcare and biomedicine. Informed Consent Statement: Informed consent was obtained from all normal subjects involved in the study. Patient informed consent was waived because the data were obtained from online public records. Data Availability Statement: The data presented in this study are available on reasonable request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
2023-05-25T15:15:19.771Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "e81975d10b1698fc94688013325c4e43eeb23786", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/s23104980", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f3617ded44eebf5405392faece0f0a091d5a3566", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
1561136
pes2o/s2orc
v3-fos-license
Spatial localization of co-regulated genes exceeds genomic gene clustering in the Saccharomyces cerevisiae genome While it has been long recognized that genes are not randomly positioned along the genome, the degree to which its 3D structure influences the arrangement of genes has remained elusive. In particular, several lines of evidence suggest that actively transcribed genes are spatially co-localized, forming transcription factories; however, a generalized systematic test has hitherto not been described. Here we reveal transcription factories using a rigorous definition of genomic structure based on Saccharomyces cerevisiae chromosome conformation capture data, coupled with an experimental design controlling for the primary gene order. We develop a data-driven method for the interpolation and the embedding of such datasets and introduce statistics that enable the comparison of the spatial and genomic densities of genes. Combining these, we report evidence that co-regulated genes are clustered in space, beyond their observed clustering in the context of gene order along the genome and show this phenomenon is significant for 64 out of 117 transcription factors. Furthermore, we show that those transcription factors with high spatially co-localized targets are expressed higher than those whose targets are not spatially clustered. Collectively, our results support the notion that, at a given time, the physical density of genes is intimately related to regulatory activity. INTRODUCTION The cell's regulatory state is, to a large extent, reflected by the particular conformation that the genome assumes at any given particular instance (1)(2)(3)(4)(5). This has been observed at the level of pairs of genes whose proximity in the nucleus is dependent on the developmental stage (6,7). Particular loci have also been shown to be associated with many distantly located genomic loci (8,9), demonstrating the plasticity of the genome. Recently developed experimental methods (10) enable the systematic study of these phenomena. In particular, chromosome conformation capture (3C) followed by high-throughput sequencing greatly improves our ability to globally model genomic structure. Using this approach and its derivatives, the genomic structures of Saccharomyces cerevisiae, Schizosaccharomyces pombe, Drosophila melanogaster and human have been determined for particular conditions. The initial analyses of these datasets have already led to insights into the structure of the genome, including the fractal nature of the human genome (11), the centromere co-localization and Rabl conformation in brewer's yeast (12), the proximity of functionally related genes in fission yeast (13) and the physical demarcation of chromosomal domains in Drosophila (14). The ability to measure genomic architecture in three dimensions provides an opportunity to address long-standing questions involving how genomic structure encodes the phenotype, and addressing these will require new computational tools with an appropriate framework for analysis. Of particular interest is the notion of nuclear transcription factories, and their role in establishing the regulatory states that underlie physiological stages. Most gene targets of S. cerevisiae transcription factors (TFs) have been determined with high confidence, revealing an average of 70 gene targets per TF (15,16). Coupling this data with genome structure enables the study of the co-localization of TF targets. 
For example, are the targets of the same TF co-localized to the same spatial arrangement as the transcription factory model suggests? Under which conditions does such co-localization occur? Previous analyses have addressed this question leading to contradictory results. Dai and Dai compared the number of interactions in different gene sets and observed statistical enrichment under the hypergeometric null model for interactions among TF targets (17). However, Witten and Noble argued that edges in the 3C interaction graph are not statistically independent, as was assumed by Dai and Dai, and as such co-localization events would be over-counted (18). To correct for this, Witten and Noble applied a re-sampling methodology under which no signal for TF target co-localization was detected. Importantly, while the previous studies treated genomic proximity differently than spatial proximity, this was done by examining only inter-chromosomal distances. In addition, the spatial organization of the genome was not directly compared with the primary gene order in terms of their respective functional enrichment. This latter point is important because genomic analyses have revealed that neighboring genes tend to have similar expression profiles (19). Furthermore, genes with housekeeping functions in particular tend to be co-positioned along chromosomes (20). In particular, gene targets of the same TF are enriched for proximity in their genomic order (21). Thus, controlling for the genomic clustering is crucial for unbiased evidence regarding the degree to which the spatial clustering contributes to regulating functionally related genes. Here we introduce a statistical framework for modeling chromatin structure relying on a minimum set of assumptions and assaying the spatial proximity of functionally related genes while controlling for effects from linear co-localization along the genome. Our analysis is more subtle and flexible in refining gene sets for detecting the optimally clustered subset and defines enrichment environments more loosely based on this subset. Additionally, we apply a direct approach for controlling against results that may have emerged primarily from genomic proximity, thereby focusing our results on the phenomenon of spatial co-localization. Notably, our approach uses the hypergeometric test for assessing spatial co-localization at a particular locus, thereby disentangling dependencies that arose in previous analyses (17). We applied this approach on a parsimoniously interpolated 3C contact matrix. Our results indicate that for most TFs, the targets are significantly more co-localized in space than they are co-localized in genomic loci. We further found that TFs with spatially co-localized targets are also expressed higher under the same measurement condition, suggesting that regulatory activity is correlated with the presence of transcription factories. As more genomic structures are produced, our method promises to be of importance to the study of transcription factories. Natural neighbor interpolation of 3C data The raw frequency measurements provided by the yeast 3C experiment of Duan et al. (12), using the HindIII libraries filtered at P < 10^-3 (false discovery rate, FDR, corrected), were represented as a scattered sparse block matrix, where each block corresponds to a pair of chromosomes. Each read of a mapped paired-end insert was assigned to the mid-base of a restriction enzyme fragment in its corresponding unique location along the genome.
Each block of the raw data matrix was subjected to interpolation using a continuously differentiable C1 interpolant. The natural neighbor interpolation method (22) was implemented at 1-kb resolution using the TriScatteredInterp function in Matlab with the following modifications. First, the frequency of each position with itself was set to the highest observed frequency in the dataset. These measurements are not captured by the 3C method for technical reasons (12), but are required for the multi-dimensional scaling (MDS) to preserve positive definiteness. The results are robust to a wide range of values chosen for the diagonal frequencies (Supplementary Figure S3). For each diagonal block matrix, 'ghost points' (23) were added at a distance equivalent to 10% of the chromosome size and set to a frequency of zero. This enabled extrapolation near telomeres where there are little to no data. Finally, due to rounding errors in the interpolation the resulting matrix was non-symmetric, which was resolved by averaging it with its transpose. The Voronoi tessellation, on which natural neighbor interpolation relies, is shown in Figure 1A, where the colored domains are Voronoi cells. Each cell is generated by the intersection of all half-spaces imposed by the orthogonal separating planes between the point inside the cell and every other point separately. The EcoRI library used in the original experiment (12) was used for comparison and validation of the resulting interpolation. Modeling genome structure The interpolated contact frequency matrix was used as input for modeling the structure. The matrix was embedded to coordinates in an arbitrary 3D Euclidean space using non-linear metric MDS (also referred to as principal coordinate analysis) (24). The three principal dimensions from the linear embedding were used as a starting reference for the genomic coordinates. Next, isotonic least-squares optimization was used to minimize the deviation of the distances between coordinates from those of the input matrix while preserving the order of pairwise distances. The target function was the Kruskal stress-1 criterion (24), which measures relative deviations from the input matrix: $\text{Stress-1} = \sqrt{\sum_{i<j}(d_{ij}-x_{ij})^{2} \,/\, \sum_{i<j} d_{ij}^{2}}$, where d_ij is the distance between coordinates i, j in the original input data, and x_ij is the distance between coordinates i, j in the resulting model. For the whole-genome embedding, we re-sampled the genome using 5-kb resolution per coordinate. This lower resolution allowed the embedding process to converge at the whole-genome scale. To visualize this model at 1-kb resolution, we use piecewise cubic Hermite interpolation, a C1 interpolant for univariate data (25). Functional enrichment of 3D and 1D loci For each gene g, we compute the functional enrichment in 3D and 1D environments according to the following method. All other genes are ordered separately according to the following: (1) their interpolated contact frequency with respect to g (3D proximity to g), (2) their genomic distance (1D) from g. For any given TF, we compute the minimum hypergeometric statistic (mHG) (26,27) for the enrichment of its targets in both the 1D and 3D neighborhoods of g. Annotation data for TF targets were taken from a previous analysis [orfs_by_factor_p0.005_cons1 from (15)]. Briefly, for a given ranked list of genes (for an example see Figure 2A), mHG finds a prefix of the list that maximizes the statistical enrichment of genes pertaining to an annotation set.
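A simplified illustration of this prefix search is sketched below: it scans every prefix of a ranked 0/1 target vector, records the hypergeometric tail probability of the targets seen so far, and keeps the minimum. The exact mHG P-value machinery and the partition limit of refs (26,27) are not reproduced here, and the ranked list is randomly generated for illustration:

```python
import numpy as np
from scipy.stats import hypergeom

def mhg_statistic(is_target):
    """is_target: 0/1 flags for genes ranked by distance from the center gene.
    Returns (minimum hypergeometric tail probability over prefixes, best prefix length)."""
    N = len(is_target)          # total genes considered
    K = sum(is_target)          # total annotated (e.g. TF-target) genes in the list
    best_p, best_n, k = 1.0, 0, 0
    for n, flag in enumerate(is_target, start=1):
        k += flag               # targets observed within the first n genes
        p = hypergeom.sf(k - 1, N, K, n)   # P(X >= k) for X ~ Hypergeom(N, K, n)
        if p < best_p:
            best_p, best_n = p, n
    return best_p, best_n

rng = np.random.default_rng(0)
ranked_flags = rng.permutation([1] * 15 + [0] * 185)   # 15 targets among 200 ranked genes
print(mhg_statistic(list(ranked_flags)))
```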
The mHG P-value represents the likelihood of observing such an enrichment, at some prefix, under a null model [see (26,27)]. We obtain a bound on the mHG P-value, per annotation term, and per centered gene g by multiplying the calculated mHG statistic by the number of genes in the annotation term. To correct for multiple testing, these are later Bonferroni-corrected across the different annotation terms. Because the process is applied on both the genomic and spatial orderings of genes, we limit the threshold search to the size of g's chromosome, which results in comparable P-values for the most enriched spatial and genomic environments centered on g. Hence, this implementation of mHG is partition limited as previously described (26,27). Peaks of enrichment (Figure 3A) were detected using Matlab's findpeaks function. We limited the peak calling to a minimum distance of 10 loci from one another and a height of −log10(0.05). As a supplement to the present work, we are providing the software package INSP3CT (Interpolation and Statistical Proximity of 3C Tables) as an implementation for similar datasets to identify and compare spatial and genomic co-localization of genomic annotated markers. To compare the observed enrichment results, for a fixed given TF, to a background model, the genes were first sorted according to the log odds ratio of the 3D and 1D enrichments. Next, the same quantities were computed for each of 100 shuffled genomes (with gene identities randomly permuted), thus yielding Z-scores for each rank in the list of genes sorted by the actually observed log-odds. This comparison is exemplified in Figure 4A. An unconstrained 1-kb resolution model of the yeast genome using natural neighbor interpolation and embedding The systematic analysis of genome structure and of 3D features of genome organization requires a coherent and comprehensive representation of the contacts between genomic loci. However, actual data resulting from 3C measurement assays are scattered across irregular genomic intervals. Thus, our first goal was to use the previously determined dataset (12) to study the characteristics of the yeast genomic structure as it relates to function. To accomplish this, we first set out to regularize and provide a uniformly spaced contact matrix. For this purpose, we used a natural neighbor interpolation to arrive at a 1-kb resolution frequency matrix. Because the median size of the intervals in the primary data is 1800 bp (median restriction fragment length) (12), we chose to interpolate at a 1-kb interval. This choice stemmed from the notion that the interpolated resolution must not greatly exceed that inherent in the primary data. We thus effectively binned the linear yeast genome to 12 071 regularly spaced 1-kb coordinates.

Figure 2. Comparing functional enrichment between the genomic and spatial regions of the genome. (A) Two genomic distances. The schematic shows the gene neighborhood surrounding a particular gene (red). The neighboring genes may be ranked by their genomic proximity (left) or their spatial proximity (right). (B) Detecting areas of enrichment for TF-cohorts. In ranked gene lists, generated by either genomic or spatial proximity, the genes annotated as targets of a particular TF are indicated as black lines. The P-value of the enrichment of the targets for each threshold is indicated on the right. The threshold with the best P-value is indicated by the dashed line (see 'Materials and Methods' section). This analysis is shown for two genomic loci surrounding both YHL050C and YHL050W-A (top) and YCL012C (bottom) genes respectively, and querying for targets of GLN3. (C) Local structures of the two loci examined in B. Colors indicate distinct yeast chromosomes. The red circles indicate the center gene around which co-localization was tested. The content shown in each sphere is the environment that corresponds to the mHG threshold, dictated by the most enriched spatial environment for GLN3 targets. Bars on the right mark the loci along the linear genome, which participate in the most enriched environment by both the genomic and spatial rankings. Black dots, both in the bars and the visualized structure, indicate gene targets of GLN3. Scale bars were calculated according to an average size estimate for 1 kb of chromatin ≈0.33 mm. Chromosomes are colored as indicated in the legend.

Figure 1A shows a representation of the raw data from the 3C measurement assay (12) such that each measured data point (pair of observed restriction fragments, represented by a black dot in Figure 1A) is mapped to the respective genomic loci in chromosome I. We note the sparseness of the data at some loci, as reflected by the large and irregular domains for many of the data points (see 'Methods' section), indicating the limited resolution of the data for the interaction between the respective loci. Related to this sparse sampling are the sharp discontinuities present in the data (Figure 1A). Figure 1B shows our implementation of a natural neighbor interpolation (see 'Materials and Methods' section) on the same data for chromosome I, which addresses this sparseness and sharpness by setting the local contact behavior to what would be expected of a continuously differentiable (smooth) curve. From the perspective of its differential geometry, a chromosome is expected to behave continuously owing to its polymer structure and be differentiable owing to the mechanical angular limitations imposed by its chemistry. The resulting interpolated contact frequency map was compared with that corresponding to a library generated using a second restriction enzyme (EcoRI) in the original dataset (12). The high correlation (R = 0.98, P < 10−300, Supplementary Figure S9) provided validation of the quality of the interpolation. To model the structure of the genome using the interpolated frequency matrix, we invoked a non-linear MDS (24). This method is grounded in the well-established algebraic method of non-classical dimensionality reduction and yields a deterministic 3D view of the yeast genome using an unconstrained and unsupervised methodology (see 'Materials and Methods' section). The linear embedding reduced the dimensionality of the dataset to orders-of-magnitude more dimensions than is expected of a shape measured in 3D space, reflecting the biological and measurement noise inherent in the 3C method (Supplementary Figure S2). Applying this method on the intra-chromosomal interaction data of chromosome I resulted in a crescent-like curve, crumpled near the centromere (Figure 1C). Figure 1D shows the application of the method to the entire genome, resulting in a 'water-lily' conformation of the chromosomes, consistent with other models proposed in the literature (12), with centromeres somewhat interwoven in one end, and chromosome arms extending outward. The quality of this embedding was quantified using the Kruskal stress-1 criterion (28).
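The embedding and its stress can be reproduced in outline as follows; this is a minimal sketch, assuming a toy distance matrix, that uses only the classical (linear) embedding as the starting configuration and omits the isotonic refinement described in 'Materials and Methods':

```python
import numpy as np

def classical_mds(D, dim=3):
    """Classical MDS / principal coordinate analysis of a pairwise distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centred Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]                # keep the leading `dim` components
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

def stress1(D, X):
    """Kruskal stress-1 between input distances D and the distances of embedding X."""
    diff = X[:, None, :] - X[None, :, :]
    E = np.sqrt((diff ** 2).sum(-1))               # pairwise distances in the model
    iu = np.triu_indices_from(D, k=1)
    return np.sqrt(((D[iu] - E[iu]) ** 2).sum() / (D[iu] ** 2).sum())

rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 3))                     # hypothetical "true" 3D coordinates
D = np.sqrt(((pts[:, None] - pts[None]) ** 2).sum(-1))
D = D + rng.normal(scale=0.05, size=D.shape)       # measurement-like noise
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)
X = classical_mds(D)
print(round(stress1(D, X), 3))                     # low stress for near-Euclidean input
```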
The resulting stress value of our model is 0.28, which we propose as a measure of the noisiness of the 3C data. This model is stable under small perturbations, as we show in Supplementary Figure S3. In summary, our natural neighbor interpolation coupled with non-linear MDS provides a natural 3D model of the genome at 1-kb resolution. Statistical assessment of spatial functional enrichment controlled by genomic order Using the structural model of the genome, we asked whether genes regulated by the same TF cluster together spatially along the genome. For this we developed a method for assessing the functional enrichment in a 3D environment. We designed the method based on three principles: (i) Direct comparison of any spatial enrichment with that observed for the linear genomic ordering; (ii) Detection of enrichment of a subset rather than of correlation for the entire set (26,27); and (iii) Detecting enrichment for variable-size environments, as the exact size of enriched regions was not known. The first was done to correct for the known functional co-localization of genes along the chromosomes (21). In the comparison, enrichment was favored over correlation, as it is more sensitive at detecting signals at individual genomic locations, whereas genome-wide correlation methods will be dominated by noise and by effects outside of the scope of a possible transcription factory. As a statistical method, we invoked the robust, sensitive and threshold-free mHG method that has been successfully applied in other contexts (26,27,(29)(30)(31). For each gene in the yeast genome, our method proceeds by ranking all other genes by either their genomic (linear) or their spatial (3D) distance to the gene (Figure 2A). Given a specific TF of interest, the mHG test is then applied to both of these two rankings to test whether the targets of that TF are enriched in the genomic and spatial neighborhoods of that gene (see 'Methods' section). Of particular interest are the most enriched environments, both in the genomic and in the spatial perspective, centered around a gene, as they can be compared on an equal setting. For any given locus, we quantify whether the spatial enrichment of targets is more significant than the genomic enrichment; for example, by examining the log odds ratio of the 3D and 1D enrichment P-values. We demonstrate the method in Figure 2B with two specific loci in the yeast genome. In the first example ( Figure 2B, top), we compare the enrichment of the targets of the TF GLN3 in the linear genomic and spatial neighborhoods centered at YHL050C and YHL050W-A, whose transcription start sites map to the same 1-kb region. For the first 140 genes added according to either genomic or spatial distance, the enrichment is similar. However, as the spatial distance is allowed to increase, the enrichment sharply increases in contrast to the respective genomic enrichment ( Figure 2B, bottom). The analysis is terminated at 200 genes, as the end of the chromosome is reached (chromosome III) and so the comparison with the linear genomic ordering is no longer possible for large neighborhoods. A similar pattern is observed in the other example of GLN3 targets when considering neighborhoods centered around YCL012C on chromosome VIII. The spatial enrichment, measured by the hypergeometric P-value, of the targets of GLN3 increases ( Figure 2B, blue line) as the radius of the ball examined (centered at YCL012C) is expanded (i.e. more genes at greater distances are included). 
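Before following this example further, the two orderings that generate such enrichment curves can be sketched directly from an interpolated contact map and a table of gene coordinates; the arrays, the fixed-prefix enrichment function (a stand-in for the threshold-free mHG), and the chosen center gene below are all hypothetical:

```python
import numpy as np
from scipy.stats import hypergeom

def prefix_enrichment(flags, prefix=100):
    """Hypergeometric tail for the targets found among the `prefix` nearest genes
    (a fixed-threshold stand-in for the threshold-free mHG used in the text)."""
    N, K, k = len(flags), int(sum(flags)), int(sum(flags[:prefix]))
    return hypergeom.sf(k - 1, N, K, prefix)

def log_odds_3d_vs_1d(contact_row, gene_pos, center, is_target):
    order_3d = np.argsort(-contact_row)                          # highest contact first
    order_1d = np.argsort(np.abs(gene_pos - gene_pos[center]))   # nearest along the genome first
    p3d = prefix_enrichment(is_target[order_3d])
    p1d = prefix_enrichment(is_target[order_1d])
    return np.log10(p1d) - np.log10(p3d)    # positive => spatial enrichment is stronger

rng = np.random.default_rng(2)
n_genes = 400
contact_row = rng.gamma(2.0, size=n_genes)        # hypothetical contact frequencies to the center gene
gene_pos = np.sort(rng.integers(0, 4_000_000, n_genes))
is_target = rng.random(n_genes) < 0.05            # hypothetical TF-target flags
print(round(log_odds_3d_vs_1d(contact_row, gene_pos, 200, is_target), 2))
```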
In the close neighborhood of YCL012C, the enrichment is the same for both spatial and genomic proximity, suggesting that the genes most spatially proximate to YCL012C are identical to those proximate to it in the linear genomic order. Interestingly, as the number of genes included exceeds the first 100, the spatial enrichment becomes even more significant, surpassing the linear genomic enrichment. This enrichment then peaks for an environment containing $125 genes (hypergeometric P < 10 À12 ), after which the addition of more distant genes diminishes the statistical significance. In comparison, the most significant enrichment based on the genomic order alone is P < 10 À5 obtained at a neighborhood that includes the nearest 80 genes. Thus, we conclude that for the environment centered on YCL012C, GLN3 targets are significantly more highly enriched in space than along the linear genome. We note that when randomly shuffling the genomic positions of the genes we did not find any significant enrichment of colocalization, spatial or genomic, such as those shown in Figure 2B. Examining the structural environments of the two genomic loci described above ( Figure 2B) provided insight into the detected enrichments. Figure 2C shows the environments along with the corresponding genomic regions that are mapped to them. In both cases, regions from different chromosomes contribute to the significant spatial enrichment. The thin part of the chromosome on which the center gene (marked in red) is located indicates the interval with the most significant linear genomic enrichment around the center gene. Widespread spatial regions enriched for TF targets Our method allowed us to systematically test the spatial and genomic enrichments of TF targets surrounding each gene in the genome, as shown for GLN3 targets in YCL012C ( Figure 2B). The genomic landscape depicted in Figure 3A highlights the most significant spatial enrichment results surrounding each locus (marked in red) as well as the most significant linear genomic enrichment (marked in blue). The two specific regions shown in Figure 2C are noted with dashed boxes. Strikingly, in many loci we observe significant spatial enrichment that is higher than that obtained for genomic order enrichment. To evaluate this result, we used two controls. First, we tested whether a shuffled genomic ordering-maintaining the locations of the genes but randomizing their identities-would still lead to enrichment results, and found that, as expected, it does not ( Figure 3A, inset). We also tested cyclic permutations of gene locations along each chromosome by cyclically shifting gene locations by selecting the shift size to be 10-90% of the chromosome size. Such shifted data maintain all 1D gene density properties of the genome. We observed that the linear genomic enrichment is conserved (as clearly expected), while the spatial enrichment is eliminated (Supplementary Figure S1). Finally, we compared the hypergeometric P-values with those resulting from a shuffled null model and found significant differences (Supplementary Figure S11) indicating that our use of the hypergeometric test does not produce spurious results. To further quantify the observed higher spatial enrichment, compared with that obtained in linear genomic order, we first examined for each TF, the region with maximum enrichment at the 3D level and compared it with the 1D region that is most enriched. 
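A schematic version of this per-TF comparison is given below; the enrichment landscapes are simulated rather than taken from the data, and the one-order-of-magnitude margin is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n_loci, n_tfs = 6000, 116        # roughly the scale discussed in the text; values are simulated

# Hypothetical per-locus enrichment landscapes, stored as -log10(p), one row per TF
landscape_3d = rng.exponential(1.0, size=(n_tfs, n_loci))
landscape_1d = rng.exponential(1.0, size=(n_tfs, n_loci))

best_3d = landscape_3d.max(axis=1)   # most enriched spatial region per TF
best_1d = landscape_1d.max(axis=1)   # most enriched genomic region per TF
margin = 1.0                         # assumed: require at least one order of magnitude difference

more_3d = int(np.sum(best_3d > best_1d + margin))
more_1d = int(np.sum(best_1d > best_3d + margin))
print(more_3d, more_1d)              # counts of TFs with a clearly more significant 3D or 1D region
```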
For GLN3, the most significant 3D region has an associated P-value of 10 À9 , while the most significant 1D region has a P-value of 10 À8 (Figure 3A). Examining all 116 TFs, we found that 32 TFs have a more significant 3D region, while six have a more significant 1D region ( Figure 3B). This indicates that when examining neighborhoods of genes, the 3D region captures more significant enrichment than an examination of solely the 1D order. Next, we deployed a peak-detection algorithm on the genomic landscape to identify distinct regions of locally maximal enrichment. We assigned each peak to either the 3D or 1D enrichment depending on which is more significant, delineated to both in the case of a tie. Using GLN3 again as an example, we detected 70 and 5 for the 3D-and 1D-enriched peaks, respectively ( Figure 3A, black arrows). A paired t-test on the 3D and 1D enrichment peaks indicated the significance of spatial enrichment (P < 10 À6 ). Thus, for this TF, more enrichment is detected at the spatial level than in the genomic level, providing evidence for the tendency of the genome to co-localize its targets in transcription factories. Expanding these analyses to the rest of the TFs, we found an overall preponderance of 3D clusters relative to 1D clusters (P < 10 À30 Kolmogorov-Smirnov test between the distributions of the number of peaks in 3D versus those in 1D). For some TFs, this effect is particularly strong ( Figure 3C), while for three TFs-ROX1, YRR1 and ARG81-the signal is reversed, a more significant 1D clustering than 3D. SIP4 shows the most extreme spatial co-localization relative to genomic order (84-5, respectively, Supplementary Figure S4). Of 117 TFs, 64 show a significant (P < 0.05, FDR-corrected, one-tailed two-sample t-test) enrichment of spatial (and 10 of 117, a significant enrichment of genomic) co-localization of their targets. We found that this result is also observed in a second replicate of the dataset (Supplementary Figure S7) as well on the dataset following correction for potential biases using a recently proposed method (32) (Supplementary Figure S8). The peak analysis may be biased because we filter out genomically consecutive signals (1D) but not potentially overlapping 3D signals. To address this, we compared our observed enrichments to a suite of 100 genomes whose gene order has been shuffled using a ranking-based approach (see 'Materials and Methods' section). Comparing with the randomly annotated genomes has the additional feature of direct P-value estimations without recourse to multiple testing corrections and parametric distribution assumptions. For GLN3, filtering for genes with two orders of magnitude more significant 3D to 1D and vice versa (non-grey region), the Z-scores indicate strong significance relative to the shuffled genomes ( Figure 4A). Repeating this analysis for all of the available TFs, we found that for most TFs the Z-scores are positive, indicating that 3D enrichment is significantly greater than 1D enrichment when comparing with the random background model. Interestingly, some TFs show a wide bimodal distribution, indicating that the TF has both significant 1D and 3D regions of significant enrichment. We conclude that for most TFs we detect significant spatial co-localization of the targets. 
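The ranking-based comparison against shuffled genomes can be outlined as follows; the observed statistics and the permutation draws below are simulated stand-ins for the per-locus log-odds and the 100 label-shuffled genomes described above:

```python
import numpy as np

rng = np.random.default_rng(4)
n_loci, n_perm = 500, 100

# Hypothetical observed per-locus log-odds (3D over 1D enrichment), sorted as in Figure 4A
observed = np.sort(rng.normal(loc=0.4, scale=1.0, size=n_loci))

# For each shuffled genome the same statistic would be recomputed and sorted;
# here the shuffled values are simulated rather than derived from real gene labels
null_sorted = np.sort(rng.normal(loc=0.0, scale=1.0, size=(n_perm, n_loci)), axis=1)

z = (observed - null_sorted.mean(axis=0)) / null_sorted.std(axis=0)
print(round(z[-1], 2), round(z[n_loci // 2], 2))   # Z-scores at the top rank and the median rank
```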
TFs whose gene targets are spatially enriched are more highly expressed If the targets of a particular TF show significant co-localization in the genome, one would expect that TF to be functional under the conditions sampled for the genomic structure. A proxy for the function of a TF is its expression level, and thus we asked whether those TFs showing the strongest signals of co-localized targets are also more highly expressed (33). We sorted TFs according to the ratio of spatial to genomic co-localization of their targets, an indication of their 3D co-localization. The expression of the top 50 TFs was then compared with that of the bottom 50. We detected a significant difference in expression levels (P < 10 À2 , Kolmogorov-Smirnov test, Figure 5A). Overall, the correlation between the degree of colocalization (target co-localization P-value) and the average gene expression level was r = 0.25 (P < 10 À2 Supplementary Figure S6). We further validate that this result is not confounded by the number of targets of the particular TFs and the choice of threshold (Supplementary Figure S10). While not highly significant, this correlation between expression levels and large-scale target co-localization supports the possible role of genomic configuration in accommodating different transcription factories. Finally, we queried for the spatial location of the apparent transcriptional factories. For each gene, we computed the number of instances in which a spatial region including that gene is enriched for TF gene targets more than for the genomic order, across the set of 107 TFs. Figure 5B shows these locations superimposed on the genomic structure. We found that regions that are enriched for such 'transcriptional factories' indeed form distinct clusters. In particular, we observe a high degree of association of genes with transcription factories in the periphery, mainly located on chromosome II, and also on chromosome XV and chromosome XVI ( Figure 5B). Comparing the expression of the set of genes highly associated with factories (>25 TF sets) relative to the genes only weakly associated with factories (<25 TF sets), we find that the former genes are more highly expressed (P < 0.05, Kolmogorov-Smirnov, Supplementary Figure S5). This provides further evidence that transcriptional factories generally correspond to transcriptionally active regions. DISCUSSION Any advancement of biological methods to identify the precise structure of the genetic material throughout the life of an organism must be matched in rigor by the computational and statistical platforms that are used to interpret their measurement results. 3C has emerged as the most generalized method for establishing the structure of the genome in a systematic fashion (10). However, the statistical methods to make the most of the resulting data are only starting to be developed (11,12,32,34). Here, we report a novel approach to several aspects of the analysis of spatial conformation data. We model the structure of the S. cerevisiae genome without the previously imposed assumptions (see below), thus capturing an unbiased representation of the data in 3D. Our method is based on standard approaches in computational geometry, statistics and linear algebra (24), invoked here for the first time to the problem of genomic structure. We use the resulting contact matrix to ask whether functionally related genes are co-localized in the 3D structure. Using a rigorous and controlled statistical approach, we provide evidence for this notion. 
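The expression comparison reported above can be set up along the following lines; the co-localization scores and expression values are simulated, and only the split into top and bottom 50 TFs follows the description in the text:

```python
import numpy as np
from scipy.stats import ks_2samp, pearsonr

rng = np.random.default_rng(5)
n_tfs = 117
coloc_ratio = rng.normal(size=n_tfs)                       # hypothetical spatial/genomic co-localization scores
expression = 0.3 * coloc_ratio + rng.normal(size=n_tfs)    # hypothetical TF expression levels

order = np.argsort(-coloc_ratio)
top50, bottom50 = expression[order[:50]], expression[order[-50:]]

print(ks_2samp(top50, bottom50))        # distribution-level difference in expression
print(pearsonr(coloc_ratio, expression))  # overall correlation, analogous to the reported r = 0.25
```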
In this section, we consider the advantages and limitations of all aspects of our methodology including the choice of interpolation and embedding procedures, internal reference to the 1D gene order as a control. Finally we discuss the notion of widespread transcriptional control by spatially defined factories. Existing literature that addresses directly the problem of contact map completion in the context of 3C data relies either on a convolution with a fixed environment size (12,13,35) or a statistical background model to estimate either enrichment or depletion of observed contacts (14,34). Convolution-based approaches lead to locally smoothed regions, while disproportionately distorting structures in data-sparse or outlier-rich regions. Both of these previously used approaches are dependent on a subjective choice of parameters such as the environment size and latent variables for statistical model. Because our method is fully reliant on a complete contact map, we established a robust approach to generate a full contact map by interpolating missing data. We propose that the most appropriate interpolation method for completing 3C data is a modification of natural neighbor interpolation (C 1 family of interpolants). Natural neighbor interpolation is immune to the disadvantages inherent in nearest neighbor interpolation, where different genomic loci may optimally occupy the same position in space and tie-breaking scenarios are typically addressed in an arbitrary fashion. Further, natural neighbor is not as simplistic as bilinear interpolation, where only the two flanking data points in each dimension contribute to the interpolated value. Additionally, natural neighbor interpolation has been previously applied successfully for problems of smooth surface reconstruction (36), which relate to our problem in nature. Based on a tessellated view of the data (see 'Methods' section), natural neighbor interpolation computes the weighted average of all the neighboring data points that can contribute to the information of the contact between the locations under interpolation. We note that our interpolation approach-and likewise all interpolations-does not necessarily yield inter-point contacts that mathematically qualify as a metric, and as such, the resulting contact map does not necessarily describe a structure residing in a Euclidean space precisely. To visualize the resulting interpolated contact map, we attempted to generate a structural model that best captures the data. Our analysis was performed at a 1-kb binning of the resulting interpolation; however, as the resolution improves in future studies, we expect our method to have greater statistical potential, as less genes will be co-binned. Previous studies attempting to generate a structural model for chromatin used supervised rule sets, a random starting conformation, and optimization algorithms to fix each coordinate pair in its expected distance (if available) from one another (12,13,35). We propose that because such methods rely on an underdetermined process, they cannot be rigorously applied to explore the most likely conformation. Our approach uses metric dimensionality reduction as a starting point, which sets as a starting conformation the principle 3D outline of the shape. This outline is expected to capture the essence of the underlying geometry of the data. The optimization process preserves the order among contacts, maintaining the coherence of contacts in the resulting structure. 
MDS is a classical algebraic and statistical approach that is well established in the literature (24). MDS relies on a practical assumption and attempts to minimize the square error of inter-point distances while maintaining their order when comparing the input data with the resulting model. Our approach thus minimally intervenes with the underlying measurements applying a parsimonious genome modeling preferences. We provide a solid statistical framework to determine enrichment in the spatial co-localization of genomic elements and apply it to detect a significant co-localization of TF targets. We also show a correlation between co-localization and higher expression of the targeting TF. Our results are thus consistent with previous studies, attempting to link gene organization with control and regulation of transcription (6,7,9,(37)(38)(39)(40)(41), and further extends previous systematic approaches to provide the imperative comparison to the genomic proximity of co-regulated genes. Collectively, these results indicate that genome remains poised for the expression of co-regulated genes by adjusting their conformation to enrich for their co-localization. This conformation may likely have benefits in terms of the operations of an activated TF, which if shuttled to a region with enriched targets, it will have a reduced number of possible gene targets to interact with by diffusion. This scenario would suggest that the mechanism for co-localization (whether active, or passively selected for), along with higher expression for the active TF, work in concert to regulate gene circuits, and the interplay between them is crucial to understanding expression regulation. Future directions will no doubt include a comprehensive analysis of co-localization of genomic elements to detect functional partitioning and to better characterize transcription factories. Additionally, it will be interesting to examine the extent to which these findings will be conserved across organisms and tissues. Single-cell-based 3C methods-currently unavailable but sorely neededwill be able to produce a more accurate picture of genome structure, rather than a population-mean approach. Using sophisticated statistics for the detection of co-enrichment of ordinal measurements, similar methodology will surely be applied directly to non-binary or thresholded experimental results, such as the ones from chromatin immunoprecipitation (ChIP) experiments to provide more unbiased views on annotated features. AVAILABILITY A Matlab software package called INSP3CT is provided to analyse contact frequency datasets and genomic annotations by performing spatial and genomic enrichment on selected loci. INSP3CT takes as input files describing restriction sites, inter-and intra-chromosomal contact frequencies, the genomic sequence, loci of interest along the genome (for example genes) in bin coordinates and vectors of annotation with the number of co-binned loci of interest per bin. INSP3CT outputs a figure for each vector of annotation comparing 3D with 1D enrichment across loci. INSP3CT also provides access to the interpolated contact frequency matrix, the corrected enrichment scores per loci and the size of enrichment environment. INSP3CT is available at http://shayben. github.com/INSP3CT.
The RNA binding protein HuR determines the differential translation of autism-associated FoxP subfamily members in the developing neocortex Forkhead-box domain (Fox) containing family members are known to play a role in neocorticogenesis and have also been associated with disorders on the autism spectrum. Here we show that a single RNA-binding protein, Hu antigen R (HuR), dictates translation specificity of bound mRNAs and is sufficient to define distinct Foxp-characterized subpopulations of neocortical projection neurons. Furthermore, distinct phosphorylation states of HuR differentially regulate translation of Foxp mRNAs in vitro. This demonstrates the importance of RNA binding proteins within the framework of the developing brain and further confirms the role of mRNA translation in autism pathogenesis. Neurodevelopment relies on specific and timely expression of the genes dictating its course. Transcriptional control is one key mechanism by which appropriate development occurs and manages the expression of fate-determining genes 1,2 . Specifically, the transcription factors Foxp1 and Foxp2, members of the Forkhead-box containing (Fox) family, are associated with neocortical development, sensorimotor behavior, and abnormalities such as speech disorders and autism [3][4][5][6][7][8] . In addition to transcriptional control, post-transcriptional regulation has emerged as another key mechanism in the spatiotemporal dynamics of brain development. RNA binding proteins (RBPs) control every step of post-transcriptional processing and as such are emerging as fate-determining molecules in neocortical development 9,10 . In particular, RBPs such as Fragile X mental retardation protein (FMRP) may regulate a subset of genes associated with the autism spectrum disorders 11 . Another important family of RBPs is the Hu antigens. Hu antigens (HuA/HuR, HuB, HuC, HuD, and Hel-N1) are named for their role in Hu syndrome, an autoimmune paraneoplastic neurological syndrome, and all have roles in brain development 12 Mammalian HuR, homologous to ELAV1 (Embryonic Lethal Abnormal Vision) in Drosophila, is a globally expressed Hu antigen and has defined roles in brain, breast, lung, and colon [13][14][15][16] . Our group has previously investigated both HuR and HuD RBPs, and revealed an association with neurodevelopmental defects in each case 17,18 . Within the brain, our group previously showed that HuR is an essential regulator of neocortical development and mRNA translation. We found that HuR regulates temporal mRNA association with active translation sites (polysome complexes). This suggests its crucial role in the timing of when a transcript will become a protein in the developing neocortex. However, direct mRNA targets of HuR in neocortical development are unknown and a direct link between HuR and autism has not been observed. Here, we show that HuR associates with specific and common subsets of mRNAs during early and late phases of prenatal neocortical neurogenesis. The consequence of such differential association is the regulated translational control of Foxp1 and Foxp2 at distinct time points in neocortical development. We additionally observed that phosphorylation of distinct HuR sites dictates the dynamic regulation of Foxp1 and Foxp2 in vitro. Results The RBP HuR binds distinct groups of mRNAs throughout development. 
To elucidate neocortical HuR-bound mRNAs during early neurogenesis (E11 & E13) and late neurogenesis (E17 & E18), we performed RBP-immunoprecipitation coupled to a microarray assay (RIP-Chip) with an HuR RIP-Chip certified antibody and corresponding IgG ( Fig. 1a; GEO accession number GSE77712). Principal component analysis (PCA) showed substantial differences in grouping between IgG and HuR pull-downs at distinct developmental stages and clustering among HuR precipitates from neocortices at early (E11 & E13) compared to later (E17 & E18) neurogenesis, substantiating the quality of the RIP experiment (Fig. 1a right). Using stringent criteria (fold-change > 2 and False Discovery Rate < 5%), we found 6552 transcripts bound in early neocortical neurogenesis, significantly fewer transcripts bound during late neurogenesis (3689), and an overlap of 2710 transcripts bound across these developmental periods (Figs 1b and S1a). In order to confirm coverage of our screen, we compared our results with a study whose aim it was to determine all binding targets of HuR in HeLa cells. Out of the 1091, 1862, and 564 genes in the early, common, and late lists, respectively, there were 119 (p = 7.92 × 10 −15 ), 273 (p = 1.59 × 10 −59 ), and 73 (p = 5.29 × 10 −13 ) overlaps between our screens. It should be noted that CNS-specific mRNAs which are expressed in murine neocortical tissue may not have been expressed in the HeLa line, accounting for some of the variance between the lists. We noticed genes associated with autism (members of the FoxP family and DYRK1A) to be HuR-bound and wanted to determine if targets of HuR were enriched for genes related to autism. We did not observe significant overlap between neocortical targets of HuR and genes found in the SFARI autism database, but observed that 12 genes overlap with early, 13 with common, and 2 with late HuR RIP targets; www.sfari.org, Simons Foundation, Table S1). Four genes from this list (NF1, DYRK1A, FOXP1, and GSK3B) were found to be in common with the previous HeLa screen 19 . Additionally, we found a significant overlap of early-bound HuR target genes and those bound by FMRP 9 (51 genes, Fig. S1b, Table S2). These data suggest that HuR may contribute to regulation of FMRP targets. HuR can regulate Foxp1 and Foxp2 via their 3' UTRs. HuR-bound mRNAs were clustered using functional gene annotation (WebGestalt) 32 , and we found a group of TFs known to be involved in neurogenesis and associated with autism (e.g., Mef2c 33 , Foxp1 4 , Foxp2 4 ; Fig. 1c, S1b). Subsequent qPCR analyses of additional HuR-RIPs (n = 3 per developmental stage) at E12, E15, and E18 confirmed HuR binding of different members of the Fox family (Fig. 1d, S1c). Foxp1 mRNA was enriched early-on. Meanwhile, Foxp2 mRNA was bound throughout neocortical neurogenesis, similarly to the known HuR target Ptma 20 , and in contrast to the negative control-identified as not bound by HuR in our screen, Foxo6 (Fig. 1d). HuR did not preferentially bind any splice variant of Foxp1 or Foxp2 (Fig. S1d). Though the expression and roles of Foxp1 and Foxp2 protein in the developing brain are well documented 8,23-25 ; their upstream regulation has yet to be fully understood. HuR is known to bind the untranslated regions (UTRs) of mRNAs, which have major roles in post-transcriptional regulation 22 . We found that both Foxp1 and Foxp2 had putative HuR binding sites in their 3′ UTRs (Fig. 1e) using RNPmap software (rbpmap.technion.ac.il). 
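The significance values quoted above for the overlaps between gene lists correspond to a one-sided hypergeometric (equivalently, Fisher's exact) test of the following form; the gene identifiers and the background universe size in this sketch are invented, so it does not reproduce the reported P-values:

```python
from scipy.stats import hypergeom

def overlap_pvalue(list_a, list_b, universe):
    """Probability of observing at least the given overlap between two gene sets
    drawn from a common background universe of genes."""
    a = set(list_a) & set(universe)
    b = set(list_b) & set(universe)
    k = len(a & b)
    return hypergeom.sf(k - 1, len(universe), len(a), len(b))

# Toy illustration with made-up identifiers; a real test would use the actual gene symbols
universe = {f"gene{i}" for i in range(6000)}          # assumed background size
hur_early = {f"gene{i}" for i in range(0, 1091)}      # e.g. early HuR RIP targets
hela_list = {f"gene{i}" for i in range(600, 1792)}    # e.g. targets from the HeLa screen
print(f"{overlap_pvalue(hur_early, hela_list, universe):.2e}")
```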
We tested for the functional regulation of FoxP1 or Foxp2 3′ UTRs in the presence of HuR using a luciferase assay in HEK293T cells 24 . When normalized over Renilla luciferase, we found that over-expression (OE) of HuR had no appreciable\effect on the expression of empty firefly luciferase vector nor of Foxo6 3′ UTR-cloned firefly luciferase ( Fig. 1f; n = 3; p > 0.05). However, with either Foxp1 or Foxp2 3′ UTRs cloned downstream of the luciferase open reading frame (ORF), we found a significant reduction in firefly luciferase expression in the presence of the HuR OE, but not the control vector ( Fig. 1f; p < 0.05; n = 3). Collectively, these data suggest that HuR may regulate autism-associated Foxp1 and Foxp2 expression during neocorticogenesis. HuR conditional mutants reveal time-sensitive role of post-transcriptional regulation of Foxp1 and Foxp2. To examine the role of HuR in the post-transcriptional processing of Foxp1 and Foxp2 mRNA in the E13 neocortex in vivo, we generated an HuR conditional knock-out (HuR cKO) line by crossing HuR-floxed mice to Emx1-Cre (HuR/Emx1-Cre cKO; HuR depleted in radial glia progenitors and postmitotic neurons.). We performed qRT-PCR on total mRNA isolated from E13 WT and HuR cKO littermates and found a significant decrease in Foxp1 mRNA levels ( Fig. 2b; n = 2; p < 0.05). Protein levels in HuR cKOs were assessed using immunohistochemistry (IHC) and western blotting (WB) (Fig. 2a,c,d). Surprisingly, despite lower Foxp1 mRNA levels, we found increased Foxp1-protein expression in RGs at E13, indicating HuR suppression of Foxp1 translation during early neocortical neurogenesis. When we assessed Foxp2 mRNA levels, we found them to be unchanged at E13 (Fig. 2b). Furthermore, IHC revealed that Foxp2 protein was not increased in E13 HuR/Emx1-Cre cKO neocortices (Fig. 2b), corroborated by WB (data not shown). This suggests distinct temporal regulation of Foxp1 and Foxp2 transcripts by HuR. We next sought to determine if HuR is required for Foxp2 protein expression later in development. IHC of P0 WT, HuR/Emx1-Cre cKO, and HuR/Nex-Cre cKO (HuR depleted in postmitotic neurons.) neocortices revealed decreased Foxp2 protein expression in both HuR cKOs (Fig. 2e, red) and presence of Foxp1 protein at P0 (Fig. 2e, green); WB confirmed decreased Foxp2 protein in HuR cKO (Fig. 2f). qRT-PCR confirmed no changes in total Foxp2 mRNA levels in the HuR cKO (Emx1-Cre, Fig. 2g) nor in subcellular distribution between nucleus and cytoplasm (Emx1-Cre, Fig. S2a). These findings indicate a role for HuR in regulation of Foxp2 mRNA translation. We further determined that HuR was acting in a cell-autonomous fashion ( Fig. S2c-h). Phosphorylation sites on HuR act in post-transcriptional regulation of Foxp1 and Foxp2. Given HuR's ability to inhibit translation of Foxp1 and promote Foxp2, we wanted to explore HuR's capacity to both bind similar targets and temporally regulate them. HuR has multiple residues which are amenable to phosphorylative regulation (Fig. 2h, www.phosphosite.org) and these have been shown to influence HuR's function as an RBP 28,29 . To test this hypothesis, we transfected HuR phosphomutants and phosphomimics along with 3′ UTR Foxp1-or Foxp2-luciferase constructs in HEK293T cells and performed a translation assay 24 . For the Foxp1-luciferase transfected cells, the phosphomimic S100D and the phosphomutant T118A had significantly higher translation of the luciferase gene than the wild type HuR overexpression (Fig. 2i). 
In Foxp2-luciferase transfected cells, the phosphomimic S100D, as in Foxp1, had higher translation than wild type HuR (Fig. 2j). However, in contrast to Foxp1, the phosphomutants S88A and S242A had significantly lower levels of Foxp2 translation than the wild type. This suggests that the S100 site is important to the regulation of translation of both Foxp1 and Foxp2 mRNA transcripts, while the S88, T118, and S242 sites contribute to the differential regulation of these two related transcripts. Discussion While there is much known about the downstream targets and effects of Foxp2 28 few studies have identified the mechanisms behind Foxp2 or Foxp1 mRNA regulation. Recently, we identified WNT3 as a positive post-transcriptional regulator of Foxp2, but not Foxp1 24 . Another study examining post-transcriptional regulation of Foxp2 found it to be a target of the miRNAs miR-9 and miR-132 22 hindbrain, but not the telencephalon, a precursory structure to the cerebral cortex 29 . Though these results have yet to be confirmed in higher taxonomic organisms, this is particularly interesting evidence of regulation from an evolutionary perspective. Given this dearth, our study provides needed insight into the regulation of Foxp2. What is particularly compelling about the regulation of Foxp1 and Foxp2 protein synthesis is the dependence on time. Early neocortical neurogenesis has been known to be susceptible to teratogenic influence that then results in autism-like phenotypes, further underscoring the importance of timing and timed control in brain development 30,31 . Our data show the existence of groups of transcripts bound by an RBP, HuR, during early neurogenesis, late neurogenesis, and throughout. Collectively, these data support the idea that some RBP-bound mRNAs form RBP-defined operons 10,18 which are functionally and structurally related mRNAs poised for translation in developing systems, allowing for prompt and rapid response to developmental needs and specification signals. While Foxp2 mRNA levels did not change in HuR cKO, we unexpectedly found decreased Foxp1 mRNA paralleled by increased Foxp1 protein at E13. This unusual finding may be the consequence of a misbalanced steady state of Foxp1 mRNA. For example, not enough Foxp1 mRNA is being transcribed and to compensate for the absence of HuR binding, this mRNA is now being prematurely translated and decayed. As one of many options, this intriguing finding should be further characterized in future studies. While many studies have shown that HuR can be regulated via its phosphorylation sites, we provide the first evidence that differential HuR phosphorylation states may regulate mRNA translation of the bound autism-associated genes Foxp1 and Foxp2. Sole phospho-sites have been involved in either inhibiting or promoting HuR activity 26,[34][35][36] . In addition to the work presented here, one other study showed that tandem phosphorylation of HuR at S221 and S318 was necessary for mRNA binding (COX-2, cyclin-A, and cyclin-D) and nuclear translocation 37 . Our data agree with the hypothesis that the HuR-phosphorylation pattern is responsible for differential translational regulation of bound mRNAs. However, we cannot exclude the possibility that the distinct combinations of phosphorylated sites are further involved in the dynamic temporal regulation of Foxp1 and Foxp2 expression levels, with certain sites activated at certain time points in development. 
While there are some known kinases targeting HuR, including Cdk1(for S202) and PKC α /δ (for S221), no kinase has yet been identified for S242 26 . The precise spatiotemporal mechanism of HuR-phosphosite-mediated regulation as well as which sites are active when in vivo are key questions that remain to be answered. Within the neocortex, Foxp1 and Foxp2 define functionally distinct subpopulations of neocortical glutamatergic projection neurons 23 and mutations in each are associated with autism 3-8 . This novel mechanism of post-transcriptional regulation demonstrates how intrinsic HuR can differentially regulate translation of bound mRNAs during the intricate steps of neocorticogenesis. In conclusion, this study demonstrates the importance of RBPs within the framework of the developing brain and further confirms that altered mRNA translation contributes to autism pathogenesis. Animals. All procedures and animal husbandry were approved by and carried out in accordance with the Rutgers University Institutional Animal Care and Use Committee (IACUC) guidelines (protocol number: I12-065). Isolation of wild type (WT) embryonic cortices and in utero electroporation experiments were performed in pregnant CD-1 dams (Charles River). Generation of HuR conditional deletion and WT littermate control animals was accomplished using Jackson laboratory Emx1-Cre (Strain Name: B6.129S2-Emx1tm1(cre)Krj/J, Stock Number#: 005628) and Nex-Cre (kind gift from M. Schwab). To generate embryonic HuR deletion mice, we produced timed pregnancies by placing a male with females overnight and checking for plugs the next day. The presence of plugs was considered embryonic day 0.5 (E0.5). Neocortical dissections were performed under RNAse free conditions on ice in PBS or Hank's Balanced Salt Solution (HBSS) supplemented with D-glucose and HEPES 38 . Neocortices were either used immediately or stored at − 80 °C until use. At least eight neocortices per the genotype were analyzed in each of the experimental approaches. Immunocytochemistry. Brains were dissected in PBS and immediately submerged in 4% paraformaldehyde (PFA) (Sigma-Aldrich #P6148). Three ten-minute washes were carried out at room temperature while agitating and brains were subsequently left agitating in fresh PFA at 4 °C overnight. Fixed brains were washed once with PBS and placed in 30% sucrose (J.T. Baker #4072-05) and stored at 4 °C. Fixed brains were embedded in 3% agarose (Lab Express #2001) and sectioned (Leica Vibratome VT100S) at 70 μ m. Sections were stored in 30% sucrose. For immunostaining, sections were first subjected to three five-minute PBS washes while gently agitating. Next, sections were blocked in PBS with 5% normal donkey serum [Jackson Immuno, 1% bovine serum albumin (BioMatik #A2134), 0.1% glycine (VWR #BDH4156)] containing 0.1% L-lysine (Sigma-Aldrich 100999242) and 0.4% Triton-X (Fisher #BP151) pH 7.5] for 45 minutes at room temperature. After blocking, sections were placed in their respective primary antibodies (Table S3) in blocking solution with 0.4% Triton-X 100 overnight or up to two nights with gentle shaking at 4 °C. Sections were washed in PBS three times and placed in Triton-free blocking solution with secondary antibodies (Table S4) for two hours while gently shaking at room temperature. Sections were then washed three times in PBS for five minutes with gentle shaking and placed in PBS with DAPI (Roche diagnostics #10-236-276-001) for five minutes shaking at room temperature. 
They were washed again in PBS for five minutes two times. Sections were mounted in VectaShield (Vector Laboratories, Inc. H-1000), cover-slipped, and sealed with nail polish. Imaging was carried out using an Olympus multi-photon/confocal microscope FV1000MPE. Western Blot. Protein lysates were prepared using T-PER reagent (Thermo Scientific #78510), Lysates were cleared by centrifugation for 10 minutes at approximately 13,000 × g. For loading volume adjustment, lysates were analyzed for total RNA content using a spectrophotometer. We used the Invitrogen SureLock Western blot system with Invitrogen Bis-Tris 4-12% gradient gels and followed the manufacturer's protocol for running and transfer onto nitrocellulose membranes (BioRad #162-0214). After transfer, membranes were washed in PBS solution with 0.04% Triton-X 100 (PBS-T), shaking at room temperature for five minutes. Membranes were blocked (blocking solution: 5% milk, 10% FBS, PBS-T) for one hour at room temperature and placed in probing solution (PBS-T and 10% FBS) with primary antibody (see Table S3) overnight or up to two nights, shaking at 4 °C. The blots were washed in PBS solution for five minutes three times, and placed in probing solution with secondary antibody (Table S4) Quantitative Real Time PCR (qRT-PCR) and Subcellular Distribution. RNA was isolated using the PARIS kit for cortical and nuclear/cytoplasmic fractionation or Trizol-LS (Life Technologies #10296028), following the manufacturer's protocol. RNA samples were DNAsed (Ambion #AM1907) to remove contaminating DNA. RT-PCR was performed using Applied Biosystems StepOne Real-Time PCR system with Step-one v2.1 software (#4376373) using RNA-Ct 1-Step Taqman kit (#4392653) and Taqman probes (Table S5) according to the manufacturer's protocol with reactions adjusted to 10-μ l total volume. Each qRT-PCR was performed with more than three technical replicates within more than four biological samples per developmental stage or experimental condition. In utero Electroporation. E13 neocortices were in utero electroporated as previously described 17,38 . Briefly, E13 pregnant CD-1 dams (Charles river) were anesthetized, embryos removed, and 1 ug total DNA was injected into the ventricular space using a pulled glass pipette (prepared with Sutter Instrument Co. P-87 pipette puller). Five electric pulses were delivered (BTX Harvard apparatus ECM 830; voltage: 37 V, pulse length: 50 ms, interval between pulses: 950 ms) to embryos by placing the plus electrode above the head to drive DNA into the neocortex. The surgeries lasted approximately twenty-five minutes. Pups were placed back into the mother, muscles and skin were sutured (Ethicon, Silk, 5-0), and gestation was allowed to continue until the appropriate age for subsequent analyses. At least two or three transfected neocortices were analyzed per experimental condition. HuR RIP. HuR-associated RNAs from neocortices at distinct ages (E11, E12, E13, E15, E17, E18, P0) were isolated using the RIP kit (MBL international #RN1001) and rabbit anti-HuR (MBL international #RN004P) RIP certified antibody according to manufacturer's instructions. HuR function was shown to react to UV 39,40 used in HITS CLIP, which is avoided by the RIP-ChIP kit. Previously, targets of HuR were identified in a human HeLa cell line using the PAR-CLIP method. To compare our findings, we first converted their list of target human gene symbols (Supplementary Table 2) to mouse gene symbols. 
Starting with 1216 human genes, we used the biomaRt tool within R to query the Ensembl database and found the mouse homolog associated gene name for each. With some imprecision in matching for species, this produced 1214 results, but only 1192 unique mouse gene symbols. From these, we then used the GeneOverlap package in R to apply Fisher's exact test for significance of the overlap between lists of gene symbols from our RIP-ChIP screen and the PAR-CLIP one. Scientific RepoRts | 6:28998 | DOI: 10.1038/srep28998 Microarray analysis. Total RNA from developing neocortices and RIPs were sent for exon microarray hybridization at the UMDNJ-RWJMS facility using the GeneChip Mouse Exon 1.0 ST Array (Affymetrix; n = 2/developmental stage or experimental condition). The Array image (CEL) files were read into either Partek's Genome Suite (Affymetrix) or R/BioConductor using the oligo package 41 RMA normalized, and assembled into core transcript models using NetAffx-supplied annotations. A linear model was fit with the limma package 42 and enrichment contrasts selected based on a log 2 fold-change of + 1 or greater and a Benjamini-Hochberg-adjusted p value of 0.05 or less. Early (E11 and E13) and late (E17 and E18) samples were grouped separately. Of the 4,092 and 2,604 transcripts found to be significantly enriched by HuR RIP at early and late times, respectively, there were 2,244 and 1,803 unique, assigned Entrez Gene IDs and so these were selected for functional analysis using WebGestalt 32 . These transcripts were consolidated into gene symbols for testing overlaps with FMRP-binding mRNAs or autism-associated mRNAs (Fig. S1) using the GeneOverlap package in Bioconductor. Probeset and exon plots were produced using the GenomeGraphs package in Bioconductor. Cloning and Constructs. The full-length of Foxp2 3′ UTR was amplified by PCR using the forward primer 5′-TCGCGACTAGTGAACGAACTTGTGACACCTCAGTG-3′ and reverse primer 5′-CATATGCGGCCGTGTACTTCAGAAATGTAACCAACTG-3′. The Foxp1 3′ UTR was amplified with the forward primer 5′-TCGCGACTAGTACATGGAGTGAACCTCTGGGC-3′ and reverse primer 5′-CATATGCGGCCGCATTTAAGAATGCGCTCATGTCAG-3′ . Restriction enzyme sites Spe I and Eag I were added to each of the forward primers and reverse primers, respectively, and used for cloning the Foxp2 and Foxp1 3′ UTRs downstream of a firefly Luciferase cassette in the pcDNA3.1-Luciferase plasmid which digested with Xba I and Eag I 24 . Translational Luciferase Assay. To test the effects of these phosphorylation sites on Foxp1 and Foxp2 translation, we performed double transfections with the phosphomutants, phosphomimics, and Foxp1 or Foxp2 with firefly luciferase proteins in their 3′ UTR regions IN HEK293T cells 24 . At least six biological replicates and two technical replicates were performed for each sample; HuR-TAP was biologically replicated at least twenty-four times. Luciferase readings were taken to determine protein expression of the Foxp1/2 constructs (Luc) on a TD-20/20 Luminometer (Turner Designs) and using the Dual-Luciferase Reporter Assay (Promega E1910). Renilla readings were taken as a control to ensure comparable cell counts. Luciferase reagents were made fresh and came from the same kit 44 . Additionally, RNA was isolated from these samples and quantified with qRT using a luciferase probe (luc) ( Table S5). Using these two readings (Luc/luc), protein and mRNA levels in each sample, we were able to generate a proxy for translation by normalizing protein levels to levels of mRNA 24 . 
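In outline, this proxy amounts to the following computation; the table below uses invented readings and hypothetical column names:

```python
import numpy as np
import pandas as pd

# Hypothetical raw readings for one construct in two conditions
df = pd.DataFrame({
    "condition": ["HuR_WT", "HuR_WT", "S100D", "S100D"],
    "firefly":   [12000.0, 11500.0, 30500.0, 28900.0],  # luminometer reading (protein, Luc)
    "renilla":   [9800.0, 10100.0, 9500.0, 10400.0],    # co-transfection control
    "luc_mrna":  [1.05, 0.98, 1.10, 1.02],              # qRT-PCR level of the luciferase transcript
})

df["Luc"] = df["firefly"] / df["renilla"]        # Renilla-normalised protein signal
df["translation"] = df["Luc"] / df["luc_mrna"]   # protein per unit mRNA (Luc/luc)

wt = df.loc[df.condition == "HuR_WT", "translation"].mean()
ratios = df.groupby("condition")["translation"].mean() / wt
print(np.log10(ratios))                          # log10 of mutant over wild-type translation
```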
A metric for measuring translation is useful, as levels of mRNA and protein are both informative points of information to the actual state of the cell. For example, in relativity to each other, high levels of mRNA and low protein levels indicate low translation whereas low levels of mRNA and high levels of protein indicate high translation. Data is presented in Fig. 2h,i as their log 10 values, as we are generating a ratio of mutant translation levels over wild-type translation levels. Outlier analysis was determined by the ROUT test with Q = 1% and the data was subsequently analyzed without the outlier. 6 outliers were removed from the Foxp1 assay (Fig. 2h) out of 160 data points (3.8%) and 10 outliers were removed from the Foxp2 assay (Fig. 2i) out of 172 data points (5.8%). Statistical analysis. One-way ANOVA and SPSS software were used for statistical analysis when multiple conditions were compared, while t-test was used when only two experimental conditions within a developmental stage were compared, unless otherwise indicated. N of each experiment and the respective p-values are indicated in either figure legend, results, methods, or in the main text. A Kruskal-Wallis Multiple Comparisons test was performed on data represented in Fig. 2h,i and was calculated in GraphPad's Prism 6.0 software. A two-way ANOVA was used to compare cultured neurons represented in Fig. S2f (n = 975 neurons, 4 slides, 8 coverslips) and S2g (n = 1082 neurons, 4 slides, 8 coverslips). Scoring and statistical analysis was performed by a blinded experimenter.
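For the group comparison step, a minimal sketch of the omnibus Kruskal-Wallis test on such log10 translation ratios is shown below (simulated values; the multiple-comparison follow-up performed in Prism is not reproduced):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(6)
# Hypothetical log10 translation ratios (mutant / wild type) for three HuR variants
s100d = rng.normal(0.25, 0.1, 12)
t118a = rng.normal(0.20, 0.1, 12)
s242a = rng.normal(-0.15, 0.1, 12)

stat, p = kruskal(s100d, t118a, s242a)
print(f"H = {stat:.2f}, p = {p:.3g}")
```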
The impact of an external obstacle on daylight in deep office spaces in a hot arid zone

The use of effective daylighting systems is one of the main factors that influences energy consumption in buildings, especially in deep office spaces. The need for innovative daylighting systems with optimum overall daylighting performance - especially in terms of energy-saving performance - is therefore an essential design element for such spaces. Light shelf systems are among the most promising daylighting systems for this purpose. A parametric approach was implemented in which all possible combinations of the various design parameters for the two main design elements of the light shelf system (light shelf and room specifications) were computed through Diva-for-Rhino (Grasshopper). The study of optimized light shelf systems in deep office spaces was carried out with an external obstacle (neighbor) and without it - according to the Egyptian building codes - in order to identify the impact of such design elements on the performance of the optimized light shelf system in the four main orientations in a hot arid zone climate.

Introduction
A light shelf is a daylight-redirecting system intended to bounce daylight to the deepest side of a room, a function that suits all climates [1]. The light shelf was developed to create uniform indoor illuminance; however, in hot climates the unshaded clerestory above the shelf transmits high solar heat gain. In a dense urban context, these advantages and disadvantages may vary with the context and position of the fenestration [2]. This confirms the need to study the effect of a neighboring obstruction in combination with the reflector system. One previous study employed scaled physical models and computational simulation methods to examine the daylighting performance of light shelves under several tropical sky conditions in Subang, Malaysia, i.e., intermediate sky with direct sunlight, intermediate sky without direct sunlight, and overcast sky. Under clear sky conditions, the daylighting performance of a light shelf depends on the dynamic movement of the solar position. Franco [3] proposed tilted and automatic light shelves to solve daylighting problems in the hot tropics by adjusting the elevation of the internal shelf to the dynamic movement of the solar position. Previous studies on light shelves were limited to the building context. Recently, studies of the urban environment have emerged widely, following awareness of their impacts on urban climatology. Shafaghat et al. identified aspect ratio and street orientation as the most important urban features. The impacts of the urban height-to-width ratio and the spacing-distance-to-length ratio on building energy demand have been studied by Kesten et al. [5]. Additionally, canyon surfaces are understood to be vital in determining the thermal performance of the urban canyon [4], which affects the energy consumption of the surrounding buildings. Akbari et al. [6] found that the surroundings' surface albedo or emissivity can modify the energy balance of buildings: high-albedo materials reduce the amount of solar radiation absorbed by the building envelope. In multi-story buildings, the effects of urban geometry and texture on indoor illuminance and radiation fluxes may vary for each floor level. It could therefore be interesting to relate these factors to light shelf planning in multi-story buildings in terms of building energy performance, in order to measure the effectiveness of the light shelf.
Methodology
This study forms part of a broader investigation into improving the performance of light shelf systems. It builds on the outcomes of a previous study phase, in particular the identification of design indicators and key design elements for developing the performance of light shelf systems. Based on that conclusion, the results were examined from a different perspective: the effect of integrating the neighbor's presence with the light shelf system in the four main orientations. The various cases were tested experimentally using a parametric design approach in the Diva for Rhino-Grasshopper program, based on the CIE standard sky, in order to define the impact of an external obstacle (neighbor), or its absence, on the performance of an optimized light shelf system in deep office spaces in the four main orientations in a hot arid zone climate (Ahmed, Reham, Doha, 2017 & 2018) [7,8,9].

Light Shelf Models
The light shelf is a parametric panel installed below the clerestory and above the view window. The hypothetical office is 6 x 12 x 3.3 m, which represents a large, deep space in a multi-story building, with the properties shown in the accompanying figure.

Light shelf obstruction angle: The light shelf obstruction angle is measured as the angle (α) between the extension line of the external light shelf's upper surface plane (a) and the line from the light shelf's starting point to the top of the neighboring building (b), fig. (2). The obstruction angle depends mainly on the distance from the neighbor, which was defined for the exploratory test phase of this study according to the Egyptian construction law for new cities (building height = 1.5 x street width). This ensures that the impact of the neighbor distance remains constant however it is changed, because the distance is directly proportional to the height, which means the obstruction angle is constant. Since 18 meters (a 49° obstruction angle) is more commonly used in planning office building zones (building height = 27 m = 6 storeys), this study defined a distance of 18 meters from the neighbor, giving an obstruction angle of 49°, for the further developed study cases with a neighbor present.

Daylighting and Energy Simulations
The calculation of energy use for lighting was based on the daylighting performance computed using the Diva for Rhino-Grasshopper program according to the IES/LEED v4/ASHRAE standards. Two metrics in LEED v4 are codified for evaluating daylight autonomy design, which allow a daylit space to be evaluated over a one-year period using two different performance parameters: sufficiency of daylight illuminance and the potential risk of excessive sunlight penetration [10]. These two metrics are the Spatial Daylight Autonomy (sDA) and Annual Sun Exposure (ASE) metrics, which together form a clear picture of daylight performance. sDA describes how much of the space receives sufficient daylight; for office spaces, at least 55-75% of the floor area must achieve sDA (300 lux / 50% of the annual occupied hours). sDA has no upper limit on illuminance levels, and therefore ASE is used to describe how much of the space receives too much direct sunlight, which can cause visual discomfort (glare) or increase the cooling loads. ASE measures the percentage of floor area that receives at least 1000 lux for at least 250 occupied hours per year, which must not exceed 10% of the floor area [11].
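To illustrate the two calculations just described, the short R sketch below computes an obstruction angle from the setback geometry and applies the LEED v4 sDA/ASE pass criteria cited above. The function names, and the example rise of 20.7 m (chosen only so that the 18 m setback reproduces the reported ~49° angle), are our own assumptions rather than values taken from the study's model.

```r
# Obstruction angle: angle between the light shelf's upper-surface plane
# and the sight line to the top of the neighbouring building.
obstruction_angle <- function(height_diff_m, distance_m) {
  atan(height_diff_m / distance_m) * 180 / pi
}
obstruction_angle(20.7, 18)   # ~49 degrees for the 18 m setback used in the study

# Simple pass/fail check against the LEED v4 thresholds quoted in the text:
# sDA(300 lux, 50%) over at least 55% of floor area, ASE(1000 lux, 250 h) at most 10%.
daylight_ok <- function(sDA_percent, ASE_percent) {
  c(sDA_pass = sDA_percent >= 55, ASE_pass = ASE_percent <= 10)
}
daylight_ok(sDA_percent = 76.5, ASE_percent = 7)   # e.g. the stage-two south case reported below
```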
Data Analysis Methods
The first stage: The basic case (without any obstruction and without the light shelf system) in the south, west, east and north orientations achieved 80.5%, 75%, 72% and 86% sDA (300 lux, 50%), respectively, and 27.5%, 39.5%, 39.5% and 0% ASE (1000 lux, 250 h), respectively. We find that the amount of daylight entering is acceptable, but the amount of direct light is higher than acceptable.
The second stage: This case (without obstruction and with the light shelf system) in the south, west, east and north orientations achieved 76.5%, 72%, 72% and 75.5% sDA (300 lux, 50%), respectively, and 7%, 18.5%, 16% and 0% ASE (1000 lux, 250 h), respectively. We find that the amount of daylight entering is only slightly reduced, while the amount of direct light is now minimal, owing to the presence of the light shelf.
The third stage: This case (with obstruction and without the light shelf system) in the south, west, east and north orientations achieved 43%, 53%, 53% and 60.5% sDA (300 lux, 50%), respectively, and 10%, 12%, 12% and 0% ASE (1000 lux, 250 h), respectively. We find that the amount of daylight entering is substantially reduced, and the amount of direct light is also low, owing to the presence of the neighbor.
The fourth stage: This case (with obstruction and with the light shelf system) in the south, west, east and north orientations achieved 53%, 64.5%, 64% and 74% sDA (300 lux, 50%), respectively, and 5%, 6%, 6% and 0% ASE (1000 lux, 250 h), respectively. We find that integrating the light shelf system with the neighboring obstruction allows more daylight to enter than in the third case, while the amount of direct light remains minimal.

Summary and conclusion
Across the four stages and four orientations, the best results in terms of daylight performance and energy consumption were obtained with the light shelf alone, without a neighbour (the second stage); however, if a neighbour is present, it is better to integrate it with the light shelf (the fourth stage). The effect of the neighbour combined with the light shelf system on total energy consumption stems from the electric lighting loads used to compensate for the reduced quality of the available daylight, not from changes in cooling or heating loads. The ranking of the positive effect of the neighbour integrated with the light shelf system across the four orientations is as follows: first the northern orientation, because the shelf collects light from the south and redirects it to the north; then the eastern and western orientations; and finally the southern orientation, where the light is collected from the north and is therefore weaker. In conclusion, the study determined the effect of a neighbour integrated with a light shelf system, but it is recommended that the light shelf design be developed further in all orientations so as to balance the amount of daylight admitted and blocked, achieve daylight harmony, prevent glare, and thus reduce dependence on electric lighting during the day and reduce energy consumption while achieving daylight efficiency, which is the optimum goal.
Fatty Acids Regulate Porcine Reproductive and Respiratory Syndrome Virus Infection via the AMPK-ACC1 Signaling Pathway

Lipids play a crucial role in the replication of porcine reproductive and respiratory syndrome virus (PRRSV), a porcine virus that is endemic throughout the world. However, little is known about the effect of fatty acids (FAs), a type of vital lipid, on PRRSV infection. In this study, we found that treatment with a FA biosynthetic inhibitor significantly inhibited PRRSV propagation, indicating the necessity of FAs for optimal replication of PRRSV. Further study revealed that 5′-adenosine monophosphate (AMP)-activated protein kinase (AMPK), a key kinase antagonizing FA biosynthesis, was strongly activated by PRRSV and the pharmacological activator of AMPK exhibited anti-PRRSV activity. Additionally, we found that acetyl-CoA carboxylase 1 (ACC1), the first rate-limiting enzyme in the FA biosynthesis pathway, was phosphorylated (inactive form) by PRRSV-activated AMPK, and active ACC1 was required for PRRSV proliferation, suggesting that the PRRSV infection induced the activation of the AMPK-ACC1 pathway, which was not conducive to PRRSV replication. This work provides new evidence about the mechanisms involved in host lipid metabolism during PRRSV infection and identifies novel potential antiviral targets for PRRSV.

Introduction
Porcine reproductive and respiratory syndrome (PRRS) has been one of the most economically prominent swine diseases worldwide for decades [1]. It is typically characterized by reproductive failure in pregnant sows, reduced boar semen quality, and severe respiratory disease in infected newborn and young pigs [2]. PRRS virus (PRRSV), the etiologic agent of PRRS, is an enveloped, positive sense, single-stranded RNA virus classified within the order Nidovirales in the family Arteriviridae [3]. Unfortunately, the current commercial vaccines for PRRS fail to provide sustainable disease control due to the immunosuppression and genetic heterogeneities of PRRSV, and no efficient antiviral agents against PRRS are available presently, which leads to globally rising outbreaks of PRRS and subsequent tremendous economic losses [4][5][6]. The development of potent broad-spectrum antiviral therapy against PRRS, by better understanding the pathogenesis of the disease, is essential to reduce the transmission of PRRS [7]. Viruses always exploit and reprogram cellular components to form an optimal environment for the replication of viral progenies, many of which are dependent on cellular lipid signaling, synthesis, and metabolism [8][9][10]. The close interaction between virus and host cellular lipids occurs at several stages in the virus replication cycle, including replication, assembly, and secretion [11]. As more is learned about the role of lipids in virus replication, the reprogramming of cellular lipid metabolic pathways under virus infection, such as glycolytic pathway and cholesterol and fatty acid (FA) synthesis signaling, will be a rapidly emerging theme. For example, Dengue virus (DENV) provokes a remarkable increase in intracellular cholesterol and FA levels and stimulates glycolysis for optimal replication [12][13][14]. Accordingly, pharmacological inhibitors targeting lipid metabolic pathways involved in the viral replication cycle provide novel targets for future antiviral agent development.
Drugs that disrupt FA biosynthesis pathways have been reported to possess an antiviral effect against multiple enveloped viruses, including hepatitis delta virus, hepatitis C virus (HCV), human immunodeficiency virus, Rift Valley fever virus, and hepatitis B virus [15][16][17][18][19], confirming the significance of FAs in virus replication. 5′-adenosine monophosphate (AMP)-activated protein kinase (AMPK), a heterotrimeric complex consisting of a catalytic alpha subunit and regulatory beta and gamma subunits, is an evolutionarily conserved serine/threonine kinase [20]. The first known and most important function of AMPK is the regulation of lipid metabolism. AMPK is activated through phosphorylation of the threonine (Thr) residue 172 on the alpha subunit, which inhibits both FA and cholesterol synthesis, mainly by separately inducing the phosphorylation of their key rate-limiting enzymes, acetyl-CoA carboxylase 1 (ACC1) and HMG-CoA reductase (HMGCR) [21,22]. Additionally, AMPK plays a significant role in maintaining dynamic energy homeostasis [23]. Intensive studies spanning decades have demonstrated that AMPK is closely linked with multiple metabolic pathways and physiological processes. An imbalance in AMPK activity is associated with various chronic diseases including metabolic syndrome, obesity, stress, type II diabetes, or even reduced longevity and the promotion of cancer [24][25][26][27]. Because of its significance, AMPK has been considered a potential target in the treatment of multiple diseases. In the work described here, we demonstrated that the pharmacological inhibitor (C75) of the FA synthesis pathway can suppress PRRSV infection, suggesting a significant role of FAs during PRRSV infection. Furthermore, we found that AMPK activity was positively regulated in PRRSV-infected cells and that PRRSV-activated AMPK in turn drove a decline of ACC1 activity. Both pharmacological activators of AMPK and inhibitors of ACC1 had anti-PRRSV effects, indicating that host cells antagonized PRRSV infection via activation of the AMPK-ACC1 signaling pathway. These findings highlight FA metabolism as a new potential antiviral target.

Cells, Virus, and Reagents
PK-15 CD163 cells (gifted by En-min Zhou at Northwest A&F University, China) [28], a pig kidney cell line stably expressing the PRRSV receptor CD163, were cultured in Dulbecco's Modified Eagle's Medium (DMEM) (Invitrogen, CA, USA). Primary porcine alveolar macrophages (PAMs) were kept in Roswell Park Memorial Institute (RPMI)-1640 medium (HyClone, Utah, USA). PRRSV strain WUH3, a highly pathogenic type 2 (North American) PRRSV, was isolated from the brains of pigs suffering from high-fever syndrome in China [29]. PRRSV was amplified, and the titer was determined, in PK-15 CD163 cells. The FA synthase inhibitor C75 (C5490) was purchased from Sigma (MA, USA), dissolved in DMSO at a concentration of 15 mM, and stored at −80 °C. The AMPK activator A769662 (HY-50662) and the ACC1 inhibitor CP-640186 (HY-15259) were obtained from MedChemExpress (MCE, NJ, USA) and stored at −80 °C at concentrations of 150 mM and 5 mM, respectively.

Small Interfering RNAs (siRNAs) and Transfection
The siRNAs targeting porcine AMPK and the negative control (NC) siRNA were designed by Genepharma (Suzhou, China), and the sequences are listed in Table 1. The siRNAs were dissolved in DEPC water and transfected at a final concentration of 50 nM using jetPRIME® Transfection Reagent (Polyplus-transfection® SA, Strasbourg, France).
Table 1. The siRNA sequences targeting porcine 5′-adenosine monophosphate (AMP)-activated protein kinase (AMPK).

TCID50 Assay
PK-15 CD163 cells were pretreated with DMSO or inhibitors for 6 h and then infected with PRRSV (MOI = 0.5) in the presence of inhibitors at the corresponding concentrations for 24 h. PRRSV samples were harvested through repeated freeze-thawing and centrifugation for the TCID50 assay. Next, fresh PK-15 CD163 cells were seeded in 96-well plates and then infected with serial 10-fold dilutions of the PRRSV samples in eight replicates. The plates were incubated for approximately 96 h before virus titers were calculated. PRRSV titers were expressed as TCID50 per milliliter using the Reed-Muench method.

Plaque Assay
All samples were harvested as described above for the TCID50 assay. PK-15 CD163 cells seeded in six-well plates were infected with serial 10-fold dilutions of PRRSV samples for 2 h, the viral inocula were removed, and cells were overlaid with a mixture of half 1.6% low-melting-point agarose and half 2% DMEM. These were incubated at 4 °C for 10 min before being cultured at 37 °C for approximately 48 h. Finally, cells were stained with neutral red and the number of plaques was counted.

Statistical Analysis
GraphPad Prism 5 software (GraphPad Software, CA, USA) was used for data analysis using two-tailed unpaired t tests.

RNA Extraction and Quantitative Real-Time PCR (qRT-PCR)
Total RNA was extracted with TRIzol reagent (Invitrogen, CA, USA) and then reverse transcribed into cDNA by reverse transcriptase (Roche, Mannheim, Germany). The qRT-PCR experiments were performed in triplicate. Absolute mRNA levels of the PRRSV NSP9 gene (primers 5′-GTTGATGGTGGTGTTGTGCT-3′ and 5′-AGACCAATTTTAGGCGCGTC-3′) were calculated using the amplification curve of its standard plasmid. Real-time PCR was performed using Power SYBR Green PCR master mix (Applied Biosystems, CA, USA) in an ABI 7500 real-time PCR system (Applied Biosystems, CA, USA).

Blocking Fatty Acid Synthesis Inhibits PRRSV Replication
To investigate the role of FAs in PRRSV infection, C75, an FA synthesis inhibitor targeting FA synthase (FASN), was used. First, cell cytotoxicity was tested in PK-15 CD163 cells, and C75 concentrations below 20 µM were chosen for further experiments (Figure 1A). The results of the western blot assays revealed that treatment with C75 significantly inhibited PRRSV-N production in a dose-dependent manner, as seen in Figure 1B. Moreover, we observed a notable decrease in the fluorescence intensity of PRRSV-N protein and in virus titer in the presence of C75 through the indirect immunofluorescence assay and TCID50 assay, as seen in Figure 1C and 1D. These results confirmed the anti-PRRSV activity of C75, indicating that FAs are important players during PRRSV infection.

PRRSV Infection Upregulates AMPK Activity
FAs are required for optimal replication of PRRSV, which prompted us to further investigate whether intracellular FA biosynthesis signaling pathways are affected by PRRSV. Given the importance of AMPK as a key kinase in FA metabolism, AMPK activity variation during PRRSV infection was evaluated by detecting the phosphorylation level of AMPK at residue Thr172 of the alpha subunit [30].
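Referring back to the TCID50 assay described in the Methods above, the following is a hedged R sketch of a Reed-Muench endpoint calculation of the kind cited there; the well counts in the example are illustrative and are not data from this study.

```r
# Reed-Muench 50% endpoint from an eight-replicate, 10-fold dilution series.
# 'infected' = wells showing CPE at each dilution (most concentrated first).
reed_muench_tcid50 <- function(log10_dilution, infected, n_wells = 8) {
  cum_inf   <- rev(cumsum(rev(infected)))          # cumulative infected, summed from the dilute end
  cum_uninf <- cumsum(n_wells - infected)          # cumulative uninfected, summed from the concentrated end
  pct       <- 100 * cum_inf / (cum_inf + cum_uninf)
  above     <- max(which(pct >= 50))               # last dilution with >= 50% infected
  pd        <- (pct[above] - 50) / (pct[above] - pct[above + 1])  # proportionate distance
  log10_endpoint <- log10_dilution[above] - pd     # 10-fold steps, so subtract pd directly
  10^(-log10_endpoint)                             # TCID50 per inoculated volume
}

reed_muench_tcid50(log10_dilution = -(1:8),
                   infected       = c(8, 8, 7, 5, 2, 1, 0, 0))
# ~2.4e4 TCID50 per inoculum; divide by the inoculum volume to report per mL.
```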
Only minor differences in total protein levels of AMPK were observed, whereas AMPK phosphorylation levels rose in a time-dependent manner, with a 2.6-fold increase in PK-15 CD163 cells, as seen in Figure 2A, and a 6.0-fold increase in PAMs, as seen in Figure 2B.

AMPK Activation Restricts PRRSV Infection
The above data demonstrated that PRRSV significantly activated AMPK, so we continued to investigate whether AMPK activation affects PRRSV replication. The AMPK activator A769662 was selected for further research. First, the cytotoxicity of A769662 was evaluated with the MTT assay, which revealed no obvious cytotoxicity in PK-15 CD163 cells treated with A769662 at concentrations below 150 µM, as seen in Figure 3A. Next, PRRSV-infected PK-15 CD163 cells were treated with various concentrations of A769662. The results of the western blot assay indicated that A769662 reduced PRRSV-N expression in a dose-dependent manner, as seen in Figure 3B. Likewise, the results of the plaque assay revealed an approximately four-fold decline in the number of plaques formed in A769662-treated cells compared with DMSO-treated cells, as seen in Figure 3C. Altogether, these data indicated that activated AMPK had an antiviral effect on PRRSV in PK-15 CD163 cells. To further evaluate the effect of AMPK on PRRSV infection, two AMPK-specific siRNAs (siAMPK-1 and siAMPK-2) were designed, and the interference efficiency of each siRNA was determined by western blot assay. The results show that both siRNAs were able to knock down AMPK expression, with siAMPK-2 displaying higher knockdown efficiency, as seen in Figure 4A. In PK-15 CD163 cells transfected with these two siRNAs, PRRSV mRNA levels and virus titers were examined by qRT-PCR assay and plaque assay, respectively. As shown in Figure 4B,C, knockdown of AMPK promoted the RNA levels of the PRRSV NSP9 gene and increased PRRSV titers. Moreover, siAMPK-2 had better silencing efficiency and was more favorable for PRRSV infection. All the above results suggest that PRRSV-induced AMPK activation is not conducive to PRRSV.

AMPK is Involved in PRRSV-Mediated ACC1 Activity Reduction
Considering that AMPK inactivated acetyl-CoA carboxylase 1 (ACC1), the rate-limiting enzyme of FA synthesis, and suppressed the levels of cellular FAs, as seen in Figure 5A, the effect of PRRSV-activated AMPK on ACC1 activation in PRRSV-infected cells was tested to further investigate the mechanism by which AMPK reduces PRRSV replication. As shown in Figure 5B, the levels of phosphorylated ACC1 (inactivated ACC1) increased in a time-dependent manner after PRRSV infection, which was consistent with the activation status of AMPK. Furthermore, we found that specific siRNA-mediated knockdown of AMPK significantly decreased PRRSV-induced ACC1 phosphorylation, as seen in Figure 5C, suggesting that ACC1 activity is modulated by AMPK during PRRSV infection. This proved that the AMPK-ACC1 pathway, related to the negative regulation of FA metabolism, was activated in PRRSV-infected cells.

ACC1 Inhibitor Disrupts PRRSV Replication
To further assess the effect of ACC1 on PRRSV infection, PK-15 CD163 cells were treated with CP-640186, an ACC1-specific pharmacological inhibitor. As shown in Figure 6A, no obvious cytotoxicity was observed after treatment with CP-640186 at concentrations below 10 µM. Next, PRRSV-infected PK-15 CD163 cells were treated with various concentrations of CP-640186 (0.2, 1, and 5 µM).
Western blot analysis showed that CP-640186 inhibited the expression of PRRSV-N protein in a dose-dependent manner, as seen in Figure 6B. Additionally, we observed a notable decrease in the fluorescence intensity of PRRSV-N protein and in PRRSV titers in the presence of CP-640186 through indirect immunofluorescence assay and TCID50 assay, respectively, as seen in Figure 6C,D. These data are consistent with the above results that both C75 (FA synthase inhibitor) and A769662 (AMPK activator) restricted PRRSV proliferation, collectively emphasizing the significance of FAs in PRRSV infection.

Discussion
In recent years, we have gained insights into the importance of lipids in virus replication. For non-enveloped viruses, glycosphingolipids (GSLs) can directly function as attachment receptors to initiate infection [31]. For enveloped viruses, such as HCV and bovine viral diarrheal virus (BVDV), lipid traffic receptors have been described as indirect co-receptors for viral entry [32]. Furthermore, enveloped viruses can regulate lipid metabolism to rearrange intracellular membrane systems, mainly the endoplasmic reticulum (ER), mitochondria, and Golgi, and to establish replication sites that concentrate the viral and host proteins required for viral replication [33]. Additionally, lipids can recruit viral RNAs to separate them from innate immune sensors, which is a novel strategy for viruses to avoid the host antiviral immune response [34]. For PRRSV, our previous research and that of other groups demonstrated that cholesterol and lipid rafts are potent participants in the PRRSV replication cycle [35][36][37][38]. Here, we found that fatty acids, another important kind of lipid, are essential components for PRRSV infection. However, the mechanism(s) involved in FA regulation during PRRSV proliferation remains unknown. Previous studies have found that alteration of FA properties can drastically affect the topological structure of cellular membranes [39,40], interfere with membrane fusion, and remodel the envelopment required for virus replication [41,42]; these may be potential subjects for future mechanistic studies. FA synthesis is a process catalyzed by various enzymes: first, ACC1 catalyzes the transformation of acetyl-CoA into malonyl-CoA, which is then converted to de novo palmitate through FA synthase; next, palmitate undergoes chain propagation to produce saturated FAs via FA elongase [43], as seen in Figure 7. It is common to suppress FA synthesis with inhibitors targeting the related enzymes, which have been proven to decrease the replication of various viruses, including respiratory viruses [44] and flaviviruses [45]. Additionally, evidence has shown that drugs targeting ACC1 are effective for antiviral treatment [46][47][48]. In this study, the antiviral effects of the ACC1 inhibitor (CP-640186) and the FASN inhibitor (C75) on PRRSV replication were demonstrated as well, supporting the significance of studies on FA metabolism for the development of novel antiviral agents. AMPK is a potent inhibitory regulator of FA metabolism. Previous studies reported that some viruses, such as Epstein-Barr virus, HCV, and DENV, inhibit AMPK activity to increase lipid deposition, which creates a conducive cellular lipid environment for virus replication [49][50][51][52]. Conversely, human cytomegalovirus can activate AMPK to benefit its infection [53]. In this study, we found that PRRSV infection dramatically increased the levels of phosphorylated AMPK (active form), which antagonized PRRSV replication.
Activated AMPK inhibited FA synthesis by reducing the activity of ACC1, the rate-limiting enzyme of FA biosynthesis pathways. During infection with some viruses, such as Rift Valley fever virus, the AMPK-ACC1 pathway is activated, leading to a decrease in FA levels and viral progeny [18]. Concomitant with that, enhanced ACC1 phosphorylation (inactive form) in PRRSV-infected cells was also shown to depend on AMPK activation, and the activation of the AMPK-ACC1 pathway resulted in suppressed PRRSV replication, as seen in Figure 7. However, FA synthesis is enhanced during infection with some other viruses through different mechanisms. For example, DENV NS3 protein interacts with FASN, which is then recruited to viral replication sites to upregulate FA production [13]. Moreover, human cytomegalovirus induces the expression of FA elongase to produce more of the saturated FAs required for virus replication via the mTOR and SREBP-1 pathways [54]. It is well known that viral infections are competitive processes between viruses and host cells. However, little is known about the molecular details of the influence of FAs on PRRSV replication. As an enveloped RNA virus, PRRSV might induce membrane expansion for entry and membrane rearrangement of the ER to form a replication complex. This suggests that FAs, key components of cellular membranes, may be involved in the entry and replication stages of the PRRSV replication cycle. Moreover, for some viruses, such as HCV and enterovirus, virus proteins and RNA are transported to lipid droplets (LDs) as sites of virus assembly [55,56]. Considering that FAs are related to the formation of LDs, there is the possibility that FAs regulate PRRSV replication by affecting the production of LDs, which requires further research to confirm. In addition to the previously described regulatory functions, FA metabolism is involved in immune responses, such as inflammation signaling [57]. This information provides new opportunities for the study of potential associations between FA metabolism and PRRSV-induced clinical features, including interstitial pneumonia. Further investigation of the interactions between FAs and PRRSV infection is essential to improve our knowledge of the pathogenesis of PRRSV and to provide important insights for the development of novel anti-PRRSV agents.

Conclusions
In conclusion, we found that PRRSV infection requires and exploits host cell FAs to produce progeny, while in response, host cells develop countermeasures by activating the AMPK-ACC1 signaling pathway involved in FA metabolism, thus suppressing PRRSV infection. These findings highlight FA metabolism as a new potential antiviral target against PRRSV.

Conflicts of Interest: The authors declare no conflicts of interest.
Aortic dissection complicated by paraplegia

This case report illustrates the complexity and severity of acute aortic dissection. This condition has one of the highest mortality rates of any cardiovascular emergency and is often extremely challenging to treat, with both open and endovascular intervention often required. This patient presented with a ruptured aortic dissection, which is rare and often immediately fatal. He survived urgent, extensive aortic endovascular stenting but, despite preventative measures, developed spinal cord ischaemia post-intervention. The contemporary management of acute aortic dissection, and the pathophysiology and prevention of spinal cord ischaemia, are covered in this case report.

Introduction
Acute aortic dissection is a medical emergency accompanied by high morbidity and mortality 1 . This is highlighted by the pioneering review of 505 patients reported by Hirst et al. in 1958 2 . Advances in surgical intervention have subsequently lowered the mortality rate 2,3 . Aortic dissection remains one of the most common aortic emergencies, with an incidence of 3 per 100,000 patients and a male predominance of 65% 4 . The condition is characterized by separation of the layers of the aortic wall allowing entry of blood into the intima-media region; hence the dissection is further propagated 1 . The resulting compromised perfusion and systemic ischaemia is responsible for the characteristic severe pain radiating to the back 5 . Management ranges from medical analgesia, vasodilators and beta-blockers, to surgical intervention including open surgery, implementation of an endovascular stent graft, or a combination of these solutions 5 . Associated morbidities include rupture, stroke, acute renal failure, bowel ischaemia, peripheral ischaemia and even paraplegia 5 . This report focuses on the rare latter complication, paraplegia, experienced by Mr X.

Case presentation
Mr X is a 67-year-old man who was admitted to the John Radcliffe with a thoracic aortic dissection. Two days prior to admission, Mr X experienced a 'sudden shooting pain' down the left side of his back when he was getting out of his chair in order to go to bed. He retired to bed, but the pain continued through the night and kept him awake. As the pain was still present, Mr X decided to drive to his local GP in the morning, but en route he experienced a very severe pain, like a 'stabbing in the back'. It was so severe that he 'couldn't breathe' and had to make an emergency stop. Mr X was able to turn the car around to return to his house and call 999 for an ambulance, before admission into ICU. Mr X's past medical history shows recurrent pneumothoraces leading to a bilateral pleurectomy, as well as prostate removal following a diagnosis of prostate cancer. He was not on any medication prior to the surgery and has NKDA. Mr X was an ex-smoker, but there was no other significant social or family history. Upon admission to ICU, Mr X was conscious and the pain had marginally subsided. However, he presented with the complication of hypotension. An MRI was taken, which confirmed the diagnosis of a ruptured type B aortic dissection extending from the subclavian artery to his common iliac artery. This case was managed surgically with thoracic endovascular aortic repair (TEVAR). Pre-operatively, Mr X was stable but tachycardic, and a small right-sided haemothorax was visible on CT. The operation involved placing the main body of the stent through the right groin.
The deployment sequence was conducted using two ProGlides on the right side with manual pressure on the left side. He was under anaesthetic for 2.5 hours. CT scans were taken after the operation and Mr X remained in ICU for 4 days. Post-operatively, Mr X suffered the complication of paraplegia from level T5 below, due to spinal cord ischaemia. Many procedures were implemented to minimize the risk of this occurring during the surgery, including a spinal drain, O2-Hb transfusion, maintaining a high MAP, and placing Mr X in the supine position in ITU. Unfortunately, these measures were not able to prevent the complication of paraplegia. The spinal ischaemia has led to Mr X becoming doubly incontinent post-surgery. During his stay in hospital, he also developed a chest infection with crepitations in the right lower base of his lungs; this resolved following administration of co-amoxiclav. The current plan is to move Mr X to a specialist rehabilitation centre so that he can commence physiotherapy and begin, in his words, 'a new chapter in his life' adjusting to the paraplegia. The complication of paraplegia following TEVAR to manage aortic dissection will be explored in this case report.

Discussion

Aetiology of aortic dissection
The pathophysiology of aortic dissection is not completely understood. One hypothesis proposes an initial tear in the intima of the aorta, allowing blood to surge into the media and create a false lumen 6 . Another hypothesis postulates that the outer portion of the media (vasa vasorum) haemorrhages initially, which then leads to intimal rupture 6 . Common to both theories, blood then continues to flow, extending the dissection, typically in an anterograde manner 6 . The predisposition to aortic dissection has both histopathological and genetic components 4 . The most prevalent risk factor is hypertension, present in 75% of cases 4 . Other modifiable risk factors include smoking and drug use (such as cocaine and amphetamine) 4 . Traumatic aortic dissections are most commonly caused by traffic accidents or deceleration trauma 4 . The importance of inflammation in the pathophysiology of aortic dissection is demonstrated by the increased predisposition in patients with inflammatory disorders 7 . This includes vascular autoimmune diseases such as giant-cell arteritis and Takayasu's arteritis, as well as infections such as tuberculosis and syphilis 4 . There are many genetic risks linked to aortic syndromes; the large majority are connective tissue disorders, including Marfan's syndrome, Turner's syndrome and type 4 Ehlers-Danlos syndrome 4 .

Classification
Classification systems are in place to describe the type of aortic dissection. This grouping is beneficial in deciding the course of management. The two most prevalent classification systems are the DeBakey and Stanford systems 4 . These classify dissections in an anatomical manner, referring to the site of the intimal tear 4 . As part of the classification, the ascending aorta refers to the section of the aorta proximal to the brachiocephalic artery, and the descending aorta is distal to the left subclavian artery 4 . In the case of Mr X, pleural haematomas were noted on the chest radiograph, as shown in Figure 2 by the left mid and lower zone opacification. This radiograph also demonstrated incorrect placement of an NG tube, which was later rectified. Another initial investigation is the 12-lead ECG.
In the review by Hagan et al., non-specific abnormalities in the ECGs were shown; however, results were normal for 31% of patients 1 . Imaging studies are employed as a diagnostic tool, including contrast-enhanced CT angiography, particularly in type B dissection 1 . In the case of Mr X, the dissection began just after the left subclavian artery and extends to the bifurcation and into the left common iliac artery. This is seen on the CT angiogram in Figure 3a and 3b, where the false lumen can be identified by the darker shading. The images also show a large mediastinal haematoma and bilateral haemothoraces. Other diagnostic imaging includes transthoracic echo and transoesophageal echo; MRI is also used, although rarely 1,4 . Biomarkers are another key diagnostic tool for the future of diagnosis. Markers that show injury to the vascular smooth muscle, interstitium and elastic laminae can indicate dissection 4 . Currently, only D-dimer is used clinically to investigate suspected aortic dissection 4 . As a future prospect, fibrin degradation products can be assayed as a marker in acute dissection 4 . The medical management first aims to provide analgesia. The next priority is to control the blood pressure and to reduce the force of left ventricular ejection 4 . This, in turn, limits the propagation of the dissection 4,8 . The aim is to achieve a blood pressure of 100-120 mmHg 8 . Beta-blockers, such as labetalol as in the case of Mr X, are used for blood pressure control. These can be used in combination with vasodilating drugs such as ACE inhibitors, including ramipril, used in this case.

Management of Aortic Dissections
Surgical management includes both open repair and implementation of an endovascular stent (TEVAR). Surgical treatment, as opted for in type A dissections, aims to remove the entry into the false lumen and remodel the aortic true lumen with a graft (with or without re-implantation of the coronary arteries) 1 . Thirty-day mortality for ascending aortic dissection at experienced centres is between 10-35% 9 . From a propensity-matched retrospective analysis, survival rates in patients with acute type A dissection were 91% after 30 days, 74% after 1 year and 63% after 5 years 9 . Therefore, early open surgery is a suitable solution. However, there has been a recent movement towards endovascular repair. Following standard protocol, thoracic endovascular repair (TEVAR) was used as the management plan for Mr X's complicated type B aortic dissection; this is the first-line therapeutic option 8 .

TEVAR
Endovascular repair was introduced in 1999 by Dake et al. and has significantly reduced mortality rates compared to the era when the only surgical solution was open repair 10 . Thoracic endovascular aortic repair (TEVAR) is a minimally invasive procedure. It uses stent grafts to seal the primary tear and allow blood flow through the true lumen 10 . It is recommended that aortic coverage be kept to a minimum in order to minimize spinal cord ischaemia 10 . If there is poor perfusion of the branch vessels, endovascular revascularization may be performed by fenestration or branch-vessel stenting; however, this is not usually done in an emergency setting 10 . The stent used in the case of Mr X is shown in the CXR and CT image in Figure 5. The concept of repairing type B dissections without the need for open surgery, and its associated risks, is very valuable.
The stents are composed of Dacron or polytetrafluoroethylene with a stainless steel or nitinol skeleton. The procedure is carried out under X-ray fluoroscopic guidance and involves passing a device through the common femoral artery to an access sheath. This sheath is eventually removed to expose the stent. The meta-analysis of complicated type B dissections by Parker et al. compared a total of 942 patients from 29 different studies 10,11 . In-hospital mortality was 9%, and other major complications, including stroke (3.1%), paraplegia (1.9%), conversion to type A dissection (2%), bowel infarction (0.9%) and major amputation (0.2%), occurred in 8.1% 11 . Overall, technical success was achieved in 95% of the cases, with an in-hospital mortality of 9% 10 . This meta-analysis shows endovascular repair to be a promising solution for aortic dissection. Data collected from large registries show that hospital mortality is 32% for patients treated with surgery, 7% for endovascular techniques, and 10% for patients treated with only medical management 4 . Booher et al. created a Kaplan-Meier survival curve from the IRAD database for type B aortic dissections (Figure 6) 12 . This identifies the treatment option, along with the time period from onset, and its effect on mortality. The question still remains as to how best to treat a type B dissection, as in Mr X's case. The INSTEAD trial explored this by randomizing patients with uncomplicated type B aortic dissection between 2-52 weeks from onset to medical management or TEVAR. Five-year mortality was 11.1% for TEVAR compared to 19.3% for purely medical management 6 . Open surgery was compared to TEVAR by Fattori et al. in the International Registry of Acute Aortic Dissection (IRAD) 13 . The study reviewed 571 patients with acute descending dissection; 10% of patients had open surgery and 12% had endovascular repair 13 . There was a much better in-hospital mortality for TEVAR (10%) than for open surgery (34%) 10,13 .

Spinal cord ischaemia and paraplegia
Endovascular repair of aortic dissection has been associated with great success, but unfortunately there are rare but disastrous complications following a dissection which cannot always be prevented. These include paraplegia, as in the case of Mr X. Although there are advantages of TEVAR compared to open repair, there is still a significant incidence of spinal cord injury; the overall incidence ranges from 2.5-8% 8,14 . Scali et al. found a 9.2% incidence in a study looking at 741 TEVAR procedures 15 . The spinal cord ischaemia was caused by temporary obstruction of the spinal arteries, especially in critical zones such as the lower thoracic and lumbar segments 16 . This ischaemia of the spinal cord, which was found to be from the level of T5, led to the paraplegia. There are many cases that have reported this severe, although rare, complication. Weisman and Adams in 1944 described 38 cases of ischaemic necrosis of the spinal cord following aortic dissection 16,17 . They proposed that paralysis occurred following occlusion of the intercostal and lumbar arteries by dissection of the aortic wall. The development of paraplegia can be classified as immediate or delayed 18 . The former is a direct result of hypoperfusion of the spinal cord as well as secondary hypoxia 18 . On the other hand, delayed complications (which can occur up to 21 days following the surgery) are caused by reperfusion hyperaemia and free radical generation 18 .
This then leads to oedema of the cord, with hypotension in certain regions and reduced perfusion of the vasculature 18 . The latter is more associated with TEVAR than with open repair 14 . There are many factors that contribute to the occurrence of spinal cord ischaemia during and after aortic surgery 18 . Three key aspects were identified by Svensson et al.: the duration and degree of ischaemia, the failure to re-establish blood flow to the spinal cord after repair, and biochemically mediated reperfusion injury 19 . When looking at the TEVAR procedure specifically, spinal cord injury has been linked to the extent of aortic coverage, a history of prior aortic surgery, and hypotension at presentation 8 . The latter was present in the case of Mr X. Distal dissections have been found to have a greater incidence of spinal cord ischaemia 20 . The spinal cord has both a complex and a varied blood supply 20 . The vertebral artery and the costocervical trunk supply the cervical and upper thoracic cord 20 . This part is less prone to vascular insult 20 . The lower half of the spinal cord is supplied by direct branches from the aorta, including the intercostal, lumbar, iliolumbar and sacral arteries 20 . Here the major arterial supply of the cord is from T10-L1 and is known as the artery of Adamkiewicz 20 . These arteries in particular are sheared in aortic dissection 20 . The subsequent interruption of blood flow has a maximal insult on the mid-thoracic cord, as this area is a watershed zone between the blood supply of the upper and lower cord 20 .

Perioperative preventative measures
Many strategies have been put into place to prevent spinal cord injury and consequent paraplegia during endovascular repair of the dissection. These measures are credited with the declining incidence of paraplegia 8 . Based on the three key contributing factors, spinal cord protection methods have been implemented. In the case of Mr X, as mentioned above, CSF drainage, maintenance of MAP, and maintenance of a supine position were used. The drainage of CSF acts to reduce the severity of ischaemia. Animal studies have shown that decreasing the spinal fluid pressure leads to a decrease in the incidence of paraplegia 21 . This can be accomplished by CSF drainage and has been put into place clinically, with high-risk patients being given CSF drainage and naloxone 18 . This holds a slight controversy following a study that failed to show any benefits of CSF drainage alone 22 . This study has been criticized because there was a small volume of drainage (50 ml) and the drainage was not by free gravity 18 . Subsequently, more encouraging clinical results were found by Svensson et al., who allowed drainage freely by gravity 23 . This study showed CSF drainage to be protective, and since then CSF drainage has been used as one of the key methods for spinal cord protection. Further methods of spinal cord injury protection include avoiding perioperative hypotension and creating a temporary endoleak, both allowing for sufficient perfusion 8 . Adjunct protective methods include perioperative induction of hypothermia and intrathecal medication 8,18 . Finally, during surgery itself, staging the procedure has been shown to provide spinal cord neuroprotection 14 . Although these measures are currently used in practice, there is no definitive recommendation for spinal cord injury prevention in TEVAR from the current literature; there are no randomized controlled trials evaluating any of the preventive measures.
The rationale behind the strategies used is drawn from those used in open surgery, as well as from the theoretical pathophysiology of spinal cord injury 8 .

Conclusion
The treatment of aortic dissection is still associated with significant morbidity and mortality. The progressive evolution of operative techniques, including TEVAR, has been able to improve this. The case of Mr X demonstrates a very severe, although rare, complication following aortic dissection and highlights the techniques employed to achieve spinal cord protection. The cause of the post-operative neurological complication is now mostly understood and is therefore targeted in the protective methodology. By directing our efforts towards the three major contributing factors - the duration and degree of ischaemia, failure to re-establish blood flow to the spinal cord after repair, and biochemically mediated reperfusion injury - we can aim to reduce the complication of paraplegia due to spinal cord ischaemia.
Degree criteria and stability for independent transversals

An independent transversal (IT) in a graph G with a given vertex partition P is an independent set of vertices of G (i.e., it induces no edges) that consists of one vertex from each part (block) of P. Over the years, various criteria have been established that guarantee the existence of an IT, often given in terms of P being t-thick, meaning all blocks have size at least t. One such result, obtained recently by Wanless and Wood, is based on the maximum average block degree b(G, P) = max{ Σ_{u∈U} d(u)/|U| : U ∈ P }. They proved that if b(G, P) ≤ t/4 then an IT exists. Resolving a problem posed by Groenland, Kaiser, Treffers and Wales (who showed that the ratio 1/4 is best possible), here we give a full characterization of pairs (α, β) such that the following holds for every t > 0: whenever G is a graph with maximum degree ∆(G) ≤ αt, and P is a t-thick vertex partition of G such that b(G, P) ≤ βt, there exists an IT of G with respect to P. Our proof makes use of another previously known criterion for the existence of ITs that involves the topological connectedness of the independence complex of graphs, and establishes a general technical theorem on the structure of graphs for which this parameter is bounded above by a known quantity. Our result interpolates between the criterion b(G, P) ≤ t/4 and the old and frequently applied theorem that if ∆(G) ≤ t/2 then an IT exists. Using the same approach, we also extend a theorem of Aharoni, Holzman, Howard and Sprüssel, by giving a stability version of the latter result.

Introduction
Given a graph G and a partition P = {U_1, ..., U_r} of its vertex set V(G) into blocks U_i, an independent transversal (IT) of G with respect to P is an independent set {u_1, ..., u_r} of G such that u_i ∈ U_i for all i ∈ [r]. In the literature, an independent transversal has also been called an independent system of representatives or a rainbow independent set. Many important notions in mathematics can be described in terms of a suitably chosen graph and vertex partition having an IT (see e.g. [16,17] and the references therein), and accordingly there has been much work over the years in proving sufficient conditions for a vertex-partitioned graph G to have an IT. Many of these criteria require the block sizes to be large enough with respect to certain parameters of G, and the techniques used to prove such results have included purely combinatorial arguments (e.g. [8,14,15,18]), topological methods (e.g. [1,2,3,4,23,24]), arguments using the Lovász Local Lemma and its variants (e.g. [6,7,21,22]), and counting arguments (e.g. [27]). When no special information is known about the structure of the graph with respect to the vertex partition, the combinatorial and topological methods appear to work best, giving best possible results in many cases. One example is the following [14,15], where a partition P is said to be t-thick if each of its blocks has size at least t, and, as usual, ∆(G) denotes the maximum degree of G.
(This is one of the most frequently applied IT theorems; see e.g. [12] and the references therein.)

Theorem 1. Let G be a graph with a t-thick vertex partition P. If ∆(G) ≤ t/2 then G has an IT with respect to P.

Szabó and Tardos [26] (see also [20,28]) proved that Theorem 1 is best possible, by giving, for each d, a (2d − 1)-thick partition P of the union of 2d − 1 disjoint copies of the complete bipartite graph K_{d,d} that does not have an IT with respect to P. When information about the interaction between the graph and the vertex partition P is known, in particular when there is some limit on the number of edges between any pair of blocks or the number of edges incident to any particular block, then the other techniques mentioned tend to give stronger results. One recent example of this phenomenon, due to Wanless and Wood [27] (see also Kang and Kelly [21]), considers the IT problem with respect to the maximum average block degree b(G, P) = max{ Σ_{u∈U} d(u)/|U| : U ∈ P }.

Theorem 2. Let G be a graph with a t-thick vertex partition P. If b(G, P) ≤ t/4 then G has an IT with respect to P.

Kang and Kelly noted how this result can be derived using the Local Cut Lemma of Bernshteyn [9] (see also [10]), whereas Wanless and Wood used a counting argument similar to that of Rosenfeld [25]. In fact Wanless and Wood showed that the number of ITs given by Theorem 2 is at least (t/2)^|P|.

Degree considerations. In view of Theorem 1, it is natural to ask whether the t/4 in Theorem 2 can be improved to t/2, and indeed this is stated as an open problem in both [21] and [27]. It was resolved by the following theorem of Groenland, Kaiser, Treffers and Wales [13].

Theorem 3. For every ε > 0 and all sufficiently large t, there exists a forest F and a t-thick partition P of F such that b(F, P) ≤ (1 + ε)t/4 and F has no IT with respect to P.

(We remark that adding the further assumption that no two blocks of P induce a subgraph of G of maximum degree more than o(t) allows the t/4 in Theorem 2 to be strengthened all the way to t − o(t) (Glock-Sudakov [11], Kang-Kelly [21]).) In the main construction giving Theorem 3, the maximum degree ∆(F) is t. In [13] the authors speculate whether it can be brought down substantially, and in this direction they provide a modified construction that is no longer a forest, but has maximum degree αt with α asymptotically 1/2 + 1/(2√2) < 0.854. Again with reference to Theorem 1, they ask (Problem 12 in [13]) whether a construction with maximum degree arbitrarily close to t/2 is possible. More generally they asked the following.

Question 4. What can be said about the set of pairs (α, β) ∈ [0, 1]² such that for sufficiently large t, there exists a graph G with ∆(G) ≤ αt and a t-thick partition P such that b(G, P) ≤ βt and no IT of G with respect to P exists?

One of the main aims of this paper is to give a complete answer to Question 4. To describe it we introduce the notion of a good pair.

Definition 5. A pair (α, β) with 0 < β ≤ α is a good pair if the following holds for every t > 0: whenever G is a graph with ∆(G) ≤ αt, and P is a t-thick vertex partition of G such that b(G, P) ≤ βt, there exists an IT of G with respect to P.

Our first main theorem gives a characterisation of the set of good pairs.

Theorem 6. The pair (α, β) with 0 < β ≤ α is good if and only if one of the following holds. In particular, Theorem 6 confirms that the value 1/2 + 1/(2√2) obtained in [13] is best possible in their setting, i.e.
when β is asymptotically 1/4. Theorem 6 can be viewed as an interpolation between Theorem 1 and Theorem 2, refining both results when α > 1/2 and β > 1/4.

Our proof of Theorem 6 uses the topological approach of [1,4,23,24] (see also [17]), relating the existence of IT's in G to the parameter η(G), which is defined as the topological (homotopic) connectedness of the independence complex of G, plus 2 (see Section 2). This approach is based on the following topological Hall or Rado theorem, first proven explicitly in [24] and generalized in [1] (with variants appearing earlier in [4] and [23]). Here for a subset S of blocks of the partition P = {U 1 , . . ., U r } of V (G), we write G S = G[ ⋃_{U i ∈ S} U i ] for the subgraph of G induced by the union of the blocks in S.

Theorem 7. Let G be a graph with vertex partition P. If η(G S ) ≥ |S| for every subset S of the blocks of P, then G has an IT with respect to P.

We derive (one implication of) Theorem 6 by first proving a general technical theorem (Theorem 16 in Section 3) about the structure of graphs H for which an upper bound on η(H) is known. Theorem 16 has the following as a simple consequence.

Theorem 8. Let H be a graph with n vertices and maximum degree d. Suppose that η(H) ≤ k. Then |E(H)| ≥ dn − d²k.

Since an upper bound on b(G, P) easily implies an upper bound on the density |E(G S )|/|V (G S )| of any G S , from here it is a short step to combine Theorem 8 for H = G S with Theorem 7 to prove this implication of Theorem 6. (See Section 4.)

We provide constructions to prove the other implication of Theorem 6 in Section 6.

Theorem 9. For every α > 1/2, β > max{1/4, 2α(1 − α)}, and t sufficiently large, there exists a graph G and a t-thick partition P of V (G) with ∆(G) ≤ αt and b(G, P) ≤ βt, such that G has no IT with respect to P.

Our main tool for the proof of Theorem 9 is a simple general lemma (Lemma 18) that gives a way of constructing a graph and partition with no IT from two smaller ones. Lemma 18 is quite versatile and is used in the Appendix as well, and also discussed further in Section 7.

Stability. Another feature of the topological approach to proving the existence of IT's via Theorem 7 is that it can also give information about extremal or near-extremal configurations. A straightforward example of this is the (topological) proof of Theorem 1, which amounts to combining Theorem 7 with the fact [23] that every graph G with maximum degree d and at least 2dk vertices satisfies η(G) ≥ k. As a warm-up in using the topological technique, we give a proof of this fact in Section 2, together with a characterization of the extremal examples, which turn out to be disjoint unions of K d,d 's.

Going further in this direction, Aharoni, Holzman, Howard, and Sprüssel [5] showed by these means that for d = ∆(G) ≥ 3, if P in Theorem 1 is (2d − 1)-thick and G has no IT then G contains the union of 2d − 1 disjoint K d,d 's. The second main aim of this paper is to prove a stability version of this, which states that if G does not have an IT and P is t-thick where t is close to 2d, then for some subset S of blocks of P, the graph G S is close to being the union of disjoint K d,d 's. Again we will derive our result by combining Theorem 7 with another consequence of our main technical result Theorem 16, which is as follows.

Theorem 10. Let G be a graph with n vertices and maximum degree d, and let ε be such that n(1 + ε) = 2dk. Suppose that η(G) ≤ k. Then ε ≥ 0 and 2|E(G)| ≥ dn(1 − ε), and V (G) has a partition into sets {X i , Y i , Z i } k i=1 such that ∑ i |Z i | ≤ εn and the following hold:

Thus when |V (G)| is close to 2dk but η(G) ≤ k, there are two respects in which G is close in structure to the disjoint union J of complete bipartite graphs with vertex sets {(X i , Y i )} k i=1 : at most a small proportion of the edges of G are not edges of J, and almost all the edges of J are edges of G.
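To see where the value 1/2 + 1/(2√2) discussed above comes from, note that for β = 1/4 the boundary of condition (3) in Theorem 6 meets condition (2) where

2α(1 − α) = 1/4, i.e. α² − α + 1/8 = 0, i.e. α = 1/2 ± 1/(2√2),

and the larger root is α = 1/2 + 1/(2√2) ≈ 0.8536. As a concrete illustration of the characterization: the pair (α, β) = (0.7, 0.42) is good, since 2 · 0.7 · 0.3 = 0.42 satisfies condition (3), whereas for (α, β) = (0.7, 0.43) Theorem 9 yields, for all sufficiently large t, a graph with ∆(G) ≤ 0.7t and a t-thick partition P with b(G, P) ≤ 0.43t and no IT, since 0.43 > max{1/4, 0.42}.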
Let us say that the graph G is γ-approximated by J if V (J) ⊆ V (G) and the symmetric difference of E(G) and E(J) has size at most γ|E(G)|. Combining Theorem 7 with Theorem 10 applied to G S leads to the following stability version of Theorem 1.

Theorem 11. Let 0 < β < 1 and let G be a graph with a t-thick vertex partition P. If d := ∆(G) ≤ (1 + β)t/2 and G has no IT with respect to P, then for some subset S of blocks of P, the graph G S is 5β/(1 − β)-approximated by a disjoint union of complete bipartite graphs.

The simple derivation of Theorem 11 appears in Section 5. Another stability version of Theorem 1, that concludes that G as in Theorem 11 contains a large disjoint union of complete bipartite graphs, is discussed in Section 7.

In the next section we introduce the topological method, and describe the tools that we will use in this paper. Section 3 is devoted to the proof of our main technical result, Theorem 16, and a companion result that will be useful in the subsequent sections. Our main results, Theorem 6 and Theorem 10, are proved in Sections 4 and 5 respectively. Our constructive result, Theorem 9, is proved in Section 6. We end the paper with some concluding remarks in Section 7.

Topological connectedness and independent transversals

The set I(G) of all independent sets in a graph G forms an abstract simplicial complex (i.e. a "closed-down" set), called the independence complex of G. An abstract simplicial complex C is said to be k-connected if for each −1 ≤ d ≤ k and each continuous map f from the sphere S^d to ||C|| (the body of the geometric realization of C), the map f can be extended to a continuous map from the ball B^{d+1} to ||C||. The connectedness of C is the largest k for which C is k-connected. Following [1], we define the graph parameter η(G) to be 2 plus the connectedness of I(G). The link between topology and the existence of IT's was first discovered in [4], and developed much more fully in subsequent works such as [1,23,24]. (See e.g. the survey [17] for an in-depth discussion of these notions, and some intuition on why they relate to IT's.) Theorem 7 implies that we can obtain sufficient conditions for independent transversals from lower bounds on connectedness. Conversely, constructing graphs with no independent transversals is often aided by finding graphs G with a low value of η(G). Understanding when this parameter is small will be our main motivation for the upcoming Theorem 16, which will form the basis of our work in this paper.

As observed in [1], adding 2 to the connectedness of I(G) in the definition of η(G) simplifies various statements about this parameter, such as the following basic facts.

For a graph G and edge e ∈ E(G), we denote by G − e the graph obtained by deleting e. We write G e for the graph formed by exploding e, which is obtained from G by removing both endpoints of e together with all of their neighbours. In other words, if x and y are the endpoints of e, then G e is the subgraph induced by V (G) \ (N (x) ∪ N (y)). Our main tool for obtaining lower bounds for the η parameter will be the following theorem of Meshulam [24] (see also e.g. [2]). One may obtain a reduction of a graph G by iteratively deleting deletable edges until there are none left.

The topological proof of Theorem 1 is based on the lower bound η(G) ≥ |V (G)|/(2∆(G)) from [23]. As an introduction to working with the connectedness parameter η, we give a proof of this bound, along with a characterization of the graphs G for which the bound is tight.
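Before turning to that proof, we note that the two graph operations just defined are simple enough to state in code. The following Python sketch is ours and purely illustrative (the function names are not from the paper); it computes G − e and the graph obtained by exploding an edge e.

def delete_edge(vertices, edges, e):
    # G - e: same vertex set, with the single edge e removed
    u, v = e
    return set(vertices), {f for f in edges if set(f) != {u, v}}

def explode_edge(vertices, edges, e):
    # Exploding e = xy removes both endpoints together with all of their
    # neighbours, i.e. keeps the subgraph induced by V(G) \ (N(x) ∪ N(y)).
    x, y = e
    doomed = {x, y}
    for a, b in edges:
        if a in (x, y):
            doomed.add(b)
        if b in (x, y):
            doomed.add(a)
    kept = set(vertices) - doomed
    return kept, {(a, b) for a, b in edges if a in kept and b in kept}

# On the path a-b-c-d, exploding the middle edge bc removes every vertex,
# while exploding the end edge ab leaves only the isolated vertex d.
V = {"a", "b", "c", "d"}
E = {("a", "b"), ("b", "c"), ("c", "d")}
print(explode_edge(V, E, ("b", "c")))   # (set(), set())
print(explode_edge(V, E, ("a", "b")))   # ({'d'}, set())

The arguments below repeatedly pass from a graph to such smaller graphs.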
Proof.We use induction on k.By Lemma 12(1), the result is immediate for k = 0, and for k = 1, the hypothesis |V (G)| > 0 implies that η(G) ≥ 1. Suppose that |V (G)| ≥ 2d and η(G) = 1.Having η(G) < 2 means that the independence complex of G is not path-connected.This is equivalent to saying that the complement of G is disconnected, since a topological path between two vertices in the independence complex of G corresponds to a graph-theoretical path between those vertices in the complement of G.This then implies that G has a complete cut, that is, G contains a complete bipartite graph on all of its vertices.Since |V (G)| ≥ 2d and ∆(G) = d, the only possibility is that G is the complete bipartite graph K d,d .This proves the base case k = 1. Assume that k ≥ 2 and the conclusion of the theorem holds for all values smaller than k.By iteratively deleting deletable edges from G, we may assume that G is a reduced graph, so that every edge of G is explodable.Let x be a vertex of minimum degree in G. Then d(x) > 0 by Lemma 12(2), so we may choose an edge e of G incident to x.Let G = G e, and let d denote the maximum degree of G .Then d ≤ d, and we may calculate that for every S ⊆ P, then G has an IT with respect to P. In particular, if P is t-thick and d ≤ t/2 then G has an IT. Proof.By Theorem 14, for every S ⊆ P we have so that η(G S ) ≥ |S|.By Theorem 7, G has an IT. Graphs with low connectedness In this section we describe our main technical result on the structure of graphs whose independence complexes have low topological connectedness.It is helpful to keep in mind that the partition in Theorem 16 is not related to any given partition into blocks as we have been discussing in the previous sections, but is rather a different partition that comes about as a consequence of the upper bound on η(G). Theorem 16.Let G be a graph with η(G) < .Then V (G) has a partition into sets for some k < , such that (a) for each i, there exist Proof.We construct a sequence of subgraphs G 0 , H 0 , G 1 , H 1 , . . ., G k , H k with G 0 = G and H k having no edges, such that H i is a reduction of G i and G i is obtained from H i−1 by exploding an edge x i y i that kills the smallest number of vertices.Note that every edge of each H i is explodable, and that k explosions are performed in total.Since η(G) < , Theorem 13 implies that k < . We classify the edges of G as follows.If an edge joins two vertices in distinct sets among {X i , Y i , Z i } for some i, then we label it as black.All other edges are labeled as pink.Further, we assign a direction to each pink edge that joins some X i ∪ Y i ∪ Z i to X j ∪ Y j ∪ Z j with i < j by orienting it froms its i end to its j end. . 
By the choice of the edge x i y i , the vertex w has at least |X i | neighbours outside of N (x i ) in H i−1 .The edges joining w to X i are black.Denoting their number by b X i (w), this implies that at least |X i | − b X i (w) edges incident to w in H i−1 are pink out-edges.Similarly, for every vertex w ∈ X i , denoting the number of edges joining to w to and b Y i (w) denote the black degree of w into X i and Y i , respectively.Again by the choice of the edge x i y i , the vertex w has at least |X i | neighbours outside of N (x i ) and at least |Y i | neighbours outside of N (y i ).This tells us that w has pink outdegree at least |X i | − b X i (w) and at least |Y i | − b Y i (w), and so in particular at least We get that the total pink outdegree of vertices in Next we give an upper bound on the total pink indegree of vertices of G.As before, each vertex w ∈ Y i has at least |X i | neighbours outside of N (x i ) in H i−1 , and thus it has at most d(w) Combining the lower bound on the total pink outdegree and upper bound on the total pink indegree, we find that k i=1 Then some rearranging gives that a basic partition if it satisfies the conclusions of Theorem 16.We next establish another technical result about basic partitions, for use in the next two sections. Lemma 17.Let G be a graph with n vertices with maximum degree where Proof.For each i, we set r i = |X i | + |Y i | and assume without loss of generality that Observe that, for any positive integers x, r with r − x ≤ x < r, we have that x(r − x) > (x + 1)(r − x − 1).Hence with these conditions the expression x(r − x) is smallest when x is as large as possible.By Property (a) of the basic partition we know that |X i | ≤ d − |Z i |, and this implies that where in the last line we use the fact that {X i , Y i , Z i } k i=1 is a partition of V (G).This finishes the proof. Density and the proof of Theorem 6 We begin by noting that Theorem 16 and Lemma 17 immediately imply Theorem 8. Proof of Theorem 8. Let H with ∆(H) = d and η(H) ≤ k be given.Let s ≤ k be such that {X i , Y i , Z i } s i=1 is a basic partition of V (H), which exists by Theorem 16.We apply Lemma 17, first noting that clearly Q ≥ 0. By Property (a) of the basic partition we find that |X i |+|Y i |+2|Z i | ≤ 2d, and hence the terms (2d We may now give the proof of Theorem 6 (assuming the result of Theorem 9). Proof of Theorem 6.If (1) holds (α ≤ 1/2), then for every t, and every G and P as in Definition 5, we see that G and P satisfy the conditions of Theorem 1. Hence G has an IT with respect to P. If (2) holds (β ≤ 1/4), then every t, G and P satisfy the assumptions of Theorem 2, hence again there exists an IT of G with respect to P. Suppose (3) holds (β ≤ 2α(1 − α)) . By the previous paragraph we may assume α > 1/2 and β > 1/4, and hence also clearly α < 1.Let t, G and P be as in Definition 5, so that in particular ∆(G) ≤ αt, and suppose on the contrary that G has no IT with respect to P. Then by Theorem 7 there exists a subset S of blocks of P such that the subgraph G S of G induced by U ∈S U satisfies η(G S ) ≤ |S| − 1.Let us define γ by γt = ∆(G S ).Then clearly γ ≤ α.Also, since |V (G S )| ≥ t|S|, we see that η(G S ) ≥ t|S| 2γt by Theorem 14, from which we conclude that γ ≥ |S| 2(|S|−1) > 1 2 . Finally suppose that none of (1-3) hold.Then by Theorem 9 there exist t, G and P as in Definition 5 such that G has no IT with respect to P, showing that (α, β) is not a good pair. 
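One way to complete the computation omitted at the end of case (3) above is as follows (a sketch based only on the bounds already stated). Write n_S = |V(G_S)|, and recall that η(G_S) ≤ |S| − 1 and ∆(G_S) = γt. Theorem 8 applied to H = G_S gives

|E(G_S)| ≥ γt · n_S − (γt)²(|S| − 1),

while summing degrees block by block gives 2|E(G_S)| ≤ b(G, P) · n_S ≤ βt · n_S. Combining these and using n_S ≥ t|S| (as P is t-thick), we obtain

β/2 ≥ γ − γ² · t(|S| − 1)/n_S ≥ γ − γ²(1 − 1/|S|) > γ − γ² = γ(1 − γ).

Since 1/2 < γ ≤ α < 1 and x(1 − x) is decreasing on [1/2, 1], this yields β > 2γ(1 − γ) ≥ 2α(1 − α), contradicting the assumption of case (3) that β ≤ 2α(1 − α).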
Using Property (a) of our basic partition we get

Our next aim is to establish Conclusion (i) of Theorem 10. We assume without loss of generality that and proceeding exactly as in the proof of Lemma 17, we may conclude for each i ≤ s that where in the last line we again use the fact that {X i , Y i , Z i } s i=1 is a partition of V (G). By Property (a) we know that r i ≤ 2d − 2|Z i | for each i, from which we get that

Since d = ∆(G) we know 2|E(G)| ≤ dn, so using also the facts that s ≤ k and z ≤ εn, and that n(1 + ε) = 2dk, we conclude

For Conclusion (ii), we know by Lemma 17 that As before, the quantity (2d ) is nonnegative by Property (a) of basic partitions, and so which is the statement of (ii).

We end this section with the proof of Theorem 11.

Proof of Theorem 11. Let 0 < β < 1 be given, and let G be a graph with a t-thick vertex partition P, for which d = ∆(G) ≤ (1 + β)t/2. Suppose G has no IT with respect to P. Then by Theorem 7, there exists a subset S of blocks of P for which η(G S ) ≤ k, where k = |S| − 1. Set n = |V (G S )| and define ε by n(1 + ε) = 2dk. Since P is a t-thick partition we know n ≥ t|S|, and so it follows that ε < β. By Theorem 10 we know Let {X i , Y i , Z i } k i=1 be the partition of V (G S ) guaranteed by Theorem 10, and let J be the union of the k complete bipartite graphs induced by {X i , Y i } k i=1 . Then by Theorem 10, the symmetric difference of E(G S ) and J has size at most Hence G S is 5ε/(1 − ε)-approximated by J. Noting that 5ε/(1 − ε) < 5β/(1 − β) completes the proof.

Constructing graphs with no independent transversals

This section is devoted to the proof of Theorem 9. Let us call a pair (α, β) relevant if it satisfies the conditions of Theorem 9, in other words α > 1/2 and β > max{1/4, 2α(1 − α)}. Our aim is to construct, for every relevant pair (α, β) and every sufficiently large t, a graph G and a partition P = {U 1 , . . ., U r } of V (G) with the following properties:

(a) for each i we have |U i | ≥ t; (b) G has maximum degree ∆(G) ≤ αt; (c) b(G, P) ≤ βt; and (d) G has no IT with respect to P.

As mentioned in the Introduction, in [13] Groenland, Kaiser, Treffers and Wales modified their proof of Theorem 3 to provide also a proof of Theorem 9 for every ε > 0 and every relevant pair (α, 1/4 + ε). As ε approaches zero, this gives the asymptotic value 1/2 + 1/(2√2) < 0.854 for α. A slight generalization of their construction, discussed in Section 7, is sufficient to establish Theorem 9 for α ≥ 2/3. However, modifications are needed to get a construction that works for all α > 1/2. Our main tool is the following simple lemma, which allows us to build complicated graphs with no IT starting from simpler ones.

Lemma 18. Let J and H be disjoint graphs, and let P = {U 1 , . . ., U r } and Q = {W 1 , . . ., W s } be partitions of V (J) and V (H) respectively, such that J has no IT with respect to P and H has no IT with respect to Q. Let R = {U 1 ′, . . ., U r ′, W 1 , . . ., W s−1 }, where U 1 ′ ⊇ U 1 , . . ., U r ′ ⊇ U r are obtained by distributing each of the vertices in W s into one of U 1 , . . ., U r arbitrarily. Then J ∪ H has no IT with respect to R.

Proof. Assume for contradiction that J ∪ H has an IT {u 1 , . . ., u r , w 1 , . . ., w s−1 } with respect to R, where u i ∈ U i ′ for 1 ≤ i ≤ r and w i ∈ W i for 1 ≤ i ≤ s − 1. If u i ∈ U i for every 1 ≤ i ≤ r, then {u 1 , . . ., u r } is an IT of J with respect to P, a contradiction. So suppose instead that u j ∈ U j ′ ∩ W s for some 1 ≤ j ≤ r. Then
{w 1 , . . ., w s−1 , u j } is an IT of H with respect to Q, again a contradiction.

In the proof that follows, for a given relevant pair (α, β) and (sufficiently large) t, we will be applying Lemma 18 possibly many times. In each application, the complete bipartite graph K = K_{t−⌊αt⌋+1, ⌊αt⌋} with its standard bipartition into two blocks will be used in place of J. Clearly K has no IT with respect to this partition. We will refer to the block A of K as the "A-side" and the block B as the "B-side", where t − ⌊αt⌋ + 1 = |A| < |B| = ⌊αt⌋.

Our proof of Theorem 9 builds directly upon the construction from [13] that established their special case of Theorem 9 (i.e. when β is close to 1/4). Given the pair (α, β) with α ≥ β > 1/4, this construction provides, for all sufficiently large t, a graph G′ and partition P′ with Properties (b), (c) and (d), and with Property (a) replaced by

(a′) Every block U with |U| < t consists of vertices of degree 1 and satisfies |U| = ⌊αt⌋.

For completeness we describe (G′, P′) explicitly (formulated using Lemma 18) in the Appendix.

Proof of Theorem 9. Let (α, β) be a relevant pair, and let t be large enough such that (G′, P′) satisfying Properties (a′), (b), (c) and (d) exists (as given in [13]). We call blocks of size less than t deficient. If the set of deficient blocks in (G′, P′) is empty then (G′, P′) itself proves Theorem 9 for (α, β), so we may assume the contrary from now on. In particular by (a′) we may assume that α < 1.

Our plan is to create a new vertex-partitioned graph by adding a number of copies of K = K_{t−⌊αt⌋+1, ⌊αt⌋} to G′ and suitably extending the current partition. Throughout this process, the current graph G and partition P will be valid, meaning that (G, P) satisfies Properties (b), (c) and (d) together with

(a′) Every deficient block U satisfies |U| ≥ ⌊αt⌋, and d(u) ≤ t − ⌊αt⌋ + 1 for each u ∈ U.

Observe that the initial pair (G′, P′) is valid.

Our main construction step is as follows (see Figure 1). Given a valid pair (G, P), if no blocks are deficient then (G, P) provides a proof of Theorem 9 and we stop. Otherwise, we take a deficient block U, add a disjoint copy of K to G, distribute ⌊αt⌋ − 1 vertices of U to the A-side of K to form a new block U A , and the remaining |U| − ⌊αt⌋ + 1 vertices of U to the B-side of K, forming block U B . Note that this process removes U, adds one new block U A of size t and one new block U B of size |U| + 1. All other blocks remain unchanged. Thus the resulting pair (G⁺, P⁺) satisfies (a′), and Lemma 18 tells us that (d) is also satisfied. Property (b) holds because ∆(K) = ⌊αt⌋ and (G, P) satisfies (b). To verify (c), note that since (G, P) satisfied (a′), the new block U A has average degree at most 2α(1 − α)t + 4α, which for large enough t is at most βt as required. Each vertex in the other new block U B has degree exactly t − ⌊αt⌋ + 1 (if it came from K) or at most t − ⌊αt⌋ + 1 by (a′) (if it came from U). Since t − ⌊αt⌋ + 1 < t − αt + 2 < 2α(1 − α)t + 2 ≤ βt for large enough t, we have verified (c), and hence that (G⁺, P⁺) is valid. Therefore we may continue the construction.

Observe that after each step of this construction, the deficient block U is replaced with a larger block U B , and all other deficient blocks are unchanged. Thus we may repeat the construction step finitely many times to obtain a valid pair (G, P) that has no deficient blocks. This completes the proof of Theorem 9.
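The gluing operation of Lemma 18, and the block-splitting step used in the construction above, are easy to experiment with on small instances. The following Python sketch is ours and purely illustrative (the function names are not from the paper, and it assumes the has_it checker from the sketch in the Introduction); it performs the merge of Lemma 18 on two explicitly given vertex-partitioned graphs.

def glue(J_edges, P_blocks, H_edges, Q_blocks, assign):
    # Lemma 18: J and H are disjoint graphs with partitions P and Q, neither of
    # which has an IT. The last block W_s of Q is dissolved: assign[w] names the
    # index of the block of P that each w in W_s joins. By Lemma 18 the result
    # again has no IT with respect to the new partition.
    W_s = Q_blocks[-1]
    new_P = [list(U) for U in P_blocks]
    for w in W_s:
        new_P[assign[w]].append(w)
    new_blocks = new_P + [list(W) for W in Q_blocks[:-1]]
    new_edges = list(J_edges) + list(H_edges)
    return new_edges, new_blocks

# Tiny example: J is the edge x1-x2 with blocks {x1},{x2}; H is the edge y1-y2
# with blocks {y1},{y2}. Dissolving {y2} into the first block of P gives a graph
# on 4 vertices with 3 blocks and, as Lemma 18 predicts, still no IT.
J_edges, P_blocks = [("x1", "x2")], [["x1"], ["x2"]]
H_edges, Q_blocks = [("y1", "y2")], [["y1"], ["y2"]]
edges, blocks = glue(J_edges, P_blocks, H_edges, Q_blocks, {"y2": 0})
print(blocks)                  # [['x1', 'y2'], ['x2'], ['y1']]
print(has_it(edges, blocks))   # False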
Concluding remarks The constructions given in Section 6 and the Appendix are by no means unique.In the main construction step in Section 6 for example, given a deficient block U , we added αt − 1 vertices of U to the A-side of K = K t− αt +1, αt and the remaining |U | − αt + 1 vertices of U to the B-side of K. We could instead fix any positive integer C and, in each step, add αt − C vertices of U to the A-side of K = K t− αt +C, αt and the remaining |U | − αt + C vertices of U to the B-side of K .This would require our minimum value for t to be larger, but as a trade-off would give constructions with fewer blocks.The bipartite component K and its number of copies would also be different.However, we know by Theorem 11 that when α is close to 1 2 , the constructed graph with no IT should have many copies of "near" K t/2,t/2 's, so in any case we would get many components that are close in structure to complete bipartite graphs. For the case in which α is slightly greater than 1 2 + 1 2 √ 2 and β slightly greater than 1 4 , a proof of Theorem 9 is given in [13] as follows.Recall that the basic graph G with partition P satisfies Properties (a ), (b), (c) and (d) described in Section 6.For each deficient block U of P , a single complete bipartite graph K = K β t/α , αt is added, where β = max 1 4 , 2α(1 − α) , and the vertices of U are distributed into the two blocks of K so that the size of each becomes at least t.This is possible because t , and |U | = αt .As before, Lemma 18 ensures that the new graph has no IT with respect to the new partition.The maximum block average degree does indeed stay below β after this distribution.However, for this strategy to work for a wider range of (α, β) we would need that β t/α ≤ αt to ensure ∆(K ) ≤ αt (Property (b)).Hence this proves Theorem 9 in general only when α ≥ 2 3 .We remark that our simple construction lemma in Section 6, Lemma 18, provides a convenient way to describe (and in some cases generalize) various known constructions for graphs without IT's.For example this is done in the Appendix to derive the construction of [13].The same approach can also provide new constructions for other contexts.These topics are explored further in [19]. As mentioned in the Introduction, it is possible to modify Theorem 16 to show that if G has a low value of η(G) then it will contain many disjoint complete bipartite subgraphs.The precise technical statement is as follows. ) such that the following hold. (a) for each i there exist x i ∈ X i and y (c) the set U of vertices that are not conforming satisfies where w is said to be conforming if for some i we have w ∈ X i and To interpret this statement it helps to compare it with Theorem 16.In Theorem 19 we have an additional part W in the partition, consisting of "junk" over which we have little control, but by (b) its size is small as long as is small.Property (a) is the same as in Theorem 16.The subgraph of G induced by the set of conforming vertices in each X i ∪ Y i ∪ Z i contains a spanning complete bipartite graph, and Property (c) tells us that most vertices are conforming, provided also that γ is not too close to 3. Hence under these conditions G contains k 0 complete bipartite graphs, whose union contains most of the vertices of G, and where k 0 is close to k. 
The proof of Theorem 19 is not difficult, and is quite similar to the proof of Theorem 16.However, including it here would add a significant amount of technical detail, so we omit the full proof in favour of the following sketch.Sketch of proof.Since η(G) < k + 1 we know that every sequence of edge deletions and explosions taking G to the empty graph has length at most k.We define a specific sequence in a step-by-step fashion, and derive the structural conclusion about G from the fact that it has at most k explosion steps.As before, the operation of reducing a graph G is to delete edges of G one by one until no further edges are deletable.Then in the resulting reduced graph, every edge is explodable.IF there exists a low vertex w ∈ X (respectively in Y ) that has a non-neighbour u ∈ Y (respectively u ∈ X): • explode xu (respectively yu) and reduce, • explode wv for some neighbour v of w and reduce, • put all vertices lost in the two explosions into W . ELSE set x i := x and y i := y.Set X i := X, Y i := Y and Z i := Z. Explode x i y i and reduce. The rest of the proof consists of the analysis of this procedure, and how it leads to Conclusions (a-c). In just the same way that Theorem 16 (via Theorem 10) combined with Theorem 7 led to the IT stability result Theorem 11, Theorem 19 combined with Theorem 7 leads to an alternative IT stability result.This one asserts that if a graph G does not have an IT with respect to a t-thick partition P, where ∆(G) is close to t/2, then for some subset S of blocks of P, the graph G S contains a disjoint union of complete bipartite graphs that spans almost all of V (G S ). Set d 0 = 0, G 0 = ∅ and P 0 = ∅.Suppose 1 ≤ j < k and that we have defined G j and P j such that the following hold. (i) G j has no IT with respect to P j . (ii) Any block U of P j with size different from t satisfies U ∩ V (G j−1 ) = ∅ and |U | = d j , and each u ∈ U has degree 1 in G j .These are called the terminal blocks of P j . (iii) Any block U of P j with |U | = t that contains a vertex of V (G j ) \ V (G j−1 ) has average degree 1 t (d j−1 + (t − d j−1 )d j ) in G j with respect to P j . We define G j+1 and P j+1 as follows.We will essentially be adding a new "layer" of star components K 1,d j+1 .By (ii) we know that all blocks of P j have size t except the terminal blocks of P j , each of which has size d j .Fix one terminal block X, and let X 1 = X.With Lemma 18 in mind, for each i = 1, . . ., t − d j in sequence, we add to the current graph a copy of K 1,d j+1 , distribute all vertices of X i into the A-side of K Next we verify (i-iii) for (G j+1 , P j+1 ).By repeated applications of Lemma 18 as indicated, we know that G j+1 has no IT with respect to P j+1 .Hence (i) holds for (G j+1 , P j+1 ).To check (ii), observe that by (ii) for (G j , P j ) and by our construction, the only blocks that are not now of size t are the B-side blocks of the newly added layer of copies of K 1,d j+1 , which all have size d j+1 , and their vertices all have degree 1.Thus (G j+1 , P j+1 ) satisfies (ii).To verify (iii), the blocks U of P j+1 with |U | = t that contain a vertex of V (G j+1 ) \ V (G j ) are precisely those that were previously a terminal block X of P j .By Property (ii) of (G j , P j ) we know that |X| = d j and each vertex of X has degree 1 in G j .Our construction adds t − d j vertices to X, each of degree d j+1 , to obtain the class U of size t.Hence the average degree of U is 1 t (d j + (t − d j )d j+1 ), verifying (iii) for (G j+1 , P j+1 ). 
Let G = G k and P = P k .Observe that G has maximum degree d k = min{t, αt } ≤ αt, so Property (b) is satisfied.Property (d) is given by (i).To ensure that Property (c) is satisfied, we use the following lemma from [13]. For example, we may set d 1 = βt and, as noted in [13], one can simply take d j = d j−1 + 1 for every j ∈ {2, . . .This is bounded above by βt provided t is sufficiently large, since β > 1/4.We choose t 0 and a sequence such that Lemma 20 holds for our given β, and apply our construction with this sequence and with t ≥ t 0 .Any terminal block of P = P k has average degree 1 by (ii) with j = k.All remaining blocks in P have size t and are of the type in (iii) for exactly one value of j, and hence by (iii) and Lemma 20 each has average degree at most βt.Hence (c) holds for (G , P ). Finally, to check (a ) we again note that any block U that has size less than t is a terminal block of P k , and hence satisfies (ii).Then necessarily U has size d k = αt < t, and by (ii) consists of vertices of degree one.This verifies (a ) as required. Theorem 8 . Let H be a graph with n vertices and maximum degree d.Suppose that η(H) ≤ k.Then |E(H)| ≥ dn − d 2 k.Since an upper bound on b(G, P) easily implies an upper bound on the density |E(G S )|/|V (G S )| of any G S , from here it is a short step to combine Theorem 8 for H = G S with Theorem 7 to prove this implication of Theorem 6. (See Section 4.) Lemma 12 . (a) Every graph G has η(G) ≥ 0, and η(G) = 0 if and only if G is empty.(b) If graph G has an isolated vertex, then η For the second assertion, suppose that η(G) = k.Then equality holds everywhere in the previous paragraph, so that d = d and G is the union of k − 1 disjoint copies of K d,d .Since ∆(G) = d and G is d-regular, the graph H = G[N (x) ∪ N (y)] is a connected component of G, and G = G ∪ H.This implies that η(H) ≤ η(G) − η(G ) = 1 and |V (H)| = |V (G)| − |V (G )| ≥ 2d.From the base case k = 1, it follows that H is a K d,d .Therefore, G is the union of k disjoint copies of K d,d , finishing the induction.Observe that Theorem 14 then leads to (a slightly stronger version of) Theorem 1. Corollary 15.Let G be a graph with ∆(G) = d, and let in the conclusion of Lemma 17 are all non-negative.Hence 2|E(G)| ≥ 2dn − 2d 2 s ≥ 2dn − 2d 2 k, thus completing the proof. 5 Stability and the proof of Theorem 10 Proof of Theorem 10.Let G be a graph with η(G) ≤ k, and set n = |V (G)|, d = ∆(G), and define by n(1 + ) = 2dk.Then ≥ 0 by Theorem 14. Theorem 8 tells us that |E(G)| ≥ dn − d 2 k = dn − d(n + n)/2 and hence 2|E(G)| ≥ dn − dn as claimed in Theorem 10.By Theorem 16 there exists a basic partition {X (a) for each i we have |U i | ≥ t; (b) G has maximum degree ∆(G) ≤ αt; (c) b(G, P) ≤ βt; and (d) G has no IT with respect to P. Figure 1 : Figure 1: The deficient block U is replaced with blocks U A of size t and U B of size |U | + 1. (b), (c) and (d), and with Property (a) replaced by (a ) Every block U with |U | < t consists of vertices of degree 1 and satisfies |U | = αt . 2 . If graph is empty, stop.If not, then it will contain no isolated vertices.Choose an edge xy whose explosion kills the smallest possible number of vertices.Set Z := N (x) ∩ N (y) in the current graph, and set X := N (y) \ Z and Y := N (x) \ Z.Call a vertex w ∈ X (respectively Y ) high if it has at least θd neighbours outside N (x) = Y ∪Z (respectively N (y) = X ∪ Z), and low otherwise. 
1,d j+1 to form a new block X i+1 of size |X i | + 1, and let the B-side of K 1,d j+1 form one more new block.(This B-side of K 1,d j+1 will become a terminal block of the partition P j+1 .)For each i, Lemma 18 guarantees that the new graph has no IT with respect to the new partition.Repeating this step until i reaches t − d j , we obtain |X t−d j +1 | = t, so this finishes the process of enlarging X to size t.We repeat this procedure for every terminal block X of P j , creating a new layer of star components K 1,d j+1 , whose B-sides are the terminal blocks of the new partition P j+1 .We denote the new graph by G j+1 .
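To illustrate numerically how such a sequence keeps Property (c) in the Appendix construction (this is only an illustrative instance of the choice described above): take t = 100 and β = 0.26, and set d 1 = 26 = βt and d 2 = 27, following the rule d j = d j−1 + 1. A block of P 2 of size t that was a terminal block of P 1 then has, by (iii), average degree (d 1 + (t − d 1 )d 2 )/t = (26 + 74 · 27)/100 = 20.24 ≤ βt = 26, while a terminal block of P 2 consists of degree-1 vertices and so has average degree 1. Lemma 20 from [13] is what guarantees that a suitable sequence with this property for every j exists once t is sufficiently large.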