id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
245592885 | pes2o/s2orc | v3-fos-license | HIV-1 capsid is the key orchestrator of early viral replication
Introduction
The cone-shaped capsid enclosing the viral RNA genome and the replication machinery is the hallmark of mature human immunodeficiency virus 1 (HIV-1). Over the past decade, our understanding of capsid function in the early events following membrane fusion and cytosolic entry underwent a major paradigm shift. In particular, recent studies using microscopic approaches shed new light on the involvement of the capsid structure in postentry replication and resolved some apparent discrepancies raised by earlier reports. The emerging model of the early phase of HIV-1 replication places the capsid in the leading role as key orchestrator of viral replication steps and HIV-1-cell interactions during the postentry phase. Here, we summarize how recent observations have reshaped our view on HIV-1 postentry (Fig 1).
Discard after entry? The traditional picture of the uncoating process
The most typical feature of mature, infectious HIV-1 particles is the emblematic conical capsid. This structure consists of a homomultimer of the viral CA (capsid) protein, which assembles into a closed fullerene structure consisting of approximately 250 CA hexamers and 12 CA pentamers, 7 at the wide and 5 at the narrow end of the cone (reviewed in [1,2]). This elaborate structure is not initially formed during virus assembly: Particle formation is directed by the viral Gag polyprotein consisting of multiple folded domains, including CA. Upon release of the immature, noninfectious particle, Gag is cleaved at multiple sites by the viral protease, yielding fully processed CA, which subsequently assembles into the mature CA lattice inside the complete virion (reviewed in [1]). Accordingly, processed CA and the mature CA lattice are not needed for virus assembly or release, pointing toward a function of the mature lattice structure during or after virus entry. This hypothesis was supported by several early reports that identified mutations in the CA region associated with defects exclusively in the early phase of HIV-1 replication or mapped the effects of HIV-1 restriction factors to the viral CA protein and CA lattice (reviewed in [1]).
Proteolytic maturation is traditionally understood as a solution for the assembly-disassembly paradox [3]: Stable virions assembled in the producer cell are converted into a metastable state, ready to rapidly release the viral genome upon cell entry. Accordingly, early schemes of HIV-1 replication assumed rapid and complete disintegration (uncoating) of the CA lattice after membrane fusion with subsequent cytoplasmic conversion of the genomic RNA into double-stranded cDNA in the released reverse transcription complex (RTC) (reviewed in [4]). The resulting preintegration complex (PIC) was suggested to enter the nucleus through intact nuclear pores, but the relevant nuclear import factors remained enigmatic. Rapid uncoating after membrane fusion appeared to be supported by the rapid and complete dissociation of the capsid when the membrane of cell-free virions was stripped by detergent and by the failure to detect cone-shaped objects in the cytosol of infected cells by electron microscopy (EM) (reviewed in [5]). Moreover, early replication complexes purified from extracts of infected cells contained little or no CA [5]. Immediate postentry uncoating could not explain, however, why the elaborate structure of the fullerene capsid had evolved, although it appeared to be dispensable for both virus morphogenesis and the early replication phase. Furthermore, mutational and microscopic analyses provided evidence for a functional and temporal link between capsid uncoating and reverse transcription, suggesting a more gradual uncoating process (reviewed in [5]). Recent advancements in fluorescence and EM based imaging approaches in conjunction with novel labeling strategies and elegant in vitro approaches helped to resolve the conundrum presented by apparent discrepancies between earlier studies.
A fresh look: New insights from biochemical and imaging studies
One main argument for rapid uncoating was the instability of the mature capsid observed in biochemical analyses. An important advance in our understanding came from the observation that the HIV-1 capsid is actually a stable structure in the presence of inositol hexakisphosphate (IP6) [6,7]. IP6 binds to and coordinates basic side chains in a pore on the surface of the mature CA lattice; this increases the half-life of the assembled capsid after detergent removal of the viral membrane from minutes to many hours. Adding deoxynucleoside triphosphates (dNTPs) to this stabilized structure in vitro even allowed complete reverse transcription of the endogenous viral genome and revealed breakage, but not complete disassembly of the lattice, when synthesis of the double-stranded cDNA approached completion [8]. Since IP6 is present in high concentration in the cytosol, these in vitro data clearly indicate that the capsid structure could remain intact for prolonged periods of time after membrane fusion and could support the full process of reverse transcription once dNTPs become available. But does this also occur in infected cells? Various methods have been used to identify and track viral replication complexes and their association with CA in different subcellular compartments (reviewed in [9]). They include immunofluorescence using different antibodies, incorporation of fluorescent fluid phase markers or fluorescently tagged proteins of the replication complex into the virion, as well as labeling of newly synthesized viral cDNA, indirect labeling of the capsid, and incorporation of substoichiometric amounts of CA-GFP fusion proteins into the capsid. Consistently, these studies confirmed that the incoming capsid stays intact and remains associated with the viral replication complex at least for some time after membrane fusion. Conclusions regarding time and location of eventual uncoating were highly divergent, however. Some studies reported delayed cytoplasmic uncoating with residual CA molecules retained on the RTC, while other groups suggested uncoating close to or directly at the nuclear pore, and yet other groups provided evidence for nuclear import of largely intact capsids and nuclear uncoating (reviewed in [2,9,10]). Importantly, these conclusions were generally based on indirect detection methods, which may explain some of the observed differences, albeit the possibility of multisite uncoating cannot be excluded at present. EM or electron tomography (ET) analysis could directly identify cytosolic structures, but previous studies yielded at best anecdotal evidence for conical structures in the cytosol of infected cells. This is not surprising, however: Detection of rare subviral structures against the dense background of the surrounding cytosol is comparable to finding a needle in a haystack. Three-dimensional correlative light and electron microscopy (CLEM), by which objects of interest are first localized using fluorescence imaging and then characterized by ET, can overcome this problem. Recent advances provided a powerful tool to analyze the three-dimensional architecture of complex biological objects directly in their cellular context. Employing this approach, we have identified numerous apparently intact postfusion HIV-1 capsids in the cytoplasm and in close vicinity to the nuclear envelope; almost all fluorescently labeled cytoplasmic HIV-1 RTC could be correlated with a typical capsid cone by CLEM [11]. 
This result is consistent with other studies reporting strong CA signals on cytoplasmic RTC resembling the signal of the complete virion core (reviewed in [9]). We thus hypothesize that the HIV-1 capsid stays intact, or at least largely intact, during cytoplasmic trafficking, and that enclosure of the RTC inside the capsid protects the HIV-1 genome, forms a reaction container for reverse transcription, and shields the nascent cDNA from innate sensing. Consistent with this model, induction of premature uncoating in the cytosol induced innate immune signaling and led to proteasomal degradation of subviral complexes [12,13].
Finding the key in the pocket: Interactions between the capsid and the nuclear pore
Following membrane fusion, apparently intact HIV-1 capsids travel along microtubules (MTs) toward the nuclear envelope. Trafficking requires dynein and kinesin-1 motors, which interact with the CA lattice through motor adaptor proteins (reviewed in [10,14]). Once at the nuclear periphery, kinesin-1 has been reported to facilitate interaction of the capsid with the nucleoporin Nup358 (RanBP2), relocated from the nuclear pore complex (NPC) to the cytosol. The latter interaction involves the cyclophilin domain of Nup358 and the flexible cyclophilin binding loop of CA that is exposed on the surface of the capsid (reviewed in [10,14]). A direct role of the microtubular network until capsid arrival at the nuclear envelope is consistent with our CLEM analyses: Most conical capsids in the vicinity of the nuclear envelope were closely associated with MTs; capsids were frequently found in multiples, suggesting that they arrived via common MT transport routes [11].
Once at the nuclear envelope, HIV-1 capsids appear to directly interact with the cytoplasmic face of nuclear pores, probably mediated by interaction with the cyclophilin domain of Nup358. Interestingly, tomogram renderings of capsids docked at the NPC revealed that the narrow end of the cone was almost always positioned toward the central NPC channel [11]. It is tempting to speculate that this orientation may be governed by the higher density of CA pentamers at the narrow end of the fullerene cone. Preferential interaction of highly curved capsid regions with Nup358 has been suggested recently [15], but other host factors may also be involved. Preferential interaction of the narrow end with the NPC may guide the orientation of the capsid, thereby initiating the nuclear import process: threading the narrow tip of the cone into the NPC channel could facilitate entry into the narrow gate. We hypothesize that this mechanism may also provide some explanation for the distinct conical shape of the HIV-1 capsid.
Stop right here: What happens at the nuclear pore?
Besides Nup358, HIV-1 CA has been shown to also interact with other nucleoporins, including Nup153, Nup214, Nup88, Nup62, Nup98, and Nup107 (reviewed in [10]). Nup153, an inner nuclear basket protein, binds to a hydrophobic pocket in the assembled CA lattice, and binding is suggested to be mediated by the Phe-Gly (FG) repeats of the nucleoporin. FG repeats are common to several nucleoporins and form a disordered gel-like structure that fills the central channel of the NPC and conducts all nuclear import processes (reviewed in [16]). Conceivably, the HIV-1 capsid may thus also interact with FG repeats of other nucleoporins besides Nup153. Since Nup153 is a nuclear basket component located at the nuclear side of the NPC, its interaction with the CA lattice would suggest that at least a partially assembled lattice-if not the complete capsid-is retained throughout nuclear import.
The main argument against nuclear import of the complete HIV-1 capsid has been its size: With a width of approximately 60 nm at the wide end, the CA appeared to be too large for translocation through the central NPC channel with a reported diameter of approximately 40 nm (reviewed in [2]). A second argument was the variable detection of CA on nuclear subviral HIV-1 complexes, which yielded low or undetectable signals in most studies (reviewed in [5,9]). Therefore, it seemed clear that uncoating is a prerequisite for passage of the subviral complex through the NPC. This conclusion was challenged by immunofluorescence analyses, which revealed strong CA signals on nuclear subviral HIV-1 complexes in macrophages [17]. CA signals were initially not detected in nuclei of infected T cells and variably detected in other cell-types, but this can now be explained by different accessibility of nuclear and cytoplasmic CA structures to antibody detection. Nuclear structures are highly decorated with large clusters of the cellular protein cleavage and polyadenylation specificity factor 6 (CPSF6; see below) and possibly other host cell proteins, which mask CA epitopes and thus prevent detection of the underlying capsid structure. This can be overcome by treatment with the small molecule PF74 (competing for CPSF6 binding to the CA lattice) or by different extraction treatments [18]. These data, as well as fluorescent labeling of CA fusion proteins [19], indicated that postfusion cytoplasmic and nuclear HIV-1 subviral structures contain comparable amounts of CA.
The described results leave the options that either the HIV-1 capsid is remodeled to fit through the NPC, the NPC may widen for capsid transport, or both. Recent CLEM and cryo-ET analyses resolved these questions, at least in part. Cryo-ET of nuclear pores in uninfected and infected T cells revealed that the diameter of the NPC central channel in situ is larger (approximately 64 nm) than reported in previous analyses of isolated nuclear pores [11]. It is thus sufficiently wide to accommodate the entire HIV-1 capsid. Apparently intact cone-shaped capsids containing dense nucleoprotein complexes were observed inside the NPC central channel (always with the narrow end pointing toward the nucleus; Fig 2) [11]. CLEM and cryo-ET analyses of nuclear subviral complexes rarely revealed cone-shaped structures enclosing dense nucleoprotein complexes, however. Most structures identified by correlative analyses appeared to be empty inside, sometimes retaining a cone shape, but often exhibiting tubular or irregular morphologies [11]. These data indicate that apparently complete HIV-1 capsids enter and pass through the NPC channel, but do not allow firm conclusions regarding alteration of the capsid during this passage.
[Figure legend fragment: NE, nuclear envelope; NPC, nuclear pore complex; MT, microtubule. Adapted from Lucic and colleagues [31].]
The Nup153 binding pocket on the CA is also the target of the nuclear host protein CPSF6, which has been reported to be a dependency factor promoting HIV-1 replication (reviewed in [10]). Super-resolution microscopy revealed recruitment of CPSF6 to HIV-1 cDNA containing complexes at the nuclear side of the NPC [17] and knockdown of CPSF6 or a mutation in the CPSF6 binding site of CA arrested subviral complexes at the nuclear pore [17]. Interestingly, CPSF6 is not essential for nuclear translocation or infectivity, and subviral complexes in the absence of CPSF6 binding accumulated directly adjacent to nuclear pores accompanied by viral genome integration into lamina-associated chromatin (reviewed in [10]). The same site in the CA lattice has recently also been reported to interact with the cytosolic host protein Sec24C [20]. Based on these observations, we hypothesize that there may be a handoff of HIV-1 capsids as they traffic toward the cell nucleus with consecutive (and potentially competitive) binding of several host factors (Sec24C-Nup153-CPSF6) to a highly reactive pocket in the CA lattice.
Based on the described results, we suggest a model for nuclear entry of the HIV-1 capsid: Following MT-mediated transport, the pentamer-rich narrow end of the capsid directly associates with Nup358 at the cytoplasmic face of a nuclear pore. Positioned into the central channel, the array of FG repeat binding pockets on the capsid surface mediates repetitive interactions with FG repeat nucleoporins throughout the central channel. The number of binding sites increases toward the wide end of the cone, leading to immersion of the capsid into the central NPC channel, with multivalent, low-affinity binding to FG repeats within the channel driving the capsid toward the nuclear side. Once the narrow tip reaches the nuclear basket, CPSF6 binding competes for and obscures the FG binding pocket on the CA lattice. CPSF6 binding may thus provide a ratchet type mechanism aiding release from the NPC. The latter is not essential for genome delivery into the nucleus, but may be kinetically relevant and also appears to be important for subsequent nucleoplasmic trafficking (see below). In this model, the entire HIV-1 capsid acts as an unconventional multivalent nuclear import machinery driving nuclear entry of large subviral cargo without requiring conventional nuclear import factors.
It's time to move on: Separation of the viral genome from the capsid in the nucleus
Following nuclear import, CPSF6 clustering on the CA lattice of subviral complexes has been reported to mediate nucleoplasmic trafficking to nuclear speckle domains [21][22][23]. Multiple replication complexes accumulate at these sites, and CLEM analysis also revealed small clusters of closely apposed nuclear capsid-like structures, both conical and tubular in appearance, upon high multiplicity infection [18]. It appears likely, therefore, that the entire capsid or capsid-like structure containing the (partially) reverse transcribed genome and replication machinery travels via CPSF6 to distinct subnuclear sites, where integration occurs. For chromatin integration, the capsid needs to open, and the cDNA with associated integrase (and potentially other factors) has to be released.
Nuclear uncoating has been studied by tracking of fluorescently labeled HIV-1 cores in living cells and by labeling the newly synthesized HIV-1 cDNA as well as components of the replication machinery. The former study indicated that uncoating occurs rapidly in close vicinity to the subsequent integration site, <1.5 hours before integration [19]. Fluorescent 2-color imaging and CLEM of HIV-1 dsDNA and a component of the replication machinery confirmed clustering in selected nuclear locations and revealed separation of the genomic viral DNA complex from the bulk of associated proteins (including most, if not all, of CA and CPSF6) over time [18]. Broken capsid-related structures were observed by CLEM in the position of the viral proteins after this separation, while the exposed viral cDNA appeared as dense elongated structure, morphologically resembling chromatinized DNA [18]. Based on these results, we hypothesize that the capsid does not cooperatively disassemble as previously suggested, but appears to break open once it reaches its final destination in nuclear speckles. The discrepancy to the labeling studies, which suggested loss of the CA lattice [19,24], might be explained by the fact that the latter made use of a fluorescent CA fusion protein, which was incorporated in substoichiometric amounts and may be preferentially lost upon capsid disintegration. Further studies will be required to fully resolve this apparent discrepancy.
Based on these data, we can conclude that the capsid with associated proteins eventually breaks open inside the nucleus and releases the viral genome and integration factors. This appears to occur in close vicinity of the actual integration site and does not appear to require cooperative disassembly of the CA lattice. What then triggers breakage of the capsid? One option is completion of reverse transcription. Multiple recent studies indicated that viral cDNA synthesis is completed inside the nucleus [18,19,[21][22][23]25]. dsDNA has different mechanical properties and requires a larger volume than the template ssRNA strands. Reverse transcription of the viral genome may thus lead to breakage of the capsid, once DNA synthesis reaches completion. Mechanical strain from the growing dsDNA as a trigger or driver of the uncoating process has been proposed in several earlier studies (reviewed in [26]) and is consistent with the detection of a partially broken CA lattice with emanating nucleic acid upon endogenous reverse transcription in isolated capsids in vitro [8]. It should be noted, however, that HIV-1-based vectors with much shorter genomes effectively transduce nondividing cells. Furthermore, the HIV-1 NC (nucleocapsid) protein has been shown to compact DNA [27] and may thus help to overcome the strain imposed by synthesis of the more rigid cDNA. Further studies will also be needed to determine whether and how the CA lattice may be (partially) destabilized during passage through the narrow nuclear pore channel and whether this could facilitate its subsequent breakage. Obviously, additional host cell factors at the final destination of nuclear speckles may also be involved. Genomic HIV-1 cDNA is rapidly chromatinized when becoming accessible inside the nucleus [28], and this is consistent with morphological appearance of released genomes by CLEM [18]. It is tempting to speculate, therefore, that immediate chromatinization of the viral cDNA, once it becomes accessible from inside the broken capsid, could provide a driving element for complete uncoating of the viral replication complex.
Conclusions
In summary, it has become clear that the HIV-1 capsid represents much more than a delivery container for the viral genome. It provides a closed environment for reverse transcription, protects the nascent viral cDNA from DNA sensors, and acts as delivery vehicle mediating transport toward and through the nuclear pore and even within the nucleus. These key functions render the mature capsid a promising target for antiviral drugs. Several capsid binding small molecules have already been developed into promising drug candidates, with the highly potent long-acting capsid inhibitor GS-6207 (lenacapavir) [29,30] in Phase II/III clinical trials (reviewed in [2]). | 2022-01-01T05:08:06.150Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "9b4da302ec139258bdfdaf86e13c0ab9e829506b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9b4da302ec139258bdfdaf86e13c0ab9e829506b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269386691 | pes2o/s2orc | v3-fos-license | Zearalenone exposure differentially affects the ovarian proteome in pre-pubertal gilts during thermal neutral and heat stress conditions
Abstract Zearalenone (ZEN), a nonsteroidal estrogenic mycotoxin, causes endocrine disruption and porcine reproductive dysfunction. Heat stress (HS) occurs when exogenous and metabolic heat accumulation exceeds heat dissipation. Independently, HS and ZEN both compromise swine reproduction; thus, the hypothesis investigated was two-pronged: that ZEN exposure would alter the ovarian proteome and that these effects would differ in thermal neutral (TN) and HS pigs. Pre-pubertal gilts (n = 38) were fed ad libitum and assigned to either TN (21.0 ± 0.1 °C) or HS (12 h cyclic temperatures of 35.0 ± 0.2 °C and 32.2 ± 0.1 °C) conditions. Within the TN group, a subset of pigs were pair-fed (PF) to the amount of feed that the HS gilts consumed to eliminate the confounding effects of dissimilar nutrient intake. All gilts orally received a vehicle control (CT) or ZEN (40 μg/kg/BW), resulting in six treatment groups: thermoneutral (TN) vehicle control (TC; n = 6); TN ZEN (TZ; n = 6); PF vehicle control (PC; n = 6); PF ZEN (PZ; n = 6); HS vehicle control (HC; n = 7); or HS ZEN (HZ; n = 7) for 7 d. When compared to the TC pigs, TZ pigs had 45 increased and 39 decreased proteins (P ≤ 0.05). In the HZ pigs, 47 proteins were increased and 61 were decreased (P ≤ 0.05). Exposure to ZEN during TN conditions altered sec61 translocon complex (40%), rough endoplasmic reticulum membrane (8.2%), proteasome complex (5.4%), asparagine metabolic process (0.60%), aspartate family amino acid metabolic process (0.14%), and cellular amide metabolic process (0.02%) pathways. During HS, ZEN affected cellular pathways associated with proteasome core complex alpha subunit complex (0.23%), fibrillar collagen trimer (0.14%), proteasome complex (0.05%), and spliceosomal complex (0.03%). Thus, these data identify ovarian pathways altered by ZEN exposure and suggest that the molecular targets of ZEN differ in TN and HS pigs.
Introduction
Zearalenone (ZEN) is an estrogenic nonsteroidal fungal mycotoxin from the genus Fusarium (Shier et al., 2001; Bennett and Klich, 2003; Zinedine et al., 2007) and a food contaminant due to high-temperature resistance during food processing (Bennett et al., 1980). ZEN is detectable in wheat bran and wheat germ oil, corn and corn germ oil, and corn by-products (Olsen et al., 1981; EFSA, 2011), and ingestion is the primary exposure route. Absorption of ZEN is rapid (Olsen et al., 1985; Kuiper-Goodman et al., 1987) and the half-life of ZEN in pigs is ~86 h, with absorption estimated at 80% to 85% from the gastrointestinal tract (Biehl et al., 1993).
Zearalenone is an endocrine-disrupting chemical (Metzler et al., 2010; Frizzell et al., 2011), structurally mimicking endogenous estradiol (E2) and binding to the estrogen receptor (Shier et al., 2001); a molecular scenario that inhibits the estrogen response element-mediated regulation of gene transcription (Bennett and Klich, 2003; Minervini and Dell'Aquila, 2008). In vivo, exposure to intraperitoneal ZEN at 7.5 mg/kg b.w. for 24 h inhibited follicle-stimulating hormone synthesis and secretion, and gilts administered 10, 20, or 40 μg/g b.w. of ZEN-contaminated feed for 28 d had enlarged uteri (James and Smith, 1982). Further evidence of an estrogenic impact of ZEN was observed as increased reproductive tract weight in female pigs administered 1.1, 2.0, or 3.2 mg/kg b.w. of ZEN for 18 d (Jiang et al., 2011).
Biotransformation of ZEN is species-dependent, and secondary metabolites formed may contribute to ZEN's toxicity (Malekinejad et al., 2006). Metabolism of ZEN occurs by two major biotransformation pathways: phase I reduction and phase II glucuronidation and sulfation. In poultry and cattle, β-ZOL is the primary ZEN metabolite (Joint FAO/WHO Expert Committee on Food Additives, 2000; Malekinejad et al., 2006; Zinedine et al., 2007; Videmann et al., 2008). However, in pigs, α-ZOL is the primary ZEN metabolite, and this is responsible for pigs being highly sensitive to ZEN because α-ZOL has a high affinity for the estrogen receptor (Malekinejad et al., 2006). Incidentally, the pig has similar ZEN biotransformation and sensitivity levels as humans (Pillay et al., 2002; Malekinejad et al., 2006; Videmann et al., 2008), which differs from rodents (Malekinejad et al., 2006). A provisional maximum tolerable daily intake for ZEN is 0.2 µg/kg body weight (b.w.) in humans (Joint FAO/WHO Expert Committee on Food Additives, 2000), based on a no observed effect level (NOEL) of 40 µg/kg b.w. in pigs (Joint FAO/WHO Expert Committee on Food Additives, 2000). In gilts, the lowest observed effect levels for the ovary, uterus, and vulva range from 17 to 200 µg/kg b.w. with a NOEL of 10 µg/kg b.w. (EFSA, 2011), but currently, a provisional maximum tolerable daily intake value for ZEN is not defined in the United States.
During HS, basal insulin concentrations increase (Itoh et al., 1998; Wheelock et al., 2010; Pearce et al., 2013), despite a reduction in feed intake (FI). This presents a potential link between HS and an altered response to chemical exposure, since experiments in hepatic cells have demonstrated that insulin and glucagon both regulate proteins involved in chemical biotransformation. In cultured hepatocytes, lack of insulin reduced Cytochrome P450 (CYP) isoforms 2B (CYP2B), 3A (CYP3A), and 4A (CYP4A) in response to xenobiotic induction (Woodcroft and Novak, 1999). Insulin also increased hepatic abundance of microsomal epoxide hydrolase (EPHX1), and this was ameliorated by inhibition of phosphatidylinositol 3-kinase (PI3K) or mitogen-activated protein kinase pathways (Kim et al., 2003). Conversely, glucagon decreased both hepatic EPHX1 and CYP isoform 2E1 (CYP2E1) protein abundance (Woodcroft et al., 2002; Kim et al., 2003). The insulin-regulated PI3K pathway also regulates ovarian chemical biotransformation (Bhattacharya and Keating, 2012; Bhattacharya et al., 2012, 2013). Thus, the potential for HS to change the ovarian response to an ovotoxicant is convincing, especially within the context of climate change.
This study investigated ovarian molecular targets of ZEN exposure and evaluated if this response differed in HS pigs.
We hypothesized that ZEN exposure would alter the ovarian proteome and that these effects would diverge in thermal neutral (TN) and HS pigs.
Animal and experimental design
All animal procedures were approved by the Institutional Animal Care and Use Committee at Iowa State University. This study utilized tissues collected from a previously described experiment (Roach et al., 2024). Briefly, female crossbred pre-pubertal gilts (61.5 ± 0.5 kg; 105 to 115 d of age; n = 38) were fed a standard diet formulated to meet all nutritional requirements (National Research Council, 2012). Gilts were exposed to constant TN conditions (21.0 ± 0.1 °C, 66.8% relative humidity) or cyclic HS (35.0 ± 0.2 °C from 0700 to 1900 hours, 42.0% relative humidity, and 32.2 ± 0.1 °C from 1900 to 0700 hours, 40.7% relative humidity) for 7 d. Environmental temperatures were selected to simulate summer conditions in the midwestern region of the United States. All animals experienced a 12:12 h light-dark cycle. The TN gilts were further divided into two subgroups which were either ad libitum fed (TN) or pair fed (PF) to calorically control for the reduction in FI which occurred in the HS gilts, and treatments were assigned as thermoneutral (TN) vehicle control (TC; n = 6), TN ZEN (TZ; n = 6), PF vehicle control (PC; n = 6), PF ZEN (PZ; n = 6), HS vehicle control (HC; n = 7), and HS ZEN (HZ; n = 7). The vehicle control and the ZEN (40 µg/kg BW; 0.04 ppm; Z2125, Sigma-Aldrich, Inc., St. Louis, MO) were provided in a 10 g cookie dough bolus at 0700 and 1900 hours. This dosage was based on previous publications in which ovarian effects were observed (Liu et al., 2017) and on the level of human exposure (Kuiper-Goodman et al., 1987; Joint FAO/WHO Expert Committee on Food Additives, 2000).
Tissue Collection
Pigs were euthanized on day 7 by captive bolt and exsanguination, and one ovary was immediately collected, weighed, and snap-frozen in liquid nitrogen followed by storage at −80 °C.
Ovarian Protein Isolation and Quantification
Whole ovarian tissue was powdered with a mortar and pestle on dry ice. Approximately 100 mg of powdered whole ovarian tissue was weighed and lysed in tissue lysis buffer (200 µL; 50 mM Tris-HCl, 1 mM EDTA, pH 8.5) supplemented with Halt protease and phosphatase inhibitor cocktail (P178442, Thermo Scientific, Waltham, MA). Lysed tissue was homogenized by sonication and incubated on ice for 30 min. Protein lysate was centrifuged at 10,000 rpm for 15 min at 4 °C and the supernatant was collected. Protein concentration was quantified using a Pierce BCA Protein Assay Kit (BCA; 23227, Thermo Scientific, Waltham, MA) and spectrophotometric detection.
Liquid Chromatography-Tandem Mass Spectrometry
Total ovarian protein samples were prepared as a working solution of 50 µg/µL diluted in lysis buffer. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis was performed as previously described (Clark et al., 2019; González-Alvarez et al., 2021) at the Protein Facility of the Iowa State University Office of Biotechnology. Briefly, 50 µg/µL of total protein was digested with trypsin/Lys-C for 16 h, dried, and reconstituted in buffer A (47.5 µL; 0.1% formic acid/water). A standard Peptide Retention Time Calibration mixture (PRTC; 25 fmol/µL) was spiked into each sample as an internal control. Protein and PRTC were injected onto an LC column and analyzed by tandem mass spectrometry. Fragmentation patterns were compared to MASCOT or Sequest HT theoretical fragmentation patterns for peptide identification. The area of the top three unique peptides per sample was used to determine protein abundance. The PRTC arithmetic mean was used as a normalization factor: the signal intensity of each peptide was divided by the PRTC arithmetic mean. Protein identities were confirmed by three peptides for each protein. MetaboAnalyst 4.0 was used for bioinformatics comparison by the Genome Informatics Facility at Iowa State University. Missing value imputation by the singular value decomposition method was performed. Values were filtered based on the interquartile range followed by generalized log transformation. Volcano plots depict alterations to proteins within treatments. UniProt was used to identify biological, molecular, and pathway information using KEGG identifiers for each protein.
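As a rough sketch of the quantification scheme just described (not the facility's actual pipeline), the Python snippet below normalizes peptide areas to the PRTC spike-in mean and estimates protein abundance from the top three unique peptides per protein. The column names `protein`, `peptide`, and `intensity`, and the use of a mean over the top three normalized areas, are assumptions made for illustration.

```python
import pandas as pd

def quantify_proteins(peptides: pd.DataFrame, prtc_intensities: pd.Series) -> pd.Series:
    """Illustrative protein abundance estimate from a peptide-level table."""
    # The PRTC arithmetic mean serves as the normalization factor;
    # each peptide signal intensity is divided by it.
    norm_factor = prtc_intensities.mean()
    peptides = peptides.assign(norm_intensity=peptides["intensity"] / norm_factor)

    # Require at least three unique peptides per protein (identity confirmation),
    # then summarize each protein by its top three unique peptide areas.
    def top3_area(group: pd.DataFrame) -> float:
        unique = group.drop_duplicates("peptide")
        return unique["norm_intensity"].nlargest(3).mean()

    counts = peptides.groupby("protein")["peptide"].nunique()
    confirmed = counts[counts >= 3].index
    abundance = (
        peptides[peptides["protein"].isin(confirmed)]
        .groupby("protein")
        .apply(top3_area)
    )
    return abundance
```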
Gene ontology analysis and protein-protein interaction web network
Gene Ontology (GO) analysis was conducted using the protein-coding gene classification system Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) to identify pathways of altered proteins within comparisons. Proteins altered by treatments with P ≤ 0.05 were compared to the Sus scrofa reference list for pathway classification. The percentage of each category was calculated by dividing the number of gene hits by the total number of genes. Common ovarian proteins were identified across treatment comparisons and protein-protein interaction web networks were computed using STRING.
Statistical Analysis
Student's t-test was used to compare control and treatment groups with an adjusted P-value false discovery rate (FDR) cutoff of 0.05. A fold change threshold of 1 was used to compare the absolute value of change and expression level between control and treatment values. The threshold for proteins deemed significant in the pathway analysis was P ≤ 0.05.
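The statistical comparison can be illustrated with a short Python sketch combining a per-protein Student's t-test, Benjamini-Hochberg FDR adjustment, and a fold-change filter. It is a stand-in for the MetaboAnalyst workflow rather than a reproduction of it; the data layout, the helper name, and the reading of the fold-change threshold as |log2 fold change| ≥ 1 are assumptions.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_proteins(control: pd.DataFrame, treated: pd.DataFrame,
                          fdr_cutoff: float = 0.05, fc_threshold: float = 1.0) -> pd.DataFrame:
    """Per-protein Student's t-test with BH-FDR and a log2 fold-change filter.

    control, treated: proteins in rows, biological replicates in columns,
    already log-transformed abundances (hypothetical layout).
    """
    records = []
    for protein in control.index:
        _, p = stats.ttest_ind(treated.loc[protein], control.loc[protein])
        log2fc = treated.loc[protein].mean() - control.loc[protein].mean()
        records.append((protein, log2fc, p))

    result = pd.DataFrame(records, columns=["protein", "log2fc", "p"]).set_index("protein")
    # Benjamini-Hochberg adjustment for the 0.05 FDR cutoff described above.
    result["fdr"] = multipletests(result["p"], method="fdr_bh")[1]
    result["significant"] = (result["fdr"] <= fdr_cutoff) & (result["log2fc"].abs() >= fc_threshold)
    return result
```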
Effects of ZEN exposure on the whole ovarian proteome in TN, PF, and HS pigs
Relative to TN gilts, 84 proteins were altered by ZEN (P ≤ 0.05; Figure 1A; Supplementary Table 1), with 45 increased and 39 decreased. Relative to PF gilts, ZEN altered 84 proteins (P ≤ 0.05; Figure 1B; Supplementary Table 2), with 45 increased and 39 decreased. While identical numbers of increased and decreased proteins due to ZEN exposure were identified when compared to the control ovaries in each nutritional group, the protein identities differed (Supplementary Tables 1 and 2). Relative to HS gilts, ZEN altered 108 proteins (P ≤ 0.05; Figure 1C; Supplementary Table 3), with 47 increased and 61 decreased.
Functional Classification of Biological Pathways Identified to be Altered in the Ovary by ZEN Exposure
Proteins identified to be different by ZEN (P ≤ 0.05) relative to TN, PF, and HS gilts were designated to classification systems based on their ascribed function using STRING GO analysis.
For molecular and cellular pathways, 11 functional classifications were determined, with the top five processes altered by ZEN relative to vehicle control in TN gilts being associated with the sec61 translocon complex (40%), rough endoplasmic reticulum membrane (8.2%), proteasome complex (5.4%), myofibril (2.6%), and structural molecule activity (1.9%; FDR ≤ 0.05; Figure 2; Supplementary Table 4). Additionally, the STRING protein-protein interaction network identified 75 nodes and 122 edges associated with protein function altered by ZEN in TN gilts (Figure 3).
Identification of Common Proteins and Functions Altered by ZEN Relative to TN/PF and HS Gilts
As previously mentioned, 84, 84, and 108 ovarian proteins were differentially altered by ZEN in TN, PF, and HS gilts, respectively. Of these proteins, 27 ovarian proteins were altered by ZEN regardless of thermal environment, albeit not always in the same direction (Figure 8). Functional protein-protein interactions of common proteins (P ≤ 0.05; Figure 9) depict 20 nodes and 5 edges. No biological processes were identified by STRING. Common proteins are grouped according to functional roles determined by STRING GO analysis (Figure 10).
Discussion
Recent climate conditions have resulted in intense heat waves which can deleteriously impact human and animal health. Widespread ZEN exposure is also driven by warm and moist climates, since improper grain drying can increase ZEN contamination (Bennett and Klich, 2003). Pig reproductive consequences of ZEN consumption are well-documented (Alm et al., 2006; Malekinejad et al., 2007; Jiang et al., 2011; Wan et al., 2022), and this study aimed to identify the ovarian molecular changes stemming from ZEN exposure during TN and HS conditions in the pre-pubertal female. The ZEN dose utilized is the NOEL observed in pre-pubertal gilts and is considered low risk for human intake (Joint FAO/WHO Expert Committee on Food Additives, 2000). Feeding a low dose was important to identify ovarian molecular changes caused by ZEN exposure without overt ovarian toxicity, since a differential impact of HS was also under investigation. The ZEN-exposed gilts experienced either TN or HS temperatures to mimic the summer months of the Midwestern United States, a period in which pigs endure seasonal infertility. A conserved species response to HS is reduced FI; therefore, a group of TN gilts was included to control for the difference in nutritional intake (the PF group). Pigs undergoing the HS conditions had elevated rectal temperature and respiration rate, confirming that HS was successfully induced (Roach et al., 2024). HS decreased FI by 36% and body weight by 3.6%, and decreased average daily gain, with an absence of a ZEN effect on FI or body weight (Roach et al., 2024). The age of pigs was purposely chosen to ensure that the observed effects of ZEN in TN and HS pigs were not due to the endogenous hormone milieu.
The mechanisms responsible for ovarian toxicity are diverse, and chemical exposures (single or multiple) can impact ovarian function in numerous ways. Thus, to evaluate the consequences of ZEN exposure in the pre-pubertal pig ovary, whole ovary proteomic profiling using LC-MS/MS was employed to measure the altered abundance of proteins between TC vs. TZ, PC vs. PZ, and HC vs. HZ in an unbiased manner, with network analysis providing an understanding of how affected proteins interact in the context of each thermal group. This approach permitted the identification of proteins that were molecular targets of ZEN within each thermal group as well as those that were commonly affected by ZEN exposure regardless of thermal environment.
In TN gilts, 84 ovarian proteins were altered by exposure to ZEN, identifying potential targets of ZEN-induced toxicity. Proteins with ovarian roles that were altered by ZEN exposure included PRDX4, AKR1B1, and CDK2. In the ovary, PRDX4 is secreted by cumulus cells and protects the oocyte against oxidative stress (Dai et al., 2022; Qian et al., 2022). There are reports of an association between PRDX4 and both polycystic ovary syndrome (PCOS; Meng et al., 2013; Gateva et al., 2019; Zhou et al., 2021) and ovarian aging (Qian et al., 2016). Exposure to ZEN increased AKR1B1, which is altered during the estrous cycle in the oviduct (Lopera-Vásquez et al., 2022) and functions in the control of luteolysis (Chang et al., 2017). This is interesting since these pigs were pre-pubertal in their developmental status. The cell cycle protein CDK2 is altered in the ovary in response to a variety of stressors (Wang et al., 2023a), including xenobiotic exposure (Rhon-Calderón et al., 2018). Additionally, in donkey granulosa cells exposed in culture to ZEN, CDK2 mRNA and protein levels were decreased (Zhang et al., 2018a). In this study, CDK2 was also decreased by ZEN exposure, demonstrating consistency across species albeit in different experimental paradigms. Thus, in TN gilts, proteins with roles in luteolysis, oxidative stress, and ovarian pathology were targets of ZEN.
An additional group of PF gilts was included to address the confounding issue of feed reduction due to HS. These gilts were also in TN conditions, and whilst these findings could have been included with the TN group, understanding the proteomic effects of ZEN exposure in the PF gilts could also be useful for situations in which underfeeding is also present. Proteins amongst the 84 proteins that were altered by ZEN in PF gilts that have interesting ovarian roles were YWHAZ and STAT1. Exposure to ZEN increased YWHAZ in the ovary; YWHAZ has a function in conferring luteal sensitivity to prostaglandin F2α (Goravanahally et al., 2009) and is also suggested to be involved in the transition from the follicular to the luteal phase of the estrous cycle (Zhang et al., 2019). Increased by exposure to TNF-α in cumulus cells in the bovine ovary (Piersanti et al., 2019), STAT1 was decreased by ZEN exposure in the PF gilt ovary. Interestingly, there is a relationship between the metabolic hormone ghrelin and STAT1 in granulosa cells of the ovary (Benco et al., 2009), and since the gilts in this study were limit-fed to match the HS FI level, this could reflect an interaction between nutritional status and ZEN exposure.
In the HS gilts, a greater number of proteins in totality were altered by ZEN exposure (108), including proteins with documented ovarian function, namely DAZAP1, FKBP5, DDX3X, HSPA9, HSPA1A, HSPA2, TXNL1, MGST1, and PPP2R2A. Exposure to ZEN increased DAZAP1, which functions in male germ cells (Vera et al., 2002) and is present in luteal cells of human and rat ovaries (Pan et al., 2005). Dazap1−/− mice are infertile with reduced ovarian size (Hsu et al., 2008). Another protein with luteal function, TXNL1 (Pokharel et al., 2020), was decreased by ZEN exposure in HS gilt ovaries. Ovarian FKBP5 was increased by ZEN in HS gilts. In fetal ovaries, exposure to dexamethasone increased FKBP5 (Poulain et al., 2012), and overexpression of FKBP5 was associated with chemoresistance in ovarian cancer cells (Sun et al., 2014), suggesting that ovarian FKBP5 is involved in the ovarian response to xenobiotic exposure. DDX3X was increased by ZEN in HS gilt ovaries; it is involved with cell survival and cell cycle control in embryonic development (Li et al., 2014) and has also been identified as a potential regulator of ovulation (Zaniker et al., 2023). As an indicator of ZEN-induced oxidative stress in gilt ovaries during HS, MGST1 is increased in oocytes from endometriosis patients (Ferrero et al., 2019) and is increased by antioxidant treatment of PCOS in rats (Zhou et al., 2021). In this study, MGST1 was decreased by ZEN exposure, which could reflect that the protein was depleted through functioning in the response to oxidative stress or could also be attributable to a mode of toxicity of ZEN exposure. Three HSPs were decreased in response to ZEN exposure in HS gilt ovaries: HSPA1A, HSPA2, and HSPA9, perhaps unsurprising considering their roles in the ovarian response to HS (Abdelnour et al., 2020) and to ZEN exposure (Chen et al., 2019; Yi et al., 2022). In cycling pig ovaries, HSPA1A is responsive to HS or lipopolysaccharide exposure alone in the absence of ZEN exposure (Seibert et al., 2019). Supplementation of oocytes with the antioxidant melatonin reduced HSPA1A mRNA in resultant bovine blastocysts (Cordova et al., 2022). A role for HSPA2 in the primordial to primary follicle transition is also supported in pig ovaries (Xu et al., 2017), and cultured granulosa cells increased Hspa2 mRNA abundance in response to nitropropionic acid as an indicator of oxidative stress induction (Kang et al., 2018). Thus, HSPs are increased in response to ZEN and HS and are markers of oxidative stress, and this study indicates that the combination of both alters the abundance of three HSPs in response to ZEN exposure in HS gilt ovaries. Finally, PPP2R2A is involved in ovarian cancer biology (Youn and Simon, 2013; Zhang et al., 2018b) and ZEN exposure promotes tumorigenesis in granulosa cells (Zhang et al., 2018c). PPP2R2A was decreased to the greatest extent of all proteins affected by ZEN in the HS gilts. Thus, in combination with HS, while a greater number of proteins were altered in abundance by ZEN exposure, similar ovarian roles in the oxidative stress response, apoptosis, and luteolysis were noted.
In an effort to identify proteins that were commonly affected by ZEN exposure independent of thermal load, a comparison of the altered proteins across treatments was made. Proteins with functional roles associated with metabolism (PPIA, RPIA, EF2, and ADR), immune response (ZDBF2, IGHG, and C1-INH), detoxification (AKR7A2, AASD1, CCS, PRDX5, and GSTA4), and transport (TMED7 and SEC61B) were identified to be altered by ZEN regardless of thermal exposure, albeit sometimes in opposing directions of change. Additionally, the cellular locations where those changes occurred were nuclear, ribosomal, and in the extracellular matrix.
Four proteins were altered by ZEN exposure in the same pattern of change across thermal groups: TRIM28 (decreased), PALLD (increased), CCS (decreased), and SNRPD1 (increased). TRIM28 was decreased by ZEN exposure in TN and HS gilt ovaries. Interestingly, loss of TRIM28 resulted in differentiation of ovarian granulosa cells to Sertoli cells in a manner dependent upon sumoylation (Rossitto et al., 2022), suggesting that a ZEN-induced reduction would detrimentally affect ovarian function. ZEN exposure increased PALLD in all three thermal groups. While ovarian roles for PALLD are not widely known, increased PALLD is associated with ovarian high-grade serous carcinoma (Davidson et al., 2020), potentially a contraindication of ZEN exposure. Accumulation of reactive oxygen species is lessened by the action of CCS (Culotta et al., 1997), and CCS was decreased by ZEN exposure in all thermal groups, potentially lessening the ovarian capacity to respond to oxidative stress. An oncogene, SNRPD1, lacks clearly defined ovarian roles (Liu et al., 2022; Dai et al., 2023; Wang et al., 2023b), although small nuclear ribonucleoproteins have been visualized in germinal vesicle stage oocytes, potentially indicating a role for SNRPD1 in oocyte function. Thus, the four proteins altered in the same pattern of change by ZEN exposure have physiological roles that, if disrupted by ZEN exposure, could be detrimental to ovarian function.
Of the other proteins altered by ZEN in all thermal groups, but not necessarily in the same pattern of change, those with known ovarian roles include TGFB1, which participates in primordial follicle activation (Pangas, 2012; Chen et al., 2022), luteal regression (Monaco and Davis, 2023), and regulation of extracellular matrix genes (Harlow et al., 2002). Though there are fewer defined roles for ovarian PDCD5, there are links with ovarian pathology, including ovarian cancer (Zhang et al., 2011) and granulosa cell apoptosis (Geng et al., 2022).
Relevant to detoxification and oxidative stress, GSTA4 is a phase II detoxifying enzyme that metabolizes electrophiles and carcinogens (Hubatsch et al., 1998), and it was decreased due to ZEN exposure in TN but increased in HS pig ovaries. The AKR7A2 protein detoxifies aldehydes and ketones and catalyzes the reduction of xenobiotics (Barski et al., 2008) and was increased by ZEN exposure in both TN groups but decreased in HS gilt ovaries. Similar to PRDX4, which was altered in TN gilts exposed to ZEN, PRDX5 responds to oxidative stress and induces proinflammatory cytokines in macrophages through activation of toll-like receptor 4 (Poncin et al., 2021). Both TN and PF gilts had reduced levels of PRDX5, while HS gilts had higher PRDX5 in response to ZEN. The granulosa cell transcriptome from primordial follicles had higher Prdx5 relative to primary follicles (Ernst and Lykke-Hartmann, 2018). Thus, ZEN exposure during HS increased the abundance of GSTA4 and PRDX5, suggesting an effort by the ovary to relieve oxidative and xenobiotic-induced stress.
In conclusion, this study identifies ovarian targets of ZEN and potential modes of toxicity. In addition, different thermal load paradigms resulted in differential ovarian responses to ZEN. The ovarian abundance of proteins with documented ovarian roles, and of those that function in metabolism, immune response, detoxification, and transport, was identified as a target of ZEN. Taken together, the data identify ZEN-induced ovarian alterations and support that the ovarian response to ZEN differs in TN relative to HS pigs, suggesting that hyperthermia can impact the outcome of ovarian xenobiotic exposure. Additionally, ovarian proteins consistently affected by ZEN exposure across thermal treatments could indicate potential targets for mitigating ZEN-induced reproductive toxicity.
Figure 1 .
Figure 1. Ovarian proteins altered by ZEN exposure. The volcano plot depicts the comparison between proteins identified in (A) TC vs. TZ, (B) PC vs. PZ, and (C) HC vs. HZ gilt ovaries. The solid horizontal line indicates where P = 0.05, with dots above the line having P < 0.05 and dots below having P > 0.05. The solid vertical line indicates a log2 fold change of ± 1.0, with dots to the right indicating increased and dots to the left denoting decreased proteins relative to the respective control. Bar charts represent the top five increased and decreased proteins per comparison, illustrated as fold change, in (D) TC vs. TZ, (E) PC vs. PZ, and (F) HC vs. HZ pre-pubertal gilts. TC: n = 6; TZ: n = 6; PC: n = 6; PZ: n = 6; HC: n = 7; HZ: n = 7; P ≤ 0.05.
Figure 2 .
Figure 2. Distribution of biological processes in ovarian proteins altered by ZEN exposure in thermal neutral gilts. Pie chart represents the biological processes in TC vs. TZ pre-pubertal gilts; FDR ≤ 0.05.
Figure 3 .
Figure 3. Web network of ovarian proteins altered by ZEN in thermal neutral pre-pubertal gilts. Protein-protein associations of 84 altered ovarian proteins are depicted as a web network. Network nodes represent proteins, with colored nodes indicating the first shell of interactors and white nodes indicating the second shell of interactors. Empty nodes illustrate proteins with unknown 3D structures and filled nodes represent a known or predicted 3D structure. Edges depict protein-protein associations between nodes and illustrate proteins with a shared function. Light blue edges = known interactions curated from databases, light pink edges = experimentally determined known interactions, green edges = gene neighborhood predicted interactions, orange edges = predicted interactions with gene fusions, and navy edges = predicted interactions with gene co-occurrence.
Figure 4 .
Figure 4. Biological pathway classification of ovarian proteins altered by ZEN exposure in pair-fed females. Pie chart represents the distribution of altered proteins identified in the PC vs. PZ comparison. Biological processes are presented as a percentage; FDR ≤ 0.05.
Figure 5 .
Figure 5. Protein-protein associations of ovarian proteins altered by ZEN exposure in pair-fed gilts. Network nodes represent proteins, with colored nodes indicating the first shell of interactors and white nodes indicating the second shell of interactors. Empty nodes illustrate proteins with unknown 3D structures and filled nodes represent a known or predicted 3D structure. Edges depict protein-protein associations between nodes and illustrate proteins with a shared function. Light blue edges = known interactions curated from databases, light pink edges = experimentally determined known interactions, green edges = gene neighborhood predicted interactions, orange edges = predicted interactions with gene fusions, and navy edges = predicted interactions with gene co-occurrence. P ≤ 0.05.
Figure 6 .
Figure 6. Molecular and cellular pathway classification of ovarian proteins altered by ZEN exposure in heat stress females. Pie chart represents the distribution of altered proteins identified in the HC vs. HZ comparison. Biological processes are presented as a percentage; FDR ≤ 0.05.
Figure 7 .
Figure 7. Protein-protein interactions of altered ovarian proteins in heat stress gilts are depicted as a web network. Network nodes represent proteins, with colored nodes indicating the first shell of interactors and white nodes indicating the second shell of interactors. Empty nodes illustrate proteins with unknown 3D structures and filled nodes represent a known or predicted 3D structure. Edges depict protein-protein associations between nodes and illustrate proteins with a shared function. Light blue edges = known interactions curated from databases, light pink edges = experimentally determined known interactions, green edges = gene neighborhood predicted interactions, orange edges = predicted interactions with gene fusions, and navy edges = predicted interactions with gene co-occurrence. P ≤ 0.05.
Figure 9 .
Figure 9. Protein-protein associations of ovarian proteins altered by ZEN independent of thermal treatment depicted as a web network. Network nodes represent proteins, with colored nodes indicating the first shell of interactors and white nodes indicating the second shell of interactors. Empty nodes illustrate proteins with unknown 3D structures and filled nodes represent a known or predicted 3D structure. Edges depict protein-protein associations between nodes and illustrate proteins with a shared function. Light blue edges = known interactions curated from databases, light pink edges = experimentally determined known interactions, green edges = gene neighborhood predicted interactions, orange edges = predicted interactions with gene fusions, and navy edges = predicted interactions with gene co-occurrence. P ≤ 0.05. | 2024-04-27T06:18:04.903Z | 2024-04-26T00:00:00.000 | {
"year": 2024,
"sha1": "80b50649af8a8c97b13bdae274c066786d0b5031",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ec2bfd9a3790c743e767762ae4a934f78a05d190",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234083850 | pes2o/s2orc | v3-fos-license | Behavior investigation of powder actuated shear connectors in composite beams with profiled sheeting
Eurocode 4 and the Russian SP 266.1325800.2016 composite structure design codes require the use of shear connectors to provide combined action of the steel beam and concrete slab. Both codes also allow the use of steel profiled sheeting as stay-in-place formwork. In this case, the designer has to apply a reduction factor to define the real shear resistance. However, the existing method is mostly based on round welded studs and does not take into account the peculiarities of powder actuated shear connectors. This paper presents the results of an investigation of the behaviour of powder actuated shear connectors Hilti X-HVB in composite beams. One of the goals was to explore the influence of the steel profiled sheeting's geometry on the shear resistance of the connectors. Fifteen test series were conducted with different numbers of connectors, slab thicknesses, installation parameters of the connectors, and types of profiled sheeting. All types of sheeting used for the test samples are manufactured according to the Russian national standard and are widely used for composite slab design. Some test cases had smaller edge distances for shear connector installation in profiled sheeting than allowed by the technical data of the producer, because such cases occur in real practice in CIS countries. The test results have been evaluated according to Annex B of Eurocode 4. The results for samples with solid slabs are in full agreement with external experience. Most series with profiled sheeting also match the predicted design values; however, some series showed insufficient results. Therefore, a deeper analysis and additional testing are necessary. The current investigation results could be used to refine the existing reduction factor included in the Russian composite structure design code SP 266.1325800.2016.
Introduction
Composite beams have been investigated in the USSR since the 1930s. The first field of application for these structures was bridge construction: Soviet scientists studied the basic principles of qualification and design of composite beams with rigid shear connectors [1]. The design approach was developed and included in the dedicated code SNiP 2.05.03-84 [2]. Several researchers worked on a similar approach for civil buildings: B. Markov [3], E. Khautin [4], M. Karpovskiy [5], and others. Based on the accumulated experience, the first design recommendations were developed in 1987 [6]. The design codes then evolved [7][8], leading to the development of the national design code SP 266.1325800.2016 [9].
The SP 266 design code allows composite beams to be designed at the plastic or elastic stage. In both cases, the designer has to ensure a full shear connection between the concrete slab and the underlying steel beam; a partial connection is not allowed. The distribution of shear force along the seam joining the reinforced concrete slab to the steel beam has to be determined for each design region as the difference between the stresses in the concrete and reinforcement cross-sections at the ends of the design region. The stresses should be determined using the geometric properties of the composite beam cross-section, and inelastic deformations from concrete cracking and shrinkage have to be taken into account.
The SP code allows different types of connectors to be used to carry the shear load: welded round bars, angles, channel-shaped connectors, etc. The code contains design formulas for the shear resistance of each type. Powder actuated shear connectors are also allowed: their design shear resistance has to be provided by the producer. The Hilti X-HVB is covered by ETA-15/0876 [10], which contains all the relevant technical data; these data can be confirmed by the test procedure of the Russian test method standard GOST R 58336-2018 [11].
If steel profiled sheeting is used as stay-in-place formwork, it is necessary to estimate the deck's influence on the bearing capacity of the shear connector by applying a reduction factor. The formula for ductile connectors depends on the deck's orientation to the steel beam and uses an approach similar to Eurocode 4. At the same time, SP 266 allows the use of alternative formulas for the reduction factor issued by shear connector producers in their technical data. The formulas from the ETA and from SP 266 are different (Table 1; note: n_r > 3 is not allowed, and the producer's positioning requirements must be observed). As we can see, the SP approach is more conservative for the case when the deck is oriented transverse to the underlying steel beam and the shear connectors are installed parallel to the deck. These formulas need to be verified by local tests.
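To make the comparison concrete, the sketch below implements the Eurocode-4-style reduction factor for ductile connectors in sheeting with ribs transverse to the beam, k_t = (0.7/√n_r)·(b_0/h_p)·(h_sc/h_p − 1). This is only a minimal illustration of the general approach: the upper limits on k_t and the ETA- and SP 266-specific coefficients for X-HVB connectors are not reproduced, and the variable names and example values are ours.

```python
from math import sqrt

def kt_transverse(n_r: int, b_0: float, h_p: float, h_sc: float) -> float:
    """Eurocode-4-style reduction factor for connectors in ribs transverse
    to the beam (illustrative only; code-specific caps on k_t are omitted).

    n_r  -- number of connectors per rib (EC4 uses n_r <= 2 in this formula)
    b_0  -- mean rib width [mm]
    h_p  -- rib (deck) height [mm]
    h_sc -- connector height [mm]
    """
    return (0.7 / sqrt(n_r)) * (b_0 / h_p) * (h_sc / h_p - 1.0)

# Hypothetical example: a 125 mm connector in a 60 mm deck with a 100 mm wide rib
print(round(kt_transverse(n_r=2, b_0=100, h_p=60, h_sc=125), 2))
```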
Test program and push-out specimen
The standard specimen is a piece of I-section 25K2 according to GOST R 57837-2017 [13]. The flanges of the I-section are connected to 600x600 mm normal-weight concrete slabs of different thicknesses. The slabs' form (solid or ribbed) depends on the test series. The slabs are connected to the I-section by powder actuated X-HVB shear connectors. The typical appearance of the specimen is shown in Figure 1. The test program consists of 15 series with 3 standard specimens each and is divided into 3 stages. The first stage consists of specimens with 95, 125 and 140 mm high connectors that were installed and tested in solid slabs. The goal was to check the reference design resistance from the ETA. There were specimens with the connectors oriented parallel to the underlying steel beam (P-series) and specimens with the connectors oriented transverse to it (T-series). The ETA contains no data for the T-series; the additional goal was therefore to check the scope of application of X-HVB shear connectors. The second stage consists of specimens with 95 and 125 mm high connectors that were installed in full compliance with Hilti's requirements and tested in slabs with metal decks made according to GOST 24045-2016 [14] (Table 2). The goal was to check the applicability of the ETA reduction factor to the local profiled decks. The third stage consists of specimens that are not compliant with the ETA spacing requirements. Some series had connectors installed in a weak position (X-150P-N60-1, X-150P-N60-2 and X-150T-N75 series). The X-150T-N75-1 series did not comply with the requirement for the minimum number of connectors in narrow decks (one instead of two). The installation of the connectors in the X-150P-N60-1 series formally matched the ETA requirements for the weak position; however, the distance between the slab edge and the first bottom connector was only 120 mm, which could be insufficient. The X-150P-N60-2 series had 2 connectors per row in the weak position, ignoring the minimum edge distance from the deck gauge to the connector. At the same time, ETA-15/0876 allows a similar positioning scheme for narrow decks, provided the rib width is more than 60 mm; the specimens matched this requirement. The potential problem zone is the distance between the slab edge and the first bottom connector, 120 mm. In the X-150T-N75 series, the minimum edge distance from the deck gauge to the connector was also ignored. The specimens of the X-150T-N75-1 series were made with the N75 deck laid with its narrow side on the underlying beam. For such decks, the ETA specifies installing not less than 2 connectors per row, but the goal was to check the importance of this requirement by installing 1 connector per row. It is also interesting to compare the results with the X-150T-N60 series, because there are 2 connectors per row in a similar deck (Table 3).
Testing and result evaluation
The tests were conducted in 2019 in the laboratory of Moscow State University of Civil Engineering. The MTS.Multiaxial.DS1.4811.DS1.50019 was used as the testing machine. The displacements were measured by a transducer on each concrete slab. The force was controlled by the internal transducer in the hydraulic cylinders. The loading was applied continuously at a fixed rate of 0.83 kN/sec. The results were evaluated by the Eurocode 4 methodology. Test results for the X-150P-N44 series were excluded for technical reasons. The reduction factors and design shear resistances were defined according to the ETA-15/0876 and SP 266 approaches. The results are presented in Table 4. The first-stage specimens showed excellent results in all cases. All failures were linked with dowel shearing. The results for the T-series demonstrated that the connectors can be installed in this way and that the technical data obtained for connectors oriented parallel to the steel beam can be used for calculating the shear resistance of transversely oriented connectors.
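As an illustration of the Eurocode 4 (Annex B) evaluation mentioned above, the sketch below derives a characteristic and a design resistance per connector from a set of push-out test results. The 0.9 factor and the partial factor γ_V = 1.25 are the commonly recommended Annex B values; the loads in the example are invented, and refinements such as the correction for the actual steel strength and the scatter check between specimens are omitted.

```python
def eurocode4_annex_b(failure_loads_kN, n_connectors, gamma_v=1.25):
    """Simplified Eurocode 4 Annex B evaluation of push-out tests.

    failure_loads_kN -- ultimate loads of (nominally three) identical specimens
    n_connectors     -- number of connectors per specimen
    Returns (P_Rk, P_Rd) per connector in kN.
    """
    per_connector = [p / n_connectors for p in failure_loads_kN]
    p_rk = 0.9 * min(per_connector)   # characteristic resistance: minimum test value reduced by 10%
    p_rd = p_rk / gamma_v             # design resistance
    return p_rk, p_rd

# Hypothetical series: three specimens with 8 connectors each
print(eurocode4_annex_b([260.0, 271.0, 255.0], n_connectors=8))
```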
The second stage results are contradictory: for connectors oriented transverse to the underlying beam, the ETA approach predicts the shear resistance more precisely than SP 266 in all cases except the X-150T-N60 series. In this series, the narrow side of the deck was used, with 2 connectors per row installed 100 mm apart. This case is covered by the ETA: the predicted shear resistance for the X-HVB 125 should be more than 26.7 kN per connector. In all three specimens, however, rib shearing was observed as the failure mode, with a much lower ultimate resistance per connector (Figure 2). One possible reason is an unfavorable specimen geometry that produces a non-uniform stress distribution between the 3 pairs of connectors (Figure 3). Additional tests or numerical simulations are required to reliably determine the reasons for this behavior of the connectors in such a situation. The third stage results fully confirm the ETA requirements. However, a deeper analysis could provide useful data about ductile shear connectors in the weak position: in most series, the distance between the slab edge and the first bottom connectors was insufficient, which causes premature concrete failure near the bottom connectors. Checking the moment when the bottom rib cracked could help to investigate the stress redistribution between the remaining shear connectors. For example, in the X-150P-N60-1, X-150P-N60-2 and X-150T-N75 series, more precise data could be obtained by considering the shear force redistribution. The X-150T-N75 series, with narrow ribs and one connector per row, gave a result similar to the X-150T-N60 series, with concrete rib shearing. The behavior of shear connectors in narrow ribs with b_0/h_p < 1.8 requires additional investigation.
Conclusions
Results of the tests confirm the application limits of the Hilti X-HVB and provide a basis for further investigation of shear connectors' behavior in composite beams made with Russian profiled sheeting. The tests confirmed the applicability of shear connectors oriented transverse to the underlying steel beam in solid slabs without additional testing.
The SP 266 approach for defining the reduction factor is too conservative in comparison with the ETA formulas. The ETA approach could be used for local decks with wide ribs (b_0/h_p ≥ 1.8) when the manufacturer's application limits are observed. The scope of application of the shear connectors in narrow ribs requires additional investigation.
The SP 266 needs to be updated with additional requirements for shear connector spacing. The GOST R 58336-2018 shear connector test method standard needs to be updated with a restriction on the minimum distance between the slab edge and the first bottom concrete rib with connectors. | 2021-05-10T00:03:27.992Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "4ca373c74a961b5372be5c9f26f6ada6612a5505",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1070/1/012104",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "97057743a3e87ba4169459b08b99a7b20b9e7473",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
17856724 | pes2o/s2orc | v3-fos-license | Triplet State of the Semiquinone–Rieske Cluster as an Intermediate of Electronic Bifurcation Catalyzed by Cytochrome bc1
Efficient energy conversion often requires stabilization of one-electron intermediates within catalytic sites of redox enzymes. While quinol oxidoreductases are known to stabilize semiquinones, a notable exception is the quinol oxidation site of cytochrome bc1 (Qo), for which detection of any intermediate states is extremely difficult. Here we discover a semiquinone at the Qo site (SQo) that is coupled to the reduced Rieske cluster (FeS) via spin–spin exchange interaction. This interaction creates new electron paramagnetic resonance (EPR) transitions, with the most prominent g = 1.94 signal shifting to 1.96 with an increase in the EPR frequency from X- to Q-band. The estimated value of the isotropic spin–spin exchange interaction (|J0| = 3500 MHz) indicates that at a lower magnetic field (typical of X-band) the SQo–FeS coupled centers can be described as a triplet state. Concomitantly with the appearance of the SQo–FeS triplet state, we detected a g = 2.0045 radical signal that corresponded to a population of unusually fast-relaxing SQo for which spin–spin exchange does not exist or is too small to be resolved. The g = 1.94 and g = 2.0045 signals reached up to 20% of cytochrome bc1 monomers under aerobic conditions, challenging the paradigm of the high reactivity of SQo toward molecular oxygen. Recognition of stable SQo reflected in the g = 1.94 and g = 2.0045 signals offers a new perspective on understanding the mechanism of Qo site catalysis. The frequency-dependent EPR transitions of the SQo–FeS coupled system establish a new spectroscopic approach for the detection of SQo in mitochondria and other bioenergetic systems.
Biological energy conversion faces an engineering problem of joining the one- and two-electron stoichiometry of redox reactions between substrates and cofactors. Most catalytic sites accomplish this by supporting two sequential one-electron transfers toward a single cofactor chain involving a stable intermediate radical. 1,2 The catalytic Q o site of cytochrome bc 1 (respiratory complex III) is different and unique in that it changes the electronic stoichiometry by steering two electrons from ubiquinol (QH 2 ) to two separate chains of cofactors: it delivers one electron to the Rieske cluster (FeS) in the high-potential chain and the second electron to heme b L in the low-potential chain (Figure S1 of the Supporting Information). 3−6 The common view of this bifurcation process is that the intermediate semiquinone radical (SQ o ), formed by one-electron oxidation of QH 2 by FeS, is highly unstable 5,7 and reduces heme b L very rapidly before it can react with dioxygen to generate superoxide. 8−11 This concept has been supported by a general difficulty in detecting SQ o under aerobic conditions. In fact, the only report of detection of SQ o under those conditions comes from early studies with submitochondrial particles (SOM). 12 The origin of this signal was, however, questioned by later studies showing the insensitivity of the SQ signals in SOM to specific inhibitors of the Q o site. 13 More recent studies reported either detection of small amounts of SQ o under anaerobic conditions 14,15 or a lack of detection of SQ o under aerobic conditions. 16 Here, we explore the possibility that the intriguing lack of SQ o detection is a result of its magnetic interactions with metal centers of the Q o site rather than an effect of its high instability. In principle, a strong antiferromagnetic coupling of SQ o with a metal center could result in the elimination of the SQ o electron paramagnetic resonance (EPR) signal, as proposed by Link. 17 However, if the coupling is ferromagnetic and/or weak (in comparison to the thermal energy of the lattice), it may be expected that it will manifest itself as a new spectroscopic identity. 18,19 Indeed, by exposing the purified enzyme to its substrates (oxidized cytochrome c and QH 2 ), we have detected new transitions in EPR spectra assigned to a SQ o magnetically coupled to reduced FeS via spin−spin exchange interaction. We also detected a separate radical signal of SQ o with relaxation properties consistent with its location between the metal centers of the Q o site. This discovery offers a new perspective on understanding the mechanism of quinol oxidation at the Q o site. It also provides new insight into side reactions of the catalytic cycle involved in the production of superoxide by cytochrome bc 1 .
■ MATERIALS AND METHODS
Biochemical Procedures. The cytochrome bc 1 complex was isolated from the purple bacterium Rhodobacter capsulatus strain grown semiaerobically as described previously. 20 Bovine cytochrome c, 2,3-dimethoxy-5-decyl-6-methyl-1,4-benzoquinone (DB), and inhibitors (antimycin, myxothiazol, atovaquone, azoxystrobin, kresoxim-methyl, and famoxadone) were purchased from Sigma-Aldrich and used without further purifications. Tridecyl-stigmatellin was a generous gift from N. Fisher. DB was dissolved in an HCl/DMSO solution and then reduced to its hydroquinone form (DBH 2 ) with sodium borohydride. Inhibitors were used in 5-fold molar excess over the concentration of cytochrome bc 1 monomers. Cytochrome bc 1 and cytochrome c solutions were dialyzed against the reaction buffer composed of 50 mM Tris (pH 8.0), 100 mM NaCl, 20% glycerol (v/v), 0.01% (m/m) dodecyl maltoside, and 1 mM EDTA. All buffers were in equilibrium with air. Glycerol, added as a cryoprotective agent, increased the viscosity of the reaction buffer, which resulted in a deceleration of the overall catalytic turnover rate of the enzyme by decreasing diffusion rates of the substrates.
Freeze-quench experiments were performed using a Biologic SFM-300 stopped-flow mixer equipped with an MPS-70 programmable syringe control. The system was equipped with EPR FQ accessories. One syringe contained a cytochrome bc 1 /cytochrome c solution, and the second syringe contained DBH 2 in reaction buffer. Steady-state reduction of cytochrome c by cytochrome bc 1 was initiated by mixing the cytochrome bc 1 /cytochrome c solution with DBH 2 in a 1:1 volume ratio to obtain final concentrations of cytochrome bc 1 , cytochrome c, and DBH 2 of 50, 393, and 665 μM, respectively. The reaction mixture was incubated at room temperature in a delay line for a programmed number of milliseconds and then injected into an isopentane bath cooled to 100 K. Samples with higher cytochrome bc 1 concentrations required for hemes b measurements were prepared by manual injection of DBH 2 into the cytochrome bc 1 /cytochrome c solution inside EPR tube. The reaction was stopped by immersing the tube into cold ethanol glue.
EPR Spectroscopy and Data Analysis. All measurements were performed using a Bruker Elexsys E580 spectrometer. X-Band continuous wave electron paramagnetic resonance (CW EPR) spectra of hemes and FeS were measured at 10 and 20 K, respectively, using a SHQE0511 resonator and ESR900 cryostat (Oxford Instruments). X-Band spectra of semiquinones were recorded using a TM9103 resonator equipped with a temperature controller system (Bruker). Q-Band spectra of semiquinones were measured at 200 K by CW EPR using an ER507D2 resonator (Bruker) equipped with homemade modulation coils using a 0.6 mT modulation amplitude, a 90 kHz frequency, and a 1.92 mW microwave power. Q-Band echo-detected EPR (ED EPR) spectra of FeS were measured at 10 K using a π/2−148 ns−π sequence with a π pulse of 48 ns and a shot repetition time of 300 μs. First-derivative spectra of FeS were generated by applying the pseudomodulation procedure 21 on ED EPR spectra using Eleana (http://www.wbbib.uj.edu.pl/web/gbm/eleana). The magnitude of the external magnetic field was controlled using a Bruker NMR teslameter.
The microwave power saturation profiles of semiquinones were fit using formulas described in ref 22. The data for chemically induced semiquinone (SQ CH ) were fit assuming a contribution from one saturable component, while data for SQ o were fit assuming the presence of two species: major, nonsaturable component and minor, saturable component. The temperature dependencies of the amplitude of SQ CH were fit with the well-known Curie law. The data for SQ o were fit assuming the presence of the Leigh effect 23 in which the correlation time of the fluctuating dipolar field increases with a decrease in temperature. Q-Band spectra of semiquinones were simulated with Easy-spin 24 using the anisotropic g tensor, assuming homogeneous and inhomogeneous line broadening.
Spectral simulations based on a spin Hamiltonian including Zeeman interaction of spins of FeS and SQ o centers with the external static magnetic field and a general bilinear spin−spin interaction term were performed as described in the Supporting Information.
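The full anisotropic treatment is described in the Supporting Information. As a rough numerical illustration of the kind of calculation involved, the sketch below (our own simplification, not the authors' simulation code) builds an isotropic two-spin-1/2 Hamiltonian with Zeeman terms for FeS and SQo plus an exchange term J0·S1·S2 and diagonalizes it at an X-band-like field; the effective g-values used here are assumptions for illustration only.

```python
import numpy as np

# Spin-1/2 operators (units of hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# Assumed effective isotropic g-values (illustration only)
g_fes, g_sq = 1.90, 2.0045
J0_MHz = 3500.0          # isotropic exchange constant estimated in the paper
B_mT = 340.0             # roughly an X-band resonance field
mu_B_over_h = 13.996     # Bohr magneton / Planck constant, in MHz per mT

# Zeeman terms for each spin (static field along z) plus isotropic exchange S1.S2
H = (g_fes * mu_B_over_h * B_mT * np.kron(sz, id2)
     + g_sq * mu_B_over_h * B_mT * np.kron(id2, sz)
     + J0_MHz * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)))

# Energy levels in MHz; their differences give the allowed EPR transitions
levels = np.sort(np.linalg.eigvalsh(H))
print(np.round(levels, 1))
```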
■ RESULTS
Detection of New EPR Transitions Associated with the Q o Site of Cytochrome bc 1 . In searching for intermediates of the Q o site, we performed series of experiments in which isolated cytochrome bc 1 in equilibrium with air catalyzed steady-state electron transfer from the water-soluble QH 2 analogue [2,3-dimethoxy-5-decyl-6-methyl-1,4-benzohydroquinone (DBH 2 )] to oxidized cytochrome c, and the time course of spin states of redox centers was monitored by EPR. The time points of freezing the samples were selected to cover the range from the beginning of the reaction until an equilibrium between the substrates and the products was reached. As a measure of the reaction progress, the amount of oxidized cytochrome c available for reaction was determined from the amplitude of the EPR signal of heme c (not shown). We compared two cases: the reaction catalyzed by the noninhibited enzyme and that catalyzed by the enzyme inhibited with antimycin. These two cases differ by the way in which the heme b L undergoes reoxidation (after its initial reduction by an electron derived from quinol) to support the turnover of the Q o site. In the noninhibited enzyme, heme b H rapidly reoxidizes heme b L and then transfers an electron to the Q i site (see Figure S1 of the Supporting Information). This reaction sequence continues until the equilibrium is reached (the substrates are used up). In the antimycin-inhibited enzyme, the Q i site is blocked by the inhibitor, and after the first QH 2 oxidation at the Q o site, heme b H remains reduced, preventing fast reoxidation of heme b L after the oxidation of a second QH 2 at the Q o site. Nevertheless, this heme can undergo slow reoxidation by the back electron transfer to SQ o that re-forms QH 2 25 or electron transfer to Q that forms SQ o . 5,26−28 With these reactions, the Q o site can also keep the turnover until the equilibrium is reached, although the overall rate is significantly slower than that of the noninhibited enzyme.
As shown in Figure 1a, in the noninhibited enzyme, the level of reduced FeS increased within the first 7 s, reflecting the expected progress of the reaction, and after an equilibrium had been reached, the amplitude of the FeS signal remained constant. In the antimycin-inhibited enzyme, the rate of reaching the equilibrium level of reduced FeS decreased, as expected, but at the same time, quite unexpectedly we observed an additional EPR transition at g = 1.94 (Figure 1b). Its amplitude reached a maximum at 10 s and then gradually decreased to zero. A comparison of amplitudes of EPR signals of hemes b shown in Figure 1c indicates that in the samples exhibiting a g = 1.94 signal, heme b L remained fully oxidized. The presence of a g = 1.94 signal correlated with the presence of another weak signal of organic radical at g = 2.0 (exact value of 2.0045) detected with the use of a high microwave power (Figure 1b). Both g = 1.94 and g = 2.0 signals arose during the enzymatic turnover to reach their maximal amplitudes at the time where the g y (1.89) transition of reduced FeS reached approximately half of its maximal amplitude. After the maximum had been reached, the amplitude of both g = 1.94 and g = 2.0 signals gradually decreased, and when the system reached equilibrium (g y of FeS remains at its maximum), both signals disappeared completely.
The experiments described in Figures 1c and 2 asserted that g = 1.94 and g = 2.0 signals originate specifically from the Q o site. Both signals were sensitive to inhibitors of the Q o site and to point mutations that abolish the activity of the site 7,29 and were not present in the b-c 1 subcomplex lacking the FeS subunit. 20 On the other hand, the amplitude of the g = 1.94 and g = 2.0 signals was larger in the mutants with affected motion of the FeS head domain (+2Ala) 30 (see Figure S2 of the Supporting Information). As +2Ala arrests this domain at the Q o site for seconds with FeS in the reduced state 30 (this way it abolishes the natural submillisecond electronic connection between the Q o site and heme c 1 ), the observed enhancement of the signals immediately suggests that they must be associated with paramagnetism of FeS occupying the Q o site. Furthermore, in light of all of the results described above, the g = 2.0 signal must report SQ o . We note that g = 1.94 and g = 2.0 signals were not present in samples reduced with dithionite (not shown), precluding the possibility that they originate from a contamination of the sample with low-potential iron−sulfur centers.
Identification of the Semiquinone−Rieske Cluster Coupled System. Chemicals, such as DMSO or glycerol, and some point mutations have been reported to induce small changes in the EPR spectra of iron−sulfur clusters in proteins (Rieske or ferredoxins) with shifts in the g y values of <0.01. 29,31−34 The new g = 1.94 transition does not fall into this category, because the observed difference between the g y of Rieske and the new signal was 1 order of magnitude larger (Δg ∼ 0.05) and the signal disappeared over time. Most importantly, the g = 1.94 signal detected at X-band (9.46 GHz) shifted to a g = 1.96 when the same samples were measured at Q-band (33.5 GHz) (Figure 3, black). This excludes the possibility that this signal originated from a new paramagnetic center. It thus must be a result of magnetic interactions between two closely separated paramagnetic species. An assumption that reduced FeS at the Q o site is one of them leaves SQ o as the only possible candidate for the other.
To verify that both FeS and SQ o do interact with one another and to identify the dominant mechanism responsible for the appearance of a new EPR spectrum, we performed simulations based on a spin Hamiltonian including isotropic (scalar exchange) and anisotropic (exchange and dipolar) terms of spin coupling between SQ o and the FeS cluster (Figure 3) (see details in the Supporting Information). 18 Dipolar interaction alone appeared to be too weak to produce the g = 1.94 transition. However, when spin−spin exchange interaction was taken into account and its frequency |J 0 | was on the order of 3500 MHz (∼0.1 cm −1 ), the simulations neatly reproduced experimental spectra (Figure 3). We thus identified the SQ o −FeS coupled system that at lower magnetic fields (those used at X-band) exists as a triplet state (and will be termed as such in the remaining text). The fast relaxation of this SQ o is evident from its relaxation properties (Table 1), an inability to saturate it with microwaves (Figure 4c), and the presence of a Leigh effect (Figure 4d). 23 We note that the fast relaxation makes this SQ o signal different from other reported SQ o signals 12,14,15 that did not show signs of interactions with the FeS and/or heme b L metal centers of the Q o site. Furthermore, the signals reached maximal amplitudes when FeS and cytochrome c (acting as the oxidizing pool) were approximately half-reduced (Figure 1b). This suggests that the probability of trapping the g = 1.94 and g = 2.0 intermediates comes as a result of competition between the rate of oxidant-induced heme b L reduction and the rate of its oxidation by the transfer of an electron from heme b L to Q to form SQ o at the time when FeS is reduced. It follows that the conditions of the formation of SQ o −FeS coupled centers are not favored at the beginning of the reaction, when the population of Rieske clusters is largely oxidized and capable of "consuming" electrons from SQ o (time points before appearance of the g = 1.94 and g = 2.0 signals in Figure 1b). On the other hand, as the system reaches equilibrium, the populations of Rieske clusters and cytochrome c become largely reduced and the average rate of oxidant-induced reduction of heme b L decreases, diminishing the amount of electron donor for Q at the Q o site. This leads to the loss of SQ o −FeS and SQ o signals (Figure 1b). These observations can be summarized in a scheme of Q o site states (Figure 5). It can be envisaged that the SQ o −FeS triplet forms as an initial step of oxidation of QH 2 when oxidized FeS withdraws an electron from QH 2 (state b in Figure 5). Evolution of this state into the state where SQ o and reduced FeS exist as separate spectral identities (state c in Figure 5) leads to immediate reduction of heme b L by SQ o , which completes the reaction generating Q (state d in Figure 5). In this scheme, a direct transition from state b to state d cannot be ruled out and might be even rapid enough to consider the two-electron oxidation of QH 2 at the Q o site as a virtually concerted process. The flow of electrons out from the cofactor chains (state e in Figure 5) allows the enzyme to regain state a to complete the cycle. For this scheme, the measured g = 1.94 (the SQ o −FeS triplet) and g = 2.00 (SQ o ) signals are spectroscopic signatures of states b and c, respectively.
These states were detected only when the flow of electrons out from the Q i site was blocked by antimycin (interrupted transition from state d to e) that, in the context of full reversibility of Q o site reactions, indirectly increased the probability of transfer of an electron from reduced heme b L to Q to form SQ o (bringing the site back to state b or c). 5,26−28 One may ask why a significant amount of SQ o cannot be detected in the noninhibited enzyme. At this stage, the precise answer is difficult. Nevertheless, we may propose that if electron transfer among SQ o , heme b L , and heme b H is a pure tunneling process, not coupled to any chemical event (like protonation/deprotonation, conformational change, etc.), then freezing the samples will not prevent the transfer of the electron from SQ o to heme b H involving a transient step through heme b L . However, in the antimycin-inhibited enzyme, heme b H remains reduced; thus, in frozen samples containing a reduced FeS cluster, an electron may circulate only between SQ o and heme b L . Under these conditions, the highest probability of finding unpaired electrons is on SQ o −FeS coupled centers, and as long as the electron circulation is significantly slower than the Larmor frequency (∼9.5 GHz), it exerts no effect on the EPR spectra of SQ o −FeS coupled centers at the Q o site.
Thermodynamic Properties of the SQ o −FeS Couple. While the quantity of residual SQ o (from state c) cannot be determined because of the presence of the Leigh effect, 23 the estimated maximal abundance of the SQ o −FeS triplet state (state b) reaches as much as ∼9 and ∼17% of the total concentration of FeS in WT and +2Ala cytochrome bc 1 , respectively ( Figure 3). This indicates that SQ o may not be as highly unstable as the models of the Q o site assume. 5,10,11,13 This raises the question of how much the stability constant (K stab ) of SQ o detected in this work differs from the K stab of ≪10 −7 typically reported in the literature. 7,14,15,25,36 Any temptations to estimate this difference must consider the fact that in our experiments the new intermediates were detected under nonequilibrium conditions of continuous turnover; thus, the use of K stab for a description of the stability of SQ o may be invalid, as this parameter is used to define stability in systems under thermodynamic equilibrium conditions. Nevertheless, the use of this parameter for the description of SQ o −FeS triplet stability at the time point (t max ) where the amount of SQ o is the highest yields a K stab on the order of 10 −2.6 . a This is more than 3 orders of magnitude larger than the previously defined upper limit of K stab for SQ o . Such a value of K stab makes the stability of SQ o comparable to stabilities of other semiquinones in proteins, such as that of the Q i site. 35 Until now, the Q o site has been considered exceptional in that, unlike other quinol oxidation−reduction sites, it did not stabilize semiquinones that were naturally volatile outside the protein matrix. 2 Our work suggests that the instability of SQ o is apparent and is a consequence of the simultaneous accessibility of two redox partners rather than a lack of an influence of the site on the stability of SQ o .
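The arithmetic behind this estimate (spelled out in the Additional Note below) can be checked in a few lines; the concentrations are those quoted by the authors, and the snippet simply evaluates K_stab = [SQo]^2 / ([Q][QH2]).

```python
from math import log10

# Concentrations at t_max, from the Additional Note (molar)
sq, q, qh2 = 10e-6, 580e-6, 75e-6
k_stab = sq**2 / (q * qh2)
print(f"K_stab = {k_stab:.2e} = 10^{log10(k_stab):.1f}")  # ~10^-2.6, as reported
```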
Relation of SQ o to the Superoxide-Generating Activity of Cytochrome bc 1 . The observation that large quantities of SQ o can be detected under aerobic conditions indicates that SQ o is not as highly reactive with oxygen as current mechanisms of superoxide production by cytochrome bc 1 assume. 10,11,14,37 In fact, high levels of the SQ o −FeS triplet state signal observed in the +2Ala mutant, which does not produce any detectable superoxide, 27,28 indicate that conditions of triplet formation (when SQ o is likely to be hydrogen-bonded to histidine liganding the FeS cluster) do not impose a risk of electron leaks on oxygen. This, however, does not preclude the possibility that the enzyme faces such a risk if SQ o is present at the time when FeS is remote from the Q o site 27,28 (and the hydrogen bond is not formed). This could be explained in analogy to the reactions of 1,4-semiquinones with oxygen in solution. In such chemical systems, it was found that "hydrogen bonding of the -OH moiety in the semiquinone radical to the HBA (hydrogen-bond-accepting) solvent prevents reaction of the semiquinone with O 2 ". 38 Possible Contribution of bc-Type Complexes to the g = 1.94 Signal in Other Bioenergetic Systems. Signals near g = 1.94 often reported in studies on mitochondrial and bacterial respiration have usually been attributed to iron−sulfur clusters of complex I and II, even though their origin was not always clear. 39−42 Our work implies that the Q o site of complex III, so far beyond consideration, should in fact be regarded as one of the possible contributors to the mitochondrial g = 1.94 signal. The diagnostic feature of the Q o site-deriving g = 1.94 signal at X-band is its shift to larger values with an increase in EPR frequency, as observed in cases of weak exchange between two paramagnetic centers. 19 We anticipate that knowledge of spectroscopic properties of the SQ o −FeS triplet signal will allow us to examine whether it can accumulate in mitochondria to relate SQ o levels with other radicals, including ROS, formed during respiration. 43
Figure 5 (caption). The enzyme goes through further states to reach the initial state a. Antimycin prevents oxidation of heme b H , interrupting the transition from state d to state e. Black and red denote the oxidized and reduced cofactor, respectively, while the dot with an arrow indicates the paramagnetic state of the center. Orbitals engaged in spin exchange are shown as gray ovals. Blue, black, magenta, and green spectra are EPR spectra of heme b L , SQ o , FeS, and the SQ o −FeS triplet state, respectively. Green arrows show transitions between the enzyme states. The blue box denotes the state that was detected as a major fraction of SQ. The scheme does not consider the still unknown proton transfers that may influence transitions between the states.
■ CONCLUSIONS
In this work, we identify new EPR transitions (g = 1.94 and g = 2.0) associated with the enzymatic activity of cytochrome bc 1 . Those two transitions revealed the presence of two distinct populations of semiquinone (SQ o ) formed at the quinol oxidation site (the Q o site). The g = 1.94 signal was assigned as one of the transitions originating from SQ o coupled to the Rieske cluster (FeS) by spin−spin exchange interaction. By analyzing the Q-and X-band EPR spectra of this coupled system, we estimated the 3500 MHz value of the isotropic exchange coupling constant, |J 0 |, which is strong enough to create the SQ o −FeS triplet state at the lower magnetic field typical of X-band. The radical signal centered at g = 2.0 corresponded to the population of fast-relaxing SQ o for which spin−spin exchange does not exist or is too weak to be resolved. The paramagnetic properties of this signal were strongly affected by metal centers, consistent with its location between two fast-relaxing metal centers of the Q o site (FeS and heme b L ). The detection of SQ o together with oxidized heme b L in samples containing antimycin suggests that the dominant way of generating SQ o that can be detected under nonequilibrium conditions is the transfer of an electron from heme b L to Q bound at the Q o site. Under these conditions, the amount of SQ o is comparable to the amount of stable semiquinones detected in catalytic sites of other bioenergetic enzymes.
Supporting Information
Simulations of EPR spectra, outline of catalytic cycle of cytochrome bc 1 (Figure S1), comparison of EPR spectra of wild type and +2Ala mutant (Figure S2), and references. This material is available free of charge via the Internet at http://pubs.acs.org.
Funding
This work was supported by The Wellcome Trust International Senior Research Fellowship (to A.O.).
Notes
The authors declare no competing financial interest.
■ ADDITIONAL NOTE
a Given that ∼20% of the total Rieske clusters is coupled to SQ o , the total concentration of SQ o is ∼10 μM. This means that the total concentrations of QH 2 and Q at t max are ∼75 and ∼580 μM, respectively. For these values, the K stab calculated from the formula K stab = [SQ o ]^2 × [Q]^−1 × [QH 2 ]^−1 is 10^−2.6. | 2016-05-31T19:58:12.500Z | 2013-08-13T00:00:00.000 | {
"year": 2013,
"sha1": "6c9f8aabc93bb74e48df1fd8e91e9838985a015b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/bi400624m",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c9f8aabc93bb74e48df1fd8e91e9838985a015b",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
246065889 | pes2o/s2orc | v3-fos-license | Computational complexity explains neural differences in quantifier verification
Different classes of quantifiers provably require different verification algorithms with different complexity profiles. The algorithm for proportional quantifiers, like 'most', is more complex than that for nonproportional quantifiers, like 'all' and 'three'. We tested the hypothesis that different complexity profiles affect ERP responses during sentence verification, but not during sentence comprehension. In experiment 1, participants had to determine the truth value of a sentence relative to a previously presented array of geometric objects. We observed a sentence-final negative effect of truth value, modulated by quantifier class. Proportional quantifiers elicited a sentence-internal positivity compared to nonproportional quantifiers, in line with their different verification profiles. In experiment 2, the same stimuli were shown, followed by comprehension questions instead of verification. ERP responses specific to proportional quantifiers disappeared in experiment 2, suggesting that they are only evoked in a verification task and thus reflect the verification procedure itself. Our findings demonstrate that algorithmic aspects of human language processing are subjected to the same formal constraints applicable to abstract machines.
For these and other reasons, quantifiers have been studied extensively in theoretical linguistics, psycholinguistics, and cognitive neuroscience. One common theme in the cognitive neuroscience literature is that quantifiers can give rise to different truth-conditions depending on the surrounding linguistic context (Freunberger & Nieuwland, 2016;Kounios & Holcomb, 1992;Nieuwland, 2016;Noveck & Posada, 2003;Urbach et al., 2015;Urbach & Kutas, 2010) or the order of the quantifiers in multiply quantified sentences (Dwivedi et al., 2010;McMillan et al., 2013). One empirical question is whether quantified sentences are verified and interpreted incrementally or whether instead their interpretation is delayed until the whole sentence has been parsed. Another question is whether incrementality interacts with negation or negative polarity more generally (Augurzky et al., 2020a;Freunberger & Nieuwland, 2016;Nieuwland, 2016;Urbach et al., 2015;Urbach & Kutas, 2010).
What unifies these studies is that they all use verification paradigms. As will be more thoroughly discussed in Section 1.1, different classes of quantifiers require distinct verification procedures, and these can in turn be classified differently in terms of their computational complexity. The aims of the present study are to explicitly manipulate quantifier class in a verification task, to demonstrate that computational complexity plays a role in determining which type of algorithm is implemented in the verification of different classes of quantifiers, and to gather initial empirical information on how quantifiers are verified by the brain.
Aside from being relevant to the processing of quantifiers specifically, the approach exemplified herein can help shed light on algorithmic aspects of semantic processing more generally, an area that hitherto has not received sufficient attention (Baggio, 2018, 2020). Arguably, in order to explain the capacity to comprehend and produce meaningful utterances, it is not enough to know what computation is being carried out and which brain areas are activated when over the course of the computation. In line with Marr's (1982) levels of analysis in cognitive science, algorithms are essential in mediating between the computational and implementational levels, since they are restricted both by the nature of the computation and by what kinds of processes can be carried out by the physical medium of the brain (Baggio, Stenning, & van Lambalgen, 2016; Baggio, van Lambalgen, & Hagoort, 2015; Embick & Poeppel, 2015; Lewis & Phillips, 2015). Regardless of the cognitive plausibility of truth functional semantics, verification is a well-defined computation, and knowing the impact of different verification procedures on sentence processing is, at a minimum, useful in disentangling effects of task from effects of representation, structure-building, prediction, and other processes.
Relatedly, there is a growing body of literature advocating so-called procedural semantics (Moschovakis, 2006;Muskens, 2005;Pietroski et al., 2009;Suppes, 1982;Szymanik, 2016;Tichý, 1969;van Benthem, 1986;van Lambalgen & Hamm, 2005), where the meaning of an expression is a set of algorithms computing its extension, which for declarative sentences amounts to a model-building or verification procedure. However, the theory we test and the task we employ here are focused on verification, not meaning representation as such. Consequently, the data cannot be used to argue for or against this philosophical position about the nature of meaning or its linguistic and computational instantiations.
Informally, verification algorithms go through the objects in the domain denoted by the quantified phrase sequentially in order to check whether the property predicated of these objects holds true. For Aristotelian quantifiers, this entails going through the contextually relevant objects one after the other and looking for a (counter)example of an object with(out) the predicated property; once the (counter)example is (not) found, it can be established whether the expression is true. To exemplify, when verifying a sentence like 'All the circles are red' in a domain of differently colored circles, the algorithm searches through all the circles until it finds a non-red circle, in which case the sentence is false. If a non-red circle is not found, the sentence is true. In the same vein, for numerical quantifiers, one counts the number of objects with the predicated property, and if one finds the number of objects required by the quantifier, the quantifier expression is true. As an illustration, consider the sentence 'Three of the circles are red' in a domain as above. For this sentence, the algorithm looks for red circles and counts until three red circles have been found. If three red circles are found, the algorithm outputs true, and if not, it outputs false. Because these algorithms only require paying attention to one type of object, either with or without the predicated property, these kinds of quantifiers can all be computed by a finite state automaton (FSA) and can equivalently be described in a regular language (Kleene, 1951).
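As a rough illustration of these FSA-computable procedures (our own sketch, not the verification code used in the experiments), the Python functions below verify 'all' and exact 'three' over a sequence of objects in a single pass, keeping no record of the objects themselves, only a yes/no flag or a bounded counter.

```python
def verify_all(circles_are_red):
    """'All the circles are red': scan for a counterexample (FSA-style)."""
    for is_red in circles_are_red:
        if not is_red:
            return False          # counterexample found -> false
    return True                   # no counterexample -> true

def verify_exactly_n(circles_are_red, n=3):
    """'Three of the circles are red' (exact reading): count witnesses up to n+1."""
    count = 0
    for is_red in circles_are_red:
        if is_red:
            count += 1
            if count > n:         # bounded counter: finitely many states suffice
                return False
    return count == n

print(verify_all([True, True, True]), verify_exactly_n([True, False, True, True]))
```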
To verify proportional quantifiers, by contrast, one needs to enumerate both the objects that have the predicated property and those that do not.
Once one has considered and classified all the objects, one compares the number of objects in the two sets. If the ratio of objects with the predicated property to objects without it conforms to the ratio set by the quantifier, e.g., 'more than half', the expression is true. In a domain corresponding to the examples above, to verify a sentence like 'most circles are red' the algorithm must keep track of both the red circles and the non-red circles, and if the red circles outnumber the non-red circles, the algorithm outputs true; it outputs false if there are more non-red than red circles. Such verification algorithms for proportional quantifiers cannot be computed by an FSA, and instead require a push-down automaton (PDA) with a memory component where the information about both types of objects can be stored. PDAs correspond to context-free languages (Hopcroft & Ullman, 1979, p. 116), and are thus strictly more complex than regular languages (and FSAs) according to the Chomsky hierarchy (Chomsky, 1956). For a formal description and textbook explanation of the different algorithms, see Szymanik (2016, chapter 4).
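Continuing the sketch above (again ours, and only one of several algorithms that satisfies the formal requirement), verifying 'most' needs two unbounded counters, i.e., memory that a finite state automaton does not have, because both the witnesses and the non-witnesses must be tallied before the final comparison.

```python
def verify_most(circles_are_red):
    """'Most circles are red': tally both sets, then compare (requires memory)."""
    red, non_red = 0, 0   # two unbounded counters play the role of the PDA's stack
    for is_red in circles_are_red:
        if is_red:
            red += 1
        else:
            non_red += 1
    return red > non_red

print(verify_most([True, False, True, True]))  # True: 3 red vs 1 non-red
```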
Previous research and relevant electrophysiological effects
Previous studies have shown that computational differences between quantifiers have significant cognitive effects in terms of accuracy and reaction time in picture-sentence verification tasks (Szymanik & Zajenkowski, 2009, 2011; Zajenkowski & Szymanik, 2013; Zajenkowski et al., 2014). Furthermore, fMRI studies (McMillan et al., 2005; Olm et al., 2014) have found that (pre)frontal areas associated with working memory and executive function, notably the dorsolateral prefrontal cortex, show an increase in BOLD responses for proportional relative to nonproportional quantifiers in the same type of task. Building on these findings, verification paradigm studies of patients with neurodegenerative diseases (McMillan et al., 2006; Morgan et al., 2011) have found that atrophy in these regions is associated with decreased performance with proportional, but not nonproportional quantifiers. Similar effects are also found in fMRI experiments in the mathematical cognition literature, where bilateral frontal activation is associated with processing of proportions both in adaptation and magnitude comparison paradigms (Jacob & Nieder, 2009; Mock et al., 2018, 2019). The same effects are found regardless of whether proportions are presented mathematically or verbally, i.e., by means of a natural language quantifier (Jacob & Nieder, 2009). By contrast, previous electrophysiological studies of quantifiers have either considered only one class of quantifiers in each experiment (Augurzky et al., 2017; Augurzky et al., 2019; Augurzky et al., 2020a; Augurzky et al., 2020b; Kounios & Holcomb, 1992; Noveck & Posada, 2003), or have used quantifiers from different classes as polar opposites (Freunberger & Nieuwland, 2016; Nieuwland, 2016; Urbach et al., 2015; Urbach & Kutas, 2010). To our knowledge, the only exception is a small-scale study by De Santo et al. (2019), to be discussed below, that looked at differences between Aristotelian 'some' and proportional 'most'.
Additionally, few studies have looked at sentence verification in relation to a picture. Spychalska et al. (2016, 2019) were only interested in sentence-final effects of implicature violations, and showed the picture mid-sentence, immediately before the final word. This modulated the N400 and post-N400 positivities. The authors were able to show that participants' pragmatic sensitivity had an effect on the evoked potential in trials where scalar implicatures were modulated. However, the design did not allow investigating incremental effects of verification that could originate at earlier points in the sentence. Hunt III et al. (2013) and Politzer-Ahles et al. (2013) were also interested in implicature violations, but presented pictures before each sentence. The former found graded N400 responses with a visual world paradigm for true, underinformative and false sentences: false sentences elicited the strongest effect compared to true, whereas underinformative fell in the middle. Politzer-Ahles et al. (2013) looked at effects on the quantifier. In a 2 × 2 design with 'some' and 'all' (where 'all' was true when 'some' was underinformative, and false when 'some' was strictly true), they found sustained positivities for quantificational violations with 'all', but sustained negativities for implicature violations with 'some'. Augurzky et al. (2017, 2019, 2020a, 2020b) have all addressed issues of incrementality. They found that, regardless of quantifier type (Aristotelian or proportional, in nominal form, e.g., 'all the circles', or adverbial form, e.g., 'every day'), the N400, and related truth value effects, are only found at the position where the sentence is disambiguated. When the presented linguistic material is compatible with the sentence being both true and false, N400 effects do not arise. The only exception to this pattern is the negative proportional quantifier 'less than half', for which the N400 does not arise at all (see also Nieuwland, 2016; Urbach et al., 2015; Urbach & Kutas, 2010). In these cases, they instead found an increased positivity on the quantifier, which they attributed to the semantic complexity of the negative polarity (see e.g. Deschamps et al., 2015; Just & Carpenter, 1971). In all experiments, a sustained positivity was also found after the N400 in false trials where the truth value could not be known immediately, but only when participants performed a verification task. The authors attributed this to increased attention to the picture-sentence mapping in complex contexts, and argue that it is a P600-as-P3 decision effect (Sassenhagen et al., 2014).
De Santo et al. (2019) conducted a small-scale study (N = 8) where they compared proportional and Aristotelian quantifiers in a picture-verification task in which participants saw an array of geometrical shapes while hearing a quantified sentence. The auditory stimuli were divided into subject and predicate segments, and presented with a 200 ms interval between them. In the predicate segment, they found a small difference in the N200 for true versus false for 'some' sentences, but not for 'most' sentences. Furthermore, there were no differences in the N400, and both elicited a post-N400 positivity for false versus true trials, which lasted until the end of the trial for 'most', but not for 'some'. In the subject segment, a significant positivity was found for 'most' relative to 'some', visible from around 300 ms and sustained throughout the epoch.
Summing up, previous studies have shown that truth value relative to a picture does elicit the same truth value effects as verification tasks without pictorial material, i.e., larger N400s for false than for true sentences. These N400s do not arise before the truth value of the sentence can be confidently determined, and they are followed by an increased positivity when the complexity of sentence-picture matching places greater cognitive demands on the decision process. Furthermore, sustained effects are observed earlier in the sentence, indicating that verification affects the processing of the entire sentence, and not just the final disambiguating word. This is true regardless of whether the complexity stems from the picture or the sentence.
The present study
In two ERP experiments, we sought to determine whether differences in the computational complexity of the verification algorithm for different quantifier classes are reflected online during sentence processing. Notably, proportional quantifiers should be computationally more demanding, in terms of the neural responses they elicit, than nonproportional quantifiers, here Aristotelian and numerical quantifiers (Baggio, 2018;Baggio & Bremnes, 2017). The complexity differences between proportional and nonproportional quantifiers should be reflected in real-time ERP signals in an explicit verification task, and not when participants are only asked comprehension questions.
Importantly, this question is on a higher level of abstraction than the one posed in a parallel behavioral literature, investigating specific algorithms associated with specific quantifiers (Hackl, 2009;Hunter et al., 2017;Knowlton et al., 2021;Lidz et al., 2011;Pietroski et al., 2009;Pietroski et al., 2011;Talmina et al., 2017;Tomaszewicz, 2011). The formal proofs outlined above demonstrate that, regardless of which specific algorithm is implemented to verify a proportional quantifier, the algorithm still minimally requires a push-down automaton (PDA) with a memory component to perform the task, thereby making it more computationally complex than the corresponding finite state automaton (FSA) algorithms for the nonproportional quantifiers. Relatedly, the notion of memory evoked by the automata theory is also highly abstract. The implication of specific types of memory resources employed by the brain, and therefore of specific ERP components associated with them, is not strictly predicted by the theory, and as such remains an open empirical question not addressed by the experiments presented herein.
In the present study, participants saw images of red or yellow circles and triangles, and subsequently read quantified sentences about the contents of the picture. In the first experiment, participants had to judge whether the sentence was true or false of the picture, and in the second, they had to answer comprehension questions about the picture, the sentence, or both.
We expect false sentences to elicit a sentence-final N400 type of response. If that is observed, we can reasonably conclude that the sentence has been processed and understood. Furthermore, if effects of truth value are indeed detected, we can also infer that, at that stage, the verification algorithm has already been executed. Possible ERP differences resulting from algorithmic complexity must then be observed prior to the onset of the truth value effect. To establish that these effects are related to the verification procedure, we must rule out that these differences stem from other sources, in particular comprehension processes. Thus, if different ERP effects between quantifier classes are observed only in experiment 1 (verification) but not in experiment 2 (comprehension), then they can be hypothetically considered as candidate neural signatures of the algorithmic processes posited by the formal theory.
Design
We used a 3 × 2 design with the factors Quantifier Class (3 levels: Aristotelian, Numerical, and Proportional) and Truth Value (2 levels: True and False). Participants performed a picture-sentence verification task for each trial. To prevent eye movements that would affect the EEG recording, participants could not look at the picture while the sentence was presented and verified. Instead, a picture was shown before each sentence, at the beginning of each trial. To ensure that participants could memorize the picture well enough, and that memory encoding or recall of the picture as such would not interfere with deployment of memory resources for verification, the same picture was used within a block. Additionally, participants had the opportunity to study the picture as long as they wanted at the beginning of each block. Details on stimulus presentation, block design, and task are given below.
In this experimental set-up, all quantifier classes require some form of memory in order for participants to perform the task. However, the automata theory shows that verification of proportional quantifiers further requires manipulation of items in memory, specifically comparing two sets of objects: this requires an additional memory component. This is predicted to further increase memory load, as compared to the other two classes.
Participants
Thirty right-handed native Norwegian speakers (13 female; mean age 21.53, SD = 2.58; age range 18-27), with normal or corrected to normal vision and no psychiatric or neurological disorders, were recruited from the local student community. Twenty-four participants (11 female; mean age 21.65, SD = 2.73; age range 18-27) met the inclusion criteria of having an average of at least 20 artifact-free trials per condition, and were included in the final analysis. All participants gave written informed consent and were compensated with a voucher. The study was approved by The Norwegian Centre for Research Data (NSD; project nr. 455334).
Materials
Twelve images consisting of clusters of 2-5 red and yellow circles and triangles in a 2 × 2 grid were constructed. The colors red and yellow were chosen because their color words both end in consonants in Norwegian ('rød' and 'gul', respectively), and preference for plural '-e' congruence marking on color words ending in vowels varies within the population (Faarlund et al., 1997, p. 370). The location, number, and color of the shapes were varied pseudorandomly. Importantly, we chose to vary both shape and color to guarantee that participants could not know the truth value of the sentence before the final word. Previous experiments with similar set-ups (e.g. Brodbeck et al., 2016) have all emphasized the need for simple pictures from which quantity information can be rapidly extracted to minimize memory encoding and subsequent retrieval. This is particularly important since quantifier class is expected to modulate memory, and such effects would be hard to detect if memory load was already high in all conditions. Note that the hypothesis above, derived from the formal proofs, is that proportional quantifiers are more difficult and require a memory component regardless of the cardinality of the set of objects: there is no strategy that can simplify the task.
To construct the sentences, two quantifiers from each quantifier class were chosen. Consequently, 6 different quantifiers were used in the stimulus set. In order to maintain syntactic identity between sentences, only quantifiers that take a plural definite complement were chosen. Numerical quantifiers were 'tre av' (three of) and 'fem av' (five of), and the Aristotelian quantifiers were 'alle' (all) and 'ingen av' ('none of'). 'Some' was not chosen because it affords two interpretations: a logico-semantic at least one reading and a pragmatic some but not all reading (e.g. Levinson, 1983, p. 134). For proportional quantifiers, 'de fleste' (most) and 'færrest av' (the fewest) were chosen. Downward monotone quantifiers are less frequent than upward monotone (Szymanik & Thorne, 2017), but since we wanted the two quantifiers to have complementary truth values, we decided to include 'færrest av'. Another issue with the proportional quantifiers is that 'de fleste', like 'most' (e.g. Hackl, 2009), has both a proportional and a superlative/comparative meaning, whereas 'færrest av' does not. However, since the two meanings are denotationally equivalent in binary contexts, when there are only two alternatives, this issue was ignored. It is also important to note that 'færrest av' (in contrast to its English translation) takes a definite complement, and thus behaves identically to all the other quantifiers with respect to predicating a property of a set of objects. For an overview of the semantics of quantity adjectives in Germanic languages, and in particular the differences between the Scandinavian languages and English with respect to definiteness, see Coppock (2019).
All sentences had the form quantifier + shape noun + copula + color adjective; see Table 1. Each quantifier was presented equally many times with all shape and color combinations in a total of 288 sentences (48 per quantifier and 96 per quantifier class). The sentences were counterbalanced according to truth value between each of twelve blocks with 24 trials each. Because the image remained the same within a block, some sentences occurred more frequently in some blocks than in others, and the number of true sentences per block differed slightly (range: 9-14; median: 12.5), but true and false sentences were evenly balanced across the experiment overall. The order of the sentences was randomized within each block. Further, we created 2 randomizations of the order of the blocks, and these were run both forward and backward, resulting in 4 different orders of the blocks, to ensure that training effects were distributed equally across trials: the imbalance of sentence types in the different blocks was counterbalanced by participants encountering them at different stages of the experiment in random order.
All pictures and sentences can be found in the supplementary material.
Procedure
After reading the information sheets and signing the consent forms, participants were seated in front of a computer screen in a dimly lit, sound attenuated, and electrically shielded EEG booth. They were instructed to judge whether each sentence was true or false of the picture seen before each trial by using two predefined response buttons (Fig. 1). Which button indicated true or false was counterbalanced between blocks, and participants were informed of this by two squares with the words 'sant' (true) and 'usant' (false) on horizontally opposing sides of the screen, with the alternatives on the side of the screen corresponding to the relative placement of the response keys. This information was provided both at the beginning of the block and every time they had to make a truth value judgement. As numerical quantifier interpretation is known to vary between participants, they were asked to interpret these exactly (e.g., three and no more than three) rather than as a lower bound (e.g., at least three). It was especially important to ensure that all participants interpreted the sentences in the same way, because the two readings have been shown to give rise to different ERP profiles (Spychalska et al., 2019). The choice of the exact reading was made on the grounds that this reading is preferred by the majority of people (Shetreet et al., 2014;Spychalska et al., 2019). Finally, they were told not to blink or move while reading the sentences, and that any necessary such activity could take place only while looking at the picture or when they saw a fixation cross.
At the beginning of each block, after the indication of which buttons corresponded to true and false was provided, participants saw the picture that would be presented before each trial in that block. They were advised to study the picture carefully and press a button when they were ready to begin. Each trial began with the presentation of the picture for 4 s. The picture was followed by a 500 ms fixation cross and 500 ms of blank screen. Subsequently, the sentence was presented one word at a time for 400 ms with a 400 ms blank screen onset delay. The quantifier was always presented as one expression and on a single screen frame, even if it was not a single syntactic word. This was done in order to make the length of every trial identical, which was necessary to be able to compare verification procedures. After the sentence had been presented, the same fixation cross and blank screen followed, before participants had to press a button to indicate whether the sentence was true or false. Once they had responded, or if they had not responded for 4000 ms, a new trial started immediately. When they had completed all 24 trials in the block, the experiment was paused and the participant had to press a button to begin the next block. Consequently, participants were free to determine the length of the break themselves. Each experimental session lasted between 1:10 and 1:20 hours, including breaks.
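The trial timing can be summarised as a small runnable skeleton. This is not the presentation software used in the study; show() below is a hypothetical stand-in that merely waits for the stated duration, and response collection is only indicated by a comment.

```python
import time

def show(label, duration_s):
    """Hypothetical stand-in for a stimulus presentation call: print the screen
    content and wait for its duration."""
    print(f"{label:<32s} {duration_s * 1000:6.0f} ms")
    time.sleep(duration_s)

def run_trial(picture, words):
    """Timing skeleton of one trial, following the Procedure description."""
    show(f"picture: {picture}", 4.0)        # picture shown for 4 s
    show("fixation cross", 0.5)             # 500 ms fixation
    show("blank screen", 0.5)               # 500 ms blank
    for word in words:                      # word-by-word presentation
        show(f"word: {word}", 0.4)          # each word (or quantifier expression) for 400 ms
        show("blank screen", 0.4)           # followed by a 400 ms blank
    show("fixation cross", 0.5)
    show("blank screen", 0.5)
    show("response prompt (sant / usant)", 0.0)
    # here the experiment would wait up to 4000 ms for a true/false button press

if __name__ == "__main__":
    run_trial("image_01", ["De fleste", "sirklene", "er", "røde"])
```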
Data analysis
Accuracy and reaction time data were collected. The principal function of accuracy in this experiment was to ensure that participants were actually correctly verifying the sentences. Reaction times were primarily gathered in order to compare our study to previous behavioral experiments, but as there was a 1400 ms delay between the presentation of the final word and the response due to the fixation cross, it was acknowledged that they would not be directly comparable. The accuracy and reaction time data were subjected to mixed effects logistic and linear regression, respectively, using the glmer and lmer functions of the lme4 package (Bates et al., 2015) in R. Quantifier class and truth value were fixed effects and the models had random intercepts by participant. We did not include random intercepts by item, since aside from the experimental manipulation (i.e. replacing the quantifier) the experimental stimuli were identical. As a consequence, the variance between items is not random, but is captured by a fixed effect. For both fixed effects, model comparison was performed.
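The models were fitted in R with lme4; purely as an illustration, a roughly analogous linear mixed model for the reaction-time data could be set up in Python with statsmodels as below. The data file and column names are hypothetical, and the accuracy model (a mixed logistic regression in the original analysis) is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial, with columns
# 'participant', 'quant_class' (Aristotelian / numerical / proportional),
# 'truth' (true / false) and 'rt' (reaction time in ms).
trials = pd.read_csv("exp1_trials.csv")

# Linear mixed model for reaction times with random intercepts by participant,
# mirroring the fixed-effect structure described in the text.
full = smf.mixedlm("rt ~ quant_class + truth", data=trials,
                   groups=trials["participant"]).fit(reml=False)
print(full.summary())

# Model comparison: refit without one fixed effect and compare log-likelihoods,
# analogous to the ANOVA-based comparison reported for the lme4 models.
no_class = smf.mixedlm("rt ~ truth", data=trials,
                       groups=trials["participant"]).fit(reml=False)
lr_class = 2 * (full.llf - no_class.llf)
print("likelihood-ratio statistic for quantifier class:", lr_class)
```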
EEG data were analyzed using FieldTrip (Oostenveld et al., 2011). At the quantifier, at the noun completing the noun phrase, and at the sentence-final adjective, 1000 ms epochs were extracted, including a 200 ms prestimulus interval that was used for baseline correction, and re-referenced to the averaged mastoids. Using automated artifact rejection, any trial in which one or more electrodes exceeded ±150 μV relative to baseline was rejected. Additionally, trials including eye movements were excluded by thresholding the z-transformed values of the preprocessed raw data from Fp1 and Fp2 in the 1-15 Hz range. The remaining trials were subsequently low-pass filtered at 30 Hz. Participants who had an average of fewer than 20 out of 24 artifact-free trials per condition were excluded from the analysis. Six participants did not meet this criterion.
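The preprocessing was carried out in FieldTrip; a rough NumPy/SciPy sketch of the two rejection criteria is given below for illustration. The z-score threshold for the eye-movement criterion is not reported in the paper and is a placeholder assumption, as is the exact normalisation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def artifact_mask(epochs, ch_names, sfreq, amp_thresh_uv=150.0, z_thresh=4.0):
    """Return a boolean mask of epochs to reject, loosely following the text:
    (1) any electrode exceeding +/-150 microvolts relative to baseline, and
    (2) eye movements flagged via z-scored, 1-15 Hz bandpassed Fp1/Fp2 data.
    `epochs` has shape (n_epochs, n_channels, n_samples), in microvolts and
    already baseline-corrected; z_thresh is an assumed placeholder value."""
    # (1) absolute amplitude criterion
    bad_amplitude = np.any(np.abs(epochs) > amp_thresh_uv, axis=(1, 2))

    # (2) eye-movement criterion on the frontal channels
    b, a = butter(4, [1.0, 15.0], btype="bandpass", fs=sfreq)
    frontal = [ch_names.index(ch) for ch in ("Fp1", "Fp2")]
    filtered = filtfilt(b, a, epochs[:, frontal, :], axis=-1)
    z = (filtered - filtered.mean()) / filtered.std()
    bad_eog = np.any(np.abs(z) > z_thresh, axis=(1, 2))

    return bad_amplitude | bad_eog
```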
ERPs were computed for each sentence segment by averaging all trials in one condition, that is, a sentence segment by quantifier by truth value. The same procedure was used to compute ERPs for collapsed conditions: sentence segment by quantifier class, truth value at the final word, and quantifier class by truth value at the final word. Numerical and Aristotelian quantifiers were computed both as individual classes and as a collapsed class. Because the quantifier was presented in a single frame, quantifiers differed in length, frequency, and, to a certain extent, morphology and syntax: any differences here might be caused by small saccadic eye movements, frequency, or ease of comprehension. In order to avoid these confounds, we only analyzed the parts of the sentence where participants were presented with identical linguistic material, so that the only difference between them was based on the algorithm being computed.
The ERPs were analyzed using non-parametric cluster-based statistics (Maris & Oostenveld, 2007), with alpha thresholds at 0.05 for both the sample and the cluster level. To assess differences between conditions, each channel-time pair (or sample) in two conditions was compared by means of a t-test. If the results of this test were significant at the 0.05 alpha level in at least 2 neighbouring channels and 2 neighbouring time points, these channel-time pairs were made into a cluster, and the t-values of all channel-time pairs in the cluster were summed. To assess statistical significance at the cluster level, p-values were estimated using Monte Carlo simulations. All participant-level channel-time pairs across conditions were collected into a single set, which was then randomly partitioned into two subsets; this procedure was repeated 1000 times. The p-value was estimated as the proportion of partitions in which the test statistic was larger than in the observed data. In each case, the output is a set of (possibly empty) spatio-temporal clusters in which a pair of conditions are significantly different: we report the summed t-value (T_sum), size (S), and estimated p-value of the highest-ranked clusters. For additional details, see Maris and Oostenveld (2007).
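For illustration, a deliberately simplified, single-channel version of this procedure is sketched below: paired t-tests at each time point, clustering of temporally adjacent supra-threshold samples, summed t-values per cluster, and Monte Carlo p-values from sign-flip permutations of the participant-level difference waves. The actual analysis used FieldTrip's implementation, which additionally clusters over channels and enforces the neighbouring-channel requirement; a comparable off-the-shelf routine in Python is mne.stats.permutation_cluster_1samp_test.

```python
import numpy as np
from scipy import stats

def cluster_permutation_test(cond_a, cond_b, alpha=0.05, n_perm=1000, seed=0):
    """Single-channel sketch of the Maris & Oostenveld (2007) procedure.
    cond_a, cond_b: arrays of shape (n_participants, n_times), one ERP per
    participant and condition. Returns (summed t-value, Monte Carlo p-value)
    pairs for the observed clusters, ranked by absolute cluster statistic."""
    rng = np.random.default_rng(seed)
    diffs = cond_a - cond_b
    n_subj = diffs.shape[0]
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n_subj - 1)

    def cluster_sums(d):
        """Sum t-values over runs of adjacent, same-signed supra-threshold samples."""
        t_vals, _ = stats.ttest_1samp(d, 0.0, axis=0)
        sums, current = [], 0.0
        for t in t_vals:
            if abs(t) > t_crit and (current == 0.0 or np.sign(t) == np.sign(current)):
                current += t
            else:
                if current != 0.0:
                    sums.append(current)
                current = t if abs(t) > t_crit else 0.0
        if current != 0.0:
            sums.append(current)
        return sums

    observed = cluster_sums(diffs)
    if not observed:
        return []

    # Null distribution of the largest |summed t| under random sign flips,
    # which exchange the condition labels within participants.
    null_max = np.zeros(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
        null_max[i] = max((abs(s) for s in cluster_sums(diffs * signs)), default=0.0)

    p_values = [(np.sum(null_max >= abs(s)) + 1) / (n_perm + 1) for s in observed]
    return sorted(zip(observed, p_values), key=lambda pair: -abs(pair[0]))
```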
Behavioral results
Overall accuracy was high (mean = 0.945, SD = 0.229), and even within groups all means were above 0.9 (see Table 2 for descriptive statistics). When fitted to a mixed effects logistic regression model with accuracy as a binomial dependent variable and random intercepts by participant (see Table 3), β estimates revealed that participants were significantly (p < 0.0001) less accurate with both proportional and numerical quantifiers relative to Aristotelian quantifiers. The effect of truth value was not significant (p = 0.9). We then re-fitted the models without one of the fixed effects, and we compared the re-fitted models to the full models by means of an ANOVA. Removing quantifier class led to a significantly poorer model (χ2 = 103.17, p < 0.0001), whereas removing the effect of truth value did not significantly impact model fit.
Response times were fast both in general (mean = 659.8 ms, SD = 566.6) and across quantifier classes (see Table 2). A mixed effects linear regression model was fitted to the data with random intercepts by participants (see Table 4). It revealed a significant increase in reaction time for numerical (p = 0.005) and proportional (p < 0.0001) quantifiers relative to Aristotelian quantifiers. True sentences also elicited significantly (p = 0.035) faster responses than false sentences. Results of the same type of model comparison as for the logistic regression above indicated that both quantifier class (χ2 = 23.34, p < 0.0001) and truth value (χ2 = 5.194, p = 0.023) contributed to explaining the variance in reaction time.
EEG results
Fig. 1. Structure of a single trial from experiment 1. Trial structure was the same in experiment 2, except that the true/false (sann/usann) screen was replaced by a comprehension question (4000 ms) followed by a maximum 4000 ms interval within which the participant could produce an answer.
Sentence-final effects: adjective.
We first consider ERP effects at the sentence-final adjective. This is the earliest point in time at which participants can determine with confidence whether a sentence is true or false. We therefore expect that neural responses at the adjective will show sensitivity to truth value. Overall, false trials show a more negative-going complex ERP response than true trials, largely similar across quantifier classes (Fig. 2). Statistical analyses of ERP effects in the comparison between false and true trials, collapsing across quantifier classes, show a large negative cluster between 200 and 500 ms from adjective onset with a broad scalp distribution (first-ranked cluster, NEG1: T_sum = −28189.93, S = 5631, p < 0.001) and a smaller negative cluster between 600 and 800 ms (second-ranked cluster, NEG2: T_sum = −6246.91, S = 2123, p = 0.019; Fig. 3). The effect is also present for each quantifier class taken separately (Aristotelian, first-ranked cluster, NEG1: T_sum = −41153.75, S = 10532, p < 0.001; numerical, first-ranked cluster, NEG1: T_sum = −15925.43, S = 4123, p = 0.002; proportional, first-ranked cluster, NEG1: T_sum = −6389.83, S = 2136, p = 0.012; Fig. 3). These were the only clusters in which the associated Monte Carlo p-values were below the α = 0.05 threshold. The decreasing cluster sizes (S) and cluster-level T_sum statistics from Aristotelian to numerical to proportional indicate that the size of the truth value effect in ERPs varies accordingly, with the largest effect observed for Aristotelian quantifiers and the weakest for proportional quantifiers. An inspection of ERP waveforms (Fig. 2) provides further information on the nature of these effects and their possible underlying physiology. ERP waveforms do not differ between conditions in the first 200 ms after adjective onset, up to and including the N100-P200 complex. From about 200 ms, waveforms differ qualitatively between false and true trials, and these qualitative differences are modulated by quantifier class. All true trials present a clear P300 component, particularly visible over posterior channels (Fig. 2, black lines). The P300 component appears largest for true trials with Aristotelian quantifiers and smallest for true trials with proportional quantifiers, with numerical quantifiers falling in between. These differences persist throughout the epoch (Fig. 2). In direct comparisons between true trials across quantifier classes, we only found a marginal effect for the first-ranked cluster in the contrast between Aristotelian and proportional quantifiers (T_sum = 2081.42, S = 806, p = 0.072), and no effects for Aristotelian vs numerical or numerical vs proportional. These data indicate that verification strategies at the sentence-final word for true trials do not differ, in terms of underlying physiology, between quantifier classes. ERP waveforms appear qualitatively different in false trials. All false trials present a visible rising flank of the N400 component (Fig. 2, red lines) or possibly of an N200-N400 complex.
After 300 ms from adjective onset, waveforms from false trials show a positive-going deflection: this coincides temporally with the P300 in true trials, suggesting that a P300 wave may overlap with the peak and the falling flank of the N400 component, rendering its characteristic features less visible here. Importantly, from around 300 ms, the waveforms for false trials diverge between the quantifier classes. They pattern together in false trials with Aristotelian and numerical quantifiers, showing more negative voltage values overall and no differences between them (no positive or negative clusters with a significant effect). Differences were found between Aristotelian and proportional quantifiers (first-ranked cluster: T_sum = −5013.65, S = 1635, p = 0.015) and between numerical and proportional quantifiers (first-ranked cluster: T_sum = −3969.17, S = 1394, p = 0.034), indicating that proportional quantifiers are associated with a more positive-going deflection in ERPs than both Aristotelian and numerical quantifiers. These results suggest that verification strategies at the sentence-final word for false trials differ, in terms of underlying physiology, between proportional quantifiers and Aristotelian and numerical quantifiers.
Sentence-internal effects: noun.
We now consider ERP effects at the sentence-internal noun position. This is the earliest point in time at which participants can effectively initiate the verification process, recalling from memory the content of the picture, storing in memory the content of the sentence, and integrating the two. We therefore expect that neural responses at the noun will show sensitivity to the computational complexity of the different quantifier classes, with proportional quantifiers resulting in qualitatively different ERP responses than Aristotelian and numerical quantifiers. At the noun, we observed diverging ERP responses between the quantifier classes following the N100-P200 complex. Numerical quantifiers exhibit a more negative-going ERP response throughout the epoch, proportional quantifiers elicit a more positive-going response, and Aristotelian quantifiers tend to fall between the two (Fig. 4). Direct comparisons between numerical and Aristotelian quantifiers reveal only a marginal ERP effect in one small negative cluster (first-ranked cluster, NEG1: T_sum = −2193.62, S = 814, p = 0.081; Fig. 5). In contrast, we found larger positive clusters in the comparisons between proportional and Aristotelian quantifiers (first-ranked cluster, POS1: T_sum = 3183.25, S = 1237, p = 0.041), proportional vs numerical quantifiers (first-ranked cluster, POS1: T_sum = 3231.82, S = 1177, p = 0.040), and proportional vs numerical and Aristotelian collapsed (first-ranked cluster, POS1: T_sum = 5888.53, S = 2225, p = 0.019; Fig. 5). This positive ERP shift, driven by proportional quantifiers relative to the two other classes, is largest after 600 ms from noun onset, both in terms of voltage values and statistically. Its temporal profile and posterior distribution (Fig. 5, contour plots of sample-level statistics) appear more consistent with a P600 effect than with earlier positivities, such as the P300.
Fig. 4. Grand-average ERP waveforms from 9 selected channels, time locked to the onset of the sentence-internal noun (0 ms) in experiment 1. Trials from nouns following Aristotelian quantifiers are shown in black, blue is numerical quantifiers, and red is proportional quantifiers.
Fig. 5. ERP effects of pairwise comparisons between quantifier classes, time locked to the onset of the sentence-internal noun (0 ms) in experiment 1. Raw effect waveforms (left column) are displayed along with contour maps of sample-level statistics (middle column) and raster plots of cluster-level statistics (right column). Clusters with an associated p-value below the specified threshold (α = 0.05) are shown in yellow shades; all other clusters (gray shades) were statistically not significant.
Interim discussion
The sentence-final negative effect of truth value revealed that participants are correctly performing the task. The negativity was also modulated by Quantifier Class, such that the largest effect was found for Aristotelian and the smallest for proportional, with numerical quantifiers in between. Furthermore, while there were no significant differences between the classes in true trials, proportional quantifiers differed from the other two in false trials. Notably, we observed that, from around 300 ms, proportional quantifiers are more positive than Aristotelian and numerical. These results are comparable to the effects from Augurzky et al. (2017) in that the negative effect is somewhat earlier than a standard N400, and the condition that is predicted to be more complex gives rise to a post-N400 positivity. Since a truth value effect presupposes that a verification procedure has been performed, we have no reason to believe that these effects reflect the verification procedure while it is taking place. Rather, they are more likely an effect of verification complexity on subsequent cognitive processes, such as task-relevant attentional or decision processes (Augurzky et al., 2017;Sassenhagen et al., 2014).
If participants have already established sentence truth value at the final word, as our evidence indicates, then algorithmic verification differences should be observed earlier in the sentence. Indeed, we found that proportional quantifiers differed significantly from the other two classes, showing a broadly distributed positivity. The effect was largest for proportional quantifiers relative to the other two classes collapsed, but is also clearly observed between proportional quantifiers and Aristotelian and numerical individually. This effect appears consistent with a P600, both spatially and temporally. Because the ERP is recorded from the onset of the noun, where the participants were presented with identical linguistic material, the effect cannot stem from the noun itself. This leaves three options: it can be (1) an attentional or decision effect of the same kind observed at the final word; (2) an effect of the syntactosemantic combinatory procedure, such as building a compositional representation of the noun phrase or the sentence as a whole (Fritz & Baggio, 2020); or (3) an effect reflecting algorithmic verification differences between proportional and nonproportional quantifiers. It seems unlikely that participants would initiate decision making processes this early in the sentence: recall that such effects have previously only been observed when truth value can be unambiguously determined, and this only happens at the final word in the current set-up. Regarding (2), it has been claimed (Hackl, 2009) that 'most' is syntactically derived from its root adjective form 'many' and superlative morphology, thus creating a more complex noun phrase than the other classes, which both contain proper determiners rather than derived adjectives. If this is the case, then this could be a P600 integration or composition effect (Baggio, 2021; Brouwer & Hoeks, 2013). However, it is also consistent in distribution with the LPC, a centro-parietal positivity that peaks around 600 ms, associated with decision-relevant memory retrieval (Hubbard et al., 2019; Ratcliff et al., 2016; Rugg et al., 1998; Yang et al., 2019). This would be in line with the predictions of the automata theory, where the difference between the proportional and nonproportional quantifiers is precisely a memory process.
Despite these arguments, it is not possible to assess which of the above interpretations is the correct one just on the basis of data from experiment 1. We therefore conducted a second experiment, without an explicit verification task, to determine whether the effects persist when verification is no longer required, but participants still have to view the images and read the sentences. Importantly, if the positivity on the noun is a syntactosemantic combinatory effect, it should still be seen when participants read and comprehend the sentences. By contrast, the post-N400 decision effect on false sentence completions with proportional quantifiers should disappear, as the complexity of the task remains constant between all three quantifier classes, and so no additional attentional demands are placed on participants.
Participants
Twenty-seven (14 female; mean age 23.53, SD = 3.55; age range 19-34) participants were recruited from the same student community as in experiment 1. Twenty-four participants (12 female; mean age 23.21, SD = 3.46; age range 19-34) met the inclusion criteria and were included in the final analysis. All participants gave written informed consent and were compensated with a voucher. The study was approved by The Norwegian Centre for Research Data (NSD; project nr. 455334).
Materials
The picture and sentence stimuli were identical to those in experiment 1, as was the order of presentation both within and across blocks. In addition, we constructed comprehension questions that concerned either the picture, the sentence, or both. To ensure that participants paid equal attention to both types of stimuli, half the questions were about both the sentence and the picture, and the other half contained an even number of questions about either. The sentence questions were of the form 'Er setninga en påstand om (quantifier/adjective) shape?' (Is the sentence a claim about (quantifier/adjective) shape?), whereas the questions about the picture asked 'Er det adjective shape på bildet?' (Are there adjective shape in the picture?). The questions about both were of the same form as the picture questions, but with the possible omission of the adjective: 'Er det (adjective) shape både på bildet og i setninga?' (Are there (adjective) shape both in the picture and in the sentence?). Importantly, the questions about the picture and about both the picture and the sentence could not contain reference to the quantifier, as this could trigger explicit verification of the sentences. This meant that there was more variation in the questions about the sentence than in the other two categories. The questions were balanced according to truth value and distributed evenly across the quantifier classes. However, as in experiment 1, due to the nature of the images, it was not possible to balance the truth value within each block completely, nor to avoid repeating the same questions multiple times for some images. All questions can be found in the supplementary material.
Procedure
The procedure replicated as much as possible the procedure in experiment 1. Participants sat in the same booth and used the same response buttons, received the same information at the beginning of each block, and had the same opportunity to take breaks. They also received the same instructions prior to the experiment, but the explanation of the task necessarily differed. The block and trial structure was essentially the same except that, after the sentence was presented, participants saw the comprehension question for 4000 ms, before they had to answer it with the same time-constraint as in experiment 1. This meant that the experimental sessions took approximately 20 min longer.
EEG-recording
There were no differences in EEG recording between experiments.
Data analysis
EEG data were processed and analyzed in the same fashion as in experiment 1. For the behavioral data, we constructed mixed effects logistic and linear regression models comparable to those in experiment 1, for the accuracy and reaction time data, respectively. The only difference was that, in addition to quantifier class and sentence truth value, the question type (about the picture, the sentence, or both) and whether the question required an affirmative or negative answer were added as fixed effects.
Behavioral results
Accuracy was also high in this experiment (mean = 0.934, SD = 0.247). A mixed effects logistic regression model with accuracy as a binomial dependent variable, random intercepts by participant, and question type, question truth value, quantifier class and sentence truth value as fixed effects was fitted to the data. The model revealed that participants were significantly (p < 0.0001) more accurate with questions that only concerned the picture, relative to questions about both picture and sentence, and that they were marginally more accurate (p = 0.038) when the sentence contained a numerical compared to an Aristotelian quantifier. All other β-estimates were not significant.
Participants also responded quickly to the comprehension questions (mean = 654.9 ms, SD = 569.8). We fitted a mixed effects linear regression with the same parameters as in the logistic regression above to the data. Reaction times were lower when the question only concerned the picture (p < 0.0001) or the sentence (p = 0.003) compared to both, when the question required an affirmative as opposed to a negative answer (p = 0.036), and when the sentence contained a proportional rather than an Aristotelian quantifier (p < 0.001) (see Tables 5-7).
Sentence-final effects: adjective.
In experiment 2 there is no explicit verification task. Participants had to answer questions about the picture or the sentence, and establishing the truth value of the latter was never required to perform the task. However, participants might still covertly track the truth and falsehood of sentences, to the extent that cognitive resources, not expended in the main comprehension task, are available for implicit verification. If covert truth tracking indeed occurs, ERP signals at the sentence-final adjective should still show sensitivity to truth value. Overall, collapsing over the quantifier classes, false trials result in more negative-going ERPs at the adjective than true trials. This negative cluster shows a similar temporal and spatial distribution to its counterpart in experiment 1, but is weaker statistically (first-ranked cluster, NEG1: T_sum = −5204.02, S = 1860, p = 0.011; Figs. 5 and 6). Moreover, and most importantly, it is only observed in the comparisons between false and true trials in Aristotelian (first-ranked cluster, NEG1: T_sum = −2948.82, S = 1119, p = 0.040) and numerical quantifiers (first-ranked cluster, NEG1: T_sum = −3741.65, S = 1340, p = 0.018), but not in proportional quantifiers, where the effect is absent (the three highest-ranked clusters are all positive clusters, but none has an associated p-value below threshold; Fig. 7). The negativity observed in experiment 1 in the contrast between false and true trials with proportional quantifiers is not elicited here. These results indicate that implicit verification, or covert tracking of the truth and falsehood of sentences, may still occur in either true or false trials, or both, with Aristotelian and numerical quantifiers, but it does not occur for proportional quantifiers.
Sentence-internal effects: Noun.
ERP results from the sentence-final word in experiment 2 suggest that, in a comprehension task that does not require verification, participants do not compute the truth values of sentences containing proportional quantifiers. If this is correct, and if the positivity observed at the sentence-internal noun position for proportional quantifiers in experiment 1 reflects the complexity of the verification process, then that effect should disappear in the same contrast in experiment 2. That is indeed what we found at the noun position. As in experiment 1, ERP waveforms appear more negative for numerical than for Aristotelian quantifiers (Fig. 8); however, there were no significant negative or positive clusters for that comparison specifically (Fig. 9). Contrary to experiment 1, where proportional quantifiers resulted in positive effects compared to both Aristotelian and numerical quantifiers, such effects are absent in experiment 2: there are no visible waveform differences between proportional quantifiers and the other two classes (Fig. 8) and no negative or positive clusters with associated p-values below the specified threshold (Fig. 9). These results indicate that implicit verification of sentences containing proportional quantifiers does not happen in experiment 2 (missing sentence-final effect of truth value) and is not even attempted (missing sentence-internal effect of quantifier class). These conclusions support the hypothesis that the positivities observed at the noun and at the adjective in experiment 1 reflect the computational complexity of the verification process for sentences containing proportional quantifiers.
Interim discussion
We observed sentence-final negative effects for false versus true completions for Aristotelian and numerical quantifiers, albeit smaller and statistically less robust than in experiment 1. By contrast, the negativity on proportional quantifiers disappeared completely. The data therefore suggest that with Aristotelian and numerical quantifiers, participants are still able to track truth value even when not explicitly verifying the sentence, but they are not with proportional quantifiers. This may be explained by the algorithm for proportional quantifier verification being too complex to deploy when it is not strictly task relevant: the working memory resources required by the proportional verification algorithm are not available because they are allocated to the main task. This is further evidenced by the absence of sentence-internal effects at the noun. An interesting side effect of participants not verifying sentences with proportional quantifiers is that it makes them faster at responding to the comprehension question. Since the more complex verification procedure is not performed at all, participants have more cognitive resources to devote to the experimental task when reading proportional quantifier sentences than they do when they are simultaneously reading and verifying nonproportional sentences. This post hoc explanation of the decrease in reaction time also supports our interpretation of the cognitive process manifested in the evoked potentials. Finally, as predicted, the post-N400 positivity for proportional quantifiers in false trials also disappeared, further strengthening the view that this positivity is an attentional or decision effect.
General discussion
Overall, we found that computational complexity, as measured by algorithmic verification differences, impacts neural activity during sentence processing. When participants had to perform an explicit picture-sentence verification task (experiment 1), we found a negativity in the N200-N400 time window at the final word. The effect of false versus true trials is larger for Aristotelian (e.g. 'all') than for proportional quantifiers (e.g. 'most'), while numerical quantifiers (e.g. 'three of') fall in between: this finding is beyond the predictive scope of the automata theory of quantifier verification, but it shows that different quantifier classes have specific processing consequences at various stages of verification. With a comprehension question task (experiment 2), the truth value effect is attenuated for Aristotelian and numerical quantifiers, and disappears completely for proportional quantifiers. Additionally, proportional quantifiers were significantly more positive than the other two classes, both individually and collapsed, on the noun completing the subject noun phrase in the verification experiment. No such effect was found in the comprehension experiment, indicating that the effect is due to verification and not to syntactosemantic differences relating to composition as per Hackl (2009).
These ERP effects can be interpreted in light of the previous literature. Most saliently, this is the same pattern observed with auditory stimuli over pictorial contexts by De Santo et al. (2019). They found a positivity for 'most' relative to 'some' on the subject segment, and a larger positivity in false trials on the predicate segment. Importantly, we also observed differences in the size of the N200-N400 negativity, which De Santo et al. (2019) did not. This could be a power issue, as their study only had a small number of participants, but could also be due to the mode of presentation: their participants could verify the sentence while looking at the picture, whereas our participants had to recall the image from memory. Additionally, serial visual presentation of sentences is known to elicit different neural responses than auditory stimuli (Freunberger & Nieuwland, 2016). Since no other studies have compared different classes of quantifiers using EEG, a graded N400 effect could not have been observed before. Particularly worthy of consideration is the fact that negative quantifiers (like 'the fewest' in this study) have been found not to give rise to N400 effects (Augurzky et al., 2020a; Nieuwland, 2016; Urbach et al., 2015; Urbach & Kutas, 2010). One possibility is therefore that this is what is driving the reduced N200-N400 effect for proportional quantifiers, as this class contained both a positive and a negative quantifier. However, even if this is the case, the fact that the N200-N400 effect is graded, i.e., largest for Aristotelian, smaller for numerical, and smaller yet for proportional, remains to be explained.
Fig. 7. ERP effects of truth value (False-True) across quantifier classes, time locked to the onset of the sentence-final adjective (0 ms) in experiment 2. Raw effect waveforms (left column) are displayed along with contour maps of sample-level statistics (middle column) and raster plots of cluster-level statistics (right column). Clusters with an associated p-value below the specified threshold (α = 0.05) are shown in blue shades; all other clusters (gray shades) were statistically not significant.
Another issue with the observed N200-N400 negativity is its latency. As in Augurzky et al. (2017) (see also Knoeferle et al., 2011; Vissers et al., 2008), the negativity observed for false trials is earlier than traditional N400s. It is therefore possible that it is an N2b (D'Arcy et al., 2000; Wassenaar & Hagoort, 2007), reflecting a mismatch between the active representation of the picture and the sentence. Early onset N400 effects have been demonstrated when semantic expectancy is very high (Van Petten et al., 1999), such as in the context of a picture (Vissers et al., 2008). Since both of these interpretations require the construction of a model or mental representation of the picture and the sentence, the argument made in the following does not rely on which of these interpretations turns out to be correct.
More generally, our results are consistent with and similar to previously observed ERP effect patterns. As in Augurzky et al. (2017, 2019, 2020a, 2020b), the more complex task (in our work, verifying proportional quantifiers; in their work, more complex pictorial stimuli) gave rise to a late positivity at the disambiguating position that only occurred in the verification task and that is thus plausibly related to an increase in decision complexity. The positivity at the noun also has antecedents in the literature, whether it be for semantic violations or for the increase in complexity due to negative polarity (Augurzky et al., 2020a).
Our results are best explained by a procedure in which participants are building a model verifying the sentence on-line (Baggio, 2018; Clark, 1976; Clark & Chase, 1972, 1974; Johnson-Laird, 1983; Just, 1974; Just & Carpenter, 1971; van Lambalgen & Hamm, 2005; Zwaan & Radvansky, 1998). Note that alternative explanations, for example in terms of visual context effects (Knoeferle et al., 2011; Vissers et al., 2008), also presuppose the construction of a model. This is evidenced by the N400-like negativity in false sentences relative to true ones, which presupposes that a verification procedure (building a model of the sentence) has taken place. Interestingly, this negativity appears to be modulated by the complexity of the verification algorithm, in that the more complex the verification procedure, the smaller the negativity. As the N400 is known to be modulated by probability in a context, this could imply that participants are less able to predict, or less confident of, the final word for proportional quantifiers, an option further substantiated by the positivity following the N400 in false trials for proportional quantifiers. Crucially, this positivity can be argued to be a decision effect reflecting increased cognitive demands (Augurzky et al., 2017; Sassenhagen et al., 2014), particularly as this effect disappears when the decision complexity is kept constant in the comprehension question experiment. The decreased certainty for proportional quantifiers may stem from the fact that more cognitive resources are required to perform the verification algorithm for proportional quantifiers, and consequently fewer resources are available for prediction.
Fig. 8. Grand-average ERP waveforms from 9 selected channels, time locked to the onset of the sentence-internal noun (0 ms) in experiment 2. Trials from nouns following Aristotelian quantifiers are shown in black, blue is numerical quantifiers, and red is proportional quantifiers.
If a model of sentence meaning has been built at the final word, then the positivity at the noun can be argued to be a signature of verification. The time-course and distribution of the effect are similar to the LPC component, often called the parietal old/new effect, from the recognition memory literature (Hubbard et al., 2019; Ratcliff et al., 2016; Rugg et al., 1998; Yang et al., 2019). The LPC is associated with recollection memory (Rugg & Curran, 2007), i.e., with recollecting contextual details of a stimulus, and is only observed when it is task-relevant (Yang et al., 2019). Since the algorithms for proportional and nonproportional quantifiers differ precisely in the use of a memory component, an explanation in which participants recruit additional memory to perform proportional quantifier verification is well grounded in formal theory. The fact that this effect disappears along with the N400 for proportional quantifiers in the comprehension experiment further supports this interpretation. Given that a syntactosemantic composition effect would presumably manifest itself regardless of task, this explanation of the positivity at the noun is weakened by experiment 2. However, while links between P600 effects and episodic memory have been proposed (O'Rourke & Van Petten, 2011; Van Petten & Luka, 2012), this hypothesis has not been tested in actual sentence processing paradigms, but only with single words. This interpretation is therefore problematic, and there is a possibility that the positivity here indexes generic processing costs. De Santo et al.'s (2019) preliminary results, observing a similar effect when participants are listening to a sentence while viewing the picture, could be taken to support such a criticism. At the same time, the automata theory proves that, if participants go through the objects sequentially, memory resources are necessarily recruited for proportional quantifiers, but not for nonproportional quantifiers, and as such no strong conclusions can be drawn on the basis of an objection along these lines.
Fig. 9. ERP effects of pairwise comparisons between quantifier classes, time locked to the onset of the sentence-internal noun (0 ms) in experiment 2. Raw effect waveforms (left column) are displayed along with contour maps of sample-level statistics (middle column) and raster plots of cluster-level statistics (right column). Clusters with an associated p-value below the specified threshold (α = 0.05) are shown in yellow shades; all other clusters (gray shades) were statistically not significant.
Regardless of the final interpretation of the observed effects, the present study demonstrates that the complexity of the verification algorithm impacts sentence processing online. Importantly, when verification is required by the task, proportional quantifiers modulate the evoked potential both when participants are constructing a true model of the sentence, as indicated by the positivity on the noun, and when this model is evaluated in relation to falsified predictions, as evidenced by sentence-final effects. On the other hand, when verification is not task-relevant, the construction of a true model that generates predictions for the final word does not occur for proportional quantifiers even though it does for both nonproportional classes.
There are some limitations of the current study. Most notably, and as mentioned above, both a sentence-internal positivity and the lack of N400 effects have been observed in relation to negative polarity quantifiers (Augurzky et al., 2020a; Nieuwland, 2016; Urbach et al., 2015; Urbach & Kutas, 2010). As the current experiment did not control for polarity, it is not possible to distinguish which effects are due to negative polarity and which are due to quantifier class. To address these limitations, one can first point to the evidence suggesting that quantifier class also gives rise to this positive effect (De Santo et al., 2019). Secondly, if the reduced N400 effect were merely due to negative polarity, a similar effect should be seen for Aristotelian quantifiers, which included positive 'all' and negative 'none of', but this was not observed. In fact, the N400-like effect for Aristotelian quantifiers is the largest of all three classes. A second limitation is that while the theory predicts the algorithmic difference to stem from a memory component, it is not possible to ascertain whether the difference we observed is indeed related to memory. The argument made above is hypothetical: further research is needed to establish the exact cognitive and physiological nature of the observed sentence-internal verification positivity.
Conclusion
We have shown that the algorithmic verification complexity of different quantifier classes is associated with different patterns of neural responses. Our findings suggest that algorithmic aspects of language processing are subject to the same formal constraints applicable to abstract machines. Results of previous quantifier verification experiments, to the extent that they do not take formal distinctions between quantifier classes into account, may not generalize and may not be jointly interpretable: different classes of quantifiers are provably verified using different algorithms, and thus give rise to qualitatively distinct evoked potentials. An exciting open question at the intersection of computer science and psycholinguistics is whether formal proofs about the complexity of specific computational problems, such as verification, can inform us about which class of algorithms is plausibly implemented by the brain. Our research may serve as a stepping stone in that direction and as a proof of concept for a growing literature advocating algorithmic and complexity theoretic analyses in the construction of psychological and psycholinguistic theories (Isaac et al., 2014; van Rooij & Baggio, 2020; van Rooij et al., 2019). | 2022-01-21T14:17:09.890Z | 2022-01-20T00:00:00.000 | {
"year": 2022,
"sha1": "1f40f3c10e38d1884dfcb3cdc4aa4781a35d1b67",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.cognition.2022.105013",
"oa_status": "HYBRID",
"pdf_src": "Elsevier",
"pdf_hash": "87054c2c17cb177bfd19093e45e2e5fcd1c2b858",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
145034228 | pes2o/s2orc | v3-fos-license | Measurement of $VH$, $H\to b\bar{b}$ production as a function of the vector-boson transverse momentum in 13 TeV $pp$ collisions with the ATLAS detector
Cross-sections of associated production of a Higgs boson decaying into bottom-quark pairs and an electroweak gauge boson, $W$ or $Z$, decaying into leptons are measured as a function of the gauge boson transverse momentum. The measurements are performed in kinematic fiducial volumes defined in the `simplified template cross-section' framework. The results are obtained using 79.8 fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at a centre-of-mass energy of 13 TeV. All measurements are found to be in agreement with the Standard Model predictions, and limits are set on the parameters of an effective Lagrangian sensitive to modifications of the Higgs boson couplings to the electroweak gauge bosons.
Introduction
The H → bb decay is the dominant decay mode of the Standard Model Higgs boson, with its large branching ratio of 58%. This paper presents a measurement of 'reduced' stage-1 V H STXS (defined in Section 3) using H → bb decays with 79.8 fb −1 of 13 TeV pp collisions collected by ATLAS between 2015 and 2017. The results are used to investigate the strength and tensor structure of the interactions of the Higgs boson with vector bosons using an effective Lagrangian approach [22].
Data and simulation samples
The data were collected with the ATLAS detector [23,24] between 2015 and 2017, triggered by isolated charged leptons or large transverse momentum imbalance, E miss T . Only events with good data quality were kept.
The Monte Carlo simulation samples used for the measurements presented here are identical to those used for the measurement of the inclusive V H, H → bb signal strength [15]. Several samples of simulated events were produced for the signal (qq → W H, qq → Z H and gg → Z H) and main background (tt, single-top, V+jets and diboson) processes. They were used to optimise the analysis criteria and to determine the expected signal and background distributions of the discriminating variables used in the final fit to the data. The multijet background is largely suppressed by the selection criteria and is estimated using data-driven techniques.
The signal templates in each STXS region were obtained from simulated qq → W H and qq → Z H events with zero or one additional jet, calculated at next-to-leading order (NLO), generated with the Powheg-Box v2 + GoSam + MiNLO generators [25][26][27][28]. The contribution from loop-induced gg → Z H production was simulated at leading order (LO) using the Powheg-Box v2 generator [25]. Additional scale factors were applied to the qq → V H processes as a function of the generated vector-boson transverse momentum (p V T ) to account for electroweak (EW) corrections at NLO. These factors were determined from the ratio between the V H differential cross-sections computed with and without these corrections by the HAWK program [29,30]. The mass of the Higgs boson was fixed at 125 GeV.
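In practice, such corrections reduce to a per-event weight looked up from a table binned in the generated p V T. A schematic sketch is shown below, with placeholder bin edges and correction values that are not those used in the analysis.

```python
import numpy as np

# Placeholder NLO EW correction factors for qq -> VH, binned in the generated
# vector-boson transverse momentum (illustrative values only).
PT_EDGES_GEV = np.array([0.0, 75.0, 150.0, 250.0, 400.0, 13000.0])
EW_FACTORS = np.array([0.98, 0.97, 0.95, 0.93, 0.90])

def ew_weight(pt_v_gen):
    """Per-event weight for qq -> VH simulation, taken from the bin that
    contains the generated p_T of the vector boson (in GeV)."""
    idx = np.clip(np.digitize(pt_v_gen, PT_EDGES_GEV) - 1, 0, len(EW_FACTORS) - 1)
    return EW_FACTORS[idx]

# Example: reweight three simulated events with different generated p_T^V.
print(ew_weight(np.array([60.0, 180.0, 320.0])))
```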
Event selection and categorisation
The object reconstruction, event selection, and classification into categories used for the measurements are identical to those described in Ref. [15]. The selection and the event categories are briefly summarised below.
Events are retained if they are consistent with one of the typical signatures of V H, H → bb production and decay, with Z → νν, W → ℓν or Z → ℓℓ (ℓ = e, µ). Vector-boson decays into τ-leptons are not targeted explicitly. However, they satisfy the selection criteria with reduced efficiency in the case of leptonic τ-lepton decays.
In particular, events are kept if they contain at most two isolated electrons or muons, and two good-quality high-p T jets (p T > 45 GeV and > 20 GeV for the leading and subleading jet, respectively) with |η| < 2.5 satisfying b-jet identification ('b-tagging') requirements (which have an average efficiency of 70% for jets containing b-hadrons that are produced in inclusive tt events [43]). The two b-jet candidates are used to reconstruct the Higgs boson candidate; their invariant mass is denoted by m bb . Additional jets are required to have p T > 20 GeV for |η| < 2.5 or p T > 30 GeV for 2.5 < |η| < 4.5, and not be identified as b-jets.
Events with either zero, one or two isolated electrons or muons are classified as '0-lepton', '1-lepton' or '2-lepton' events, respectively. The 0-lepton and 1-lepton events are required to have transverse momentum imbalance, as expected from the neutrinos from Z → νν or W → ℓν decays; in the 2-lepton events, the leptons must have the same flavour and an invariant mass close to the Z boson mass.
Additional requirements are applied to suppress background from QCD production of multijet events in the 0-lepton and 1-lepton channels. To suppress the large tt background, events with four or more jets are discarded in the 0-lepton and 1-lepton channels. Finally, a requirement on the reconstructed transverse momentum p V,r T of the vector boson V is applied. It is computed, depending on the number, N lep , of selected electrons and muons, as either the missing transverse momentum E miss T (N lep = 0), the magnitude of the vector sum of the missing transverse momentum and the lepton p T (N lep = 1), or the dilepton p T (N lep = 2). The minimum value of p V,r T is 150 GeV in the 0- and 1-lepton channels, and 75 GeV in the 2-lepton channel.
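Schematically, the channel-dependent definition of p V,r T can be written as a small function. This is an illustrative sketch with assumed input conventions, not analysis code.

```python
import numpy as np

def reconstructed_ptv(n_lep, met_x, met_y, lep_px=(), lep_py=()):
    """Reconstructed transverse momentum of the vector-boson candidate (GeV):
    0-lepton: magnitude of the missing transverse momentum;
    1-lepton: magnitude of the vector sum of the missing transverse momentum
              and the lepton transverse momentum;
    2-lepton: transverse momentum of the dilepton system."""
    if n_lep == 0:
        px, py = met_x, met_y
    elif n_lep == 1:
        px, py = met_x + lep_px[0], met_y + lep_py[0]
    elif n_lep == 2:
        px, py = lep_px[0] + lep_px[1], lep_py[0] + lep_py[1]
    else:
        raise ValueError("the analysis categories have at most two leptons")
    return np.hypot(px, py)

def passes_ptv_selection(n_lep, ptv):
    """Minimum p_T^V requirement quoted in the text."""
    return ptv > (75.0 if n_lep == 2 else 150.0)
```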
Events satisfying the previous criteria are classified into eight categories (also called signal regions in the following), shown in Table 1, with different signal-to-background ratios. These categories are defined by the number of jets, N jet (including the two b-jet candidates), N lep , and p V,r T . Additional categories (also called control regions in the following) containing events satisfying alternative selections are introduced to constrain some background processes such as W boson production in association with jets containing heavy-flavour hadrons, or top-quark pair production. The signal contribution in such categories is expected to be negligible.
Table 1: Summary of the reconstructed-event categories. Categories with relatively large fractions of the total expected signal yields are referred to as 'signal regions' (SR), while those with negligible expected signal yield, mainly designed to constrain some background processes, are called 'control regions' (CR). The quantity m top is the reconstructed mass of a semileptonically decaying top-quark candidate in the 1-lepton channel. The calculation of m top uses the four-momenta of one of the two b-jet candidates, the lepton, and the hypothetical neutrino produced in the event. The neutrino four-momentum is derived using the W boson mass constraint [15] and m top is then reconstructed from the combination of the b-jet candidate and neutrino longitudinal momentum that yields the smallest top-quark candidate mass.
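The m top reconstruction described in the Table 1 caption follows the standard W-boson mass-constraint technique. The sketch below is an illustrative implementation of that technique, where the W mass value and the handling of complex solutions (keeping only the real part) are assumptions not spelled out in the text.

```python
import numpy as np

M_W = 80.4  # GeV, assumed value used in the W boson mass constraint

def neutrino_pz_solutions(lep, met_x, met_y):
    """Longitudinal neutrino momentum from the W mass constraint; `lep` is a
    (px, py, pz, E) four-vector of the charged lepton, treated as massless."""
    px, py, pz, e = lep
    pt2 = px ** 2 + py ** 2
    a = 0.5 * M_W ** 2 + px * met_x + py * met_y
    disc = a ** 2 - pt2 * (met_x ** 2 + met_y ** 2)
    if disc < 0.0:               # complex solutions: keep only the real part
        return [a * pz / pt2]
    root = e * np.sqrt(disc)
    return [(a * pz + sign * root) / pt2 for sign in (+1.0, -1.0)]

def invariant_mass(four_vectors):
    px, py, pz, e = np.sum(four_vectors, axis=0)
    return float(np.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0)))

def reconstruct_mtop(lep, bjets, met_x, met_y):
    """Smallest top-quark candidate mass over the two b-jet candidates and the
    neutrino p_z solutions, as in the 1-lepton channel description."""
    candidates = []
    for pz_nu in neutrino_pz_solutions(lep, met_x, met_y):
        e_nu = np.sqrt(met_x ** 2 + met_y ** 2 + pz_nu ** 2)
        neutrino = (met_x, met_y, pz_nu, e_nu)
        for bjet in bjets:
            candidates.append(invariant_mass([lep, neutrino, bjet]))
    return min(candidates)
```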
Cross-section measurements
The reduced V H, V → leptons stage-1 STXS regions used in this paper are summarised in Table 2, which also indicates which reconstructed-event categories are most sensitive in each region. All leptonic decays of the weak gauge bosons (including Z → ττ and W → τν) are considered for the STXS definition.
To avoid theory uncertainties from extrapolations to a phase space not accessible to the measurement, the p Z T < 150 GeV stage-1 regions are split into two subregions, p Z T < 75 GeV and 75 < p Z T < 150 GeV. Currently, there are not enough data events to distinguish qq → Z H from gluon-induced Z H production, despite their different kinematic properties. As the gg → Z H cross-section is only 16% of that of qq → Z H, no attempt is made to measure the qq- and gg-initiated processes separately. The qq → Z H and gg → Z H regions are thus merged together, after having modified the gg → Z H fiducial region definition to match that of qq → Z H. Therefore, the gg → Z H, p Z T > 150 GeV stage-1 regions (with zero or at least one extra particle-level jet) are modified by adding a p Z T < 250 GeV requirement, and events with p Z T > 250 GeV and any number of particle-level jets are put in a separate gg → Z H, p Z T > 250 GeV region, leading to a total of 14 modified stage-1 regions. These regions are then merged together into reduced stage-1 regions, chosen to keep the total uncertainty in the measurements near or below 100%.
Two sets of reduced stage-1 regions are considered. In one, called the '5-POI (parameters of interest)' scheme, five cross-sections, three for Z H production (75 < p Z T < 150 GeV, 150 < p Z T < 250 GeV and p Z T > 250 GeV) and two for W H production (150 < p W T < 250 GeV and p W T > 250 GeV), are measured. In the other one, called the '3-POI' scheme, three cross-sections, two for Z H (75 < p Z T < 150 GeV and p Z T > 150 GeV) and one for W H (p W T > 150 GeV), are measured. The 5-POI scheme leads to measurements that have total uncertainties larger than those in the 3-POI scheme, but are more sensitive to enhancements at high p V T from potential anomalous interactions between the Higgs boson and the EW gauge bosons. The reconstructed-event categories do not distinguish between events with generated p V T below or above 250 GeV. Discrimination between events with generated p V T in the 150-250 GeV region and above 250 GeV is instead provided by the different shapes of the boosted-decision-tree discriminant (BDT V H ) used in the final fit to the data, as illustrated in Figure 1 in the case of the 1-lepton, 2-jet category. This arises from the fact that the reconstructed p V,r T is largely correlated with the BDT V H output, for which it constitutes one of the most discriminating input variables together with m bb and the angular separation of the two b-jets.
The product of the signal cross-section, the H → bb branching ratio and the total leptonic decay branching ratio of the W or Z boson is determined in each of the reduced stage-1 regions by a binned maximum-likelihood fit to the data. The cross-sections are not constrained to be positive in the fit. Signal and background templates of the discriminating variables, determined from the simulation or from data control regions, are used to extract the signal and background yields. A simultaneous fit is performed to all the signal and control regions. Systematic uncertainties are included in the likelihood function as nuisance parameters.
Table 2: The 3-POI and 5-POI 'reduced stage-1' sets of merged regions used for the measurements, the corresponding kinematic regions of the stage-1 V H simplified template cross-sections, and the reconstructed-event categories that are most sensitive in each merged region. The stage-1 regions are modified (i) by splitting the two Z H, p Z T < 150 GeV regions (from qq and gg) into four regions, based on whether p Z T < 75 GeV or 75 < p Z T < 150 GeV; (ii) by adding a p Z T < 250 GeV requirement to the gg → Z H, p Z T > 150 GeV regions (with zero or at least one extra particle-level jet), and (iii) by adding a separate gg → Z H, p Z T > 250 GeV region. The three regions W H, p W T < 150 GeV, qq → Z H, p Z T < 75 GeV and gg → Z H, p Z T < 75 GeV, in which the current analysis is not sensitive and whose corresponding cross-sections are fixed to the SM prediction in the fit, are not shown. (Columns: merged region in the 3-POI scheme; merged region in the 5-POI scheme; stage-1 (modified) STXS region; reconstructed-event categories with largest sensitivity.)
The likelihood function is very similar to that described in Ref. [15]. In particular, the same observables are used, namely BDT V H in the signal regions and either the invariant mass m bb of the two b-jets or the event yield in the control regions. The treatment of the background and of its uncertainties is also unchanged. The only differences relative to the likelihood function in Ref. [15] concern the treatment of the signal:
• Instead of a single signal shape (for BDT V H or m bb ) or yield per category, multiple shapes or yields are introduced, one for each reduced stage-1 STXS region under study.
• Instead of a single parameter of interest, the inclusive signal strength, the fit has multiple parameters of interest, i.e. the cross-sections of the reduced stage-1 regions, multiplied by the H → bb and V → leptons branching ratios.
• Overall theoretical cross-section and branching ratio uncertainties, which affect the signal strength measurements but not the STXS measurements, are not included in the likelihood function.
The expected signal shapes of the discriminating variable distributions and the acceptance times efficiency (referred to as 'acceptance' in the following) in each reduced stage-1 region are determined from simulated samples of SM V H, V → leptons, H → bb events. The acceptance of each reconstructed-event category for signal events from the different regions of the 5-POI reduced stage-1 scheme is shown in Figure 2(a). The fraction of signal events in each reconstructed-event category originating from the different regions in the same scheme is shown in Figure 2(b). As shown in Figure 2(a), the current analysis is not sensitive to W H events with p_T^W < 150 GeV or to Z H events with p_T^Z < 75 GeV, since their acceptance in each category is at the level of 0.1% or smaller. Therefore, in the fits the signal cross-section in these regions is constrained to the SM prediction, within the theoretical uncertainties. Since these regions contribute only marginally to the selected event sample, the impact on the final results is negligible. A cross-check in which the relative signal cross-section uncertainty for the p_T^W < 150 GeV and p_T^Z < 75 GeV regions is conservatively set to 70% of the prediction (i.e. about seven times the nominal uncertainty) leads to variations of the measured STXS below 1%. The sources of systematic uncertainty are identical to those described in Ref. [15], except for those associated with the Higgs boson signal simulation, which are re-evaluated [44]. In this re-evaluation the uncertainties are separated into two groups:
• uncertainties affecting the signal modelling, i.e. the acceptance and the shapes of kinematic distributions, in each of the three or five reduced stage-1 regions (hereafter referred to as theoretical modelling uncertainties), and
• uncertainties in the prediction of the production cross-section for each of these regions (hereafter referred to as theoretical cross-section uncertainties).
While theoretical modelling uncertainties enter the measurement of the STXS, theoretical cross-section uncertainties do not affect the results, but only the predictions with which they are compared. The consequent reduction of the impact of the theoretical uncertainties on the results with respect to the signal strength measurements is one of the main advantages of measuring STXS.
The two groups of systematic uncertainties are estimated for high-granularity STXS regions, and then merged into the reduced scheme under consideration. This approach makes it easy to compute the systematic uncertainties for merging schemes different from those presented here. The uncertainties are evaluated by dividing the phase space into five p_T^V regions (with the following lower edges: 0 GeV, 75 GeV, 150 GeV, 250 GeV and 400 GeV), and each p_T^V region into three bins depending on the number of particle-level jets (zero, one, or at least two), independently for the qq → V H and gg → Z H processes. When two STXS regions are merged, their relative theoretical cross-section uncertainties lead to a modelling uncertainty. These uncertainties are evaluated as the remnant of the theoretical cross-section uncertainties for the high-granularity regions after the subtraction of the theoretical cross-section uncertainty for the merged region.
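The following short sketch shows one way to realise the subtraction described above, under the assumption that a correlated relative variation in each sub-region is averaged with yield weights to give the merged-region cross-section uncertainty, with the per-region residuals retained as the modelling (migration) uncertainty; all numbers are invented for illustration, not the ATLAS values.

```python
# Illustrative arithmetic only: splitting a correlated scale variation of two
# merged high-granularity regions into a merged-region cross-section
# uncertainty and residual per-region modelling shifts.
sm_xsec = {"150-250 GeV": 30.0, ">250 GeV": 10.0}   # fb, hypothetical
rel_var = {"150-250 GeV": 0.05, ">250 GeV": 0.02}   # relative scale variation

total = sum(sm_xsec.values())
merged_variation = sum(sm_xsec[r] * rel_var[r] for r in sm_xsec) / total
migration = {r: rel_var[r] - merged_variation for r in sm_xsec}

print(f"merged cross-section uncertainty: {merged_variation:+.3%}")
for region, shift in migration.items():
    print(f"residual modelling shift in {region}: {shift:+.3%}")
```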
The high-granularity regions are used to calculate theoretical cross-section uncertainties for the missing higher-order terms in the QCD perturbative expansion and for the uncertainties induced by the choices of the parton distribution function (PDF) and α_S. Fourteen independent sources of uncertainty due to the missing higher-order terms lead to total uncertainties of 3%-4% for qq → V H and 40%-50% for gg → Z H with p_T^V > 75 GeV [44]. Thirty-one independent sources of PDF and α_S uncertainties, each of them usually smaller than 1%, lead to a total quadrature sum between 2% and 3% depending on the STXS region. The theoretical modelling uncertainties change the shapes of the reconstructed p_T^{V,r} and m_bb distributions in the same way as described in Ref. [15]. Four independent sources for the QCD expansion and two independent sources for the PDF and α_S choices are considered.
Systematic uncertainties in the signal acceptance and in the shapes of the p_T^{V,r} and m_bb distributions due to the parton shower (PS) and underlying event (UE) models are estimated from the variations of acceptance and shapes of simulated events after changing the Pythia 8 PS parameters or after replacing Pythia 8 with Herwig 7 for the PS and UE models [15]. The signal acceptance uncertainties due to the PS and UE models (five independent sources) are typically of the order of 1% (5%-15%) with a maximum of 10% (30%) for the qq → V H (gg → Z H) production mode. Two independent nuisance parameters account for the systematic uncertainties induced by the PS and UE models in the p_T^{V,r} and m_bb distributions. In addition, a systematic uncertainty due to the EW corrections is parameterised as a change in the shape of the p_T^V distributions for the qq → V H processes [15].
Results
The measured reduced stage-1 V H cross-sections times the H → bb and V → leptons branching ratios, σ × B, in the 5-POI and 3-POI schemes, together with the SM predictions, are summarised in Table 3. The results of the 5-POI scheme are also illustrated in Figure 3. The SM predictions are shown together with the theoretical cross-section uncertainty for the merged regions computed as described in the previous section. The measurements are in agreement with the SM predictions.
The cross-sections measured in the p_T^V > 150 GeV intervals are not equal to the sum of those measured for 150 < p_T^V < 250 GeV and p_T^V > 250 GeV. This is because the signal template for p_T^V > 150 GeV in the 3-POI fit is computed from the sum of the templates of the two regions assuming that the ratio of yields in those regions is that predicted by the SM, while in the 5-POI fit the normalisations of the two templates are floated independently.
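A toy illustration of why the two schemes need not add up is given below: in the 3-POI case a single normalisation scales the fixed SM mixture of the two sub-region templates, while in the 5-POI case each sub-template floats independently and the summed signal shape can differ from the SM mixture. The template values are hypothetical.

```python
# Hypothetical three-bin templates for the two p_T^V sub-regions.
import numpy as np

t_mid  = np.array([4.0, 2.0, 1.0])   # template, 150 < p_T^V < 250 GeV
t_high = np.array([0.5, 1.0, 2.5])   # template, p_T^V > 250 GeV

# 3-POI: one POI scales the SM-weighted sum of the two shapes.
mu = 1.2
template_3poi = mu * (t_mid + t_high)

# 5-POI: two POIs, so the summed signal shape can differ from the SM mixture.
mu_mid, mu_high = 0.8, 2.0
template_5poi = mu_mid * t_mid + mu_high * t_high

print(template_3poi, template_5poi)
```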
The cross-sections are measured with relative uncertainties varying between 50% and 125% in the 5-POI case, and between 29% and 56% for the 3-POI. The largest uncertainties are statistical, except for the W H cross-sections with p_T^W > 150 GeV in the 3-POI case and with 150 < p_T^W < 250 GeV in the 5-POI case. In the 5-POI case, an anti-correlation of the order of 40%-60% is observed between the cross-sections in the ranges p_T^V > 250 GeV and 150 < p_T^V < 250 GeV, which are measured with the same reconstructed-event categories.
The dominant systematic uncertainties are due to the limited number of simulated background events and to the theoretical modelling of the background processes. The uncertainties due to the theoretical modelling of the V H signal are small, with relative values ranging between 6% and 12%. The uncertainties in the predictions are 2-3 times larger for Z H than for W H in the same p_T^V interval due to the limited precision of the theoretical calculations of the gg → Z H process.
Table 3: Best-fit values and uncertainties for the V H, V → leptons reduced stage-1 simplified template cross-sections times the H → bb branching ratio, in the 5-POI (top five rows) and 3-POI (bottom three rows) schemes. The SM predictions for each region, computed using the inclusive cross-section calculations and the simulated event samples described in Section 2, are also shown. The contributions to the total uncertainty in the measurements from statistical (Stat. unc.) or systematic uncertainties (Syst. unc.) in the signal modelling (Th. sig.), background modelling (Th. bkg.), and in experimental performance (Exp.) are given separately. All leptonic decays of the V bosons (including those to τ-leptons, ℓ = e, µ, τ) are considered.
Constraints on anomalous Higgs boson interactions
Higher-dimension operators O_i, suppressed by powers of a large energy scale Λ and weighted by numerical coefficients c_i, are added to the SM Lagrangian to obtain an effective Lagrangian inspired by that in Ref. [45]. Only dimension D = 6 operators are considered in this study, since dimension D = 5 operators violate lepton or baryon number, while dimension D > 6 operators are further suppressed by powers of Λ.
The results presented in this paper focus on the coefficients of the operators in the 'Strongly Interacting Light Higgs' formulation [46]. This formalism is defined as the effective theory of a strongly interacting sector in which a light composite Higgs boson arises as a pseudo-Goldstone boson and is responsible for electroweak symmetry breaking. The operators O_HW, O_HB, O_W and O_B, which modify the Higgs boson couplings to the EW gauge bosons, are considered; the corresponding CP-odd operators Õ_HW, Õ_HB, Õ_W and Õ_B are not.
Modifications of the gg → Z H production cross-section are only introduced by either higher-dimension (D ≥ 8) operators or corrections that are formally at NNLO in QCD, and are not included in this study, in which the expected gg → Z H contribution is kept fixed to the SM prediction.
The operator O_d = y_d |H|^2 Q̄_L H d_R (plus Hermitian conjugate), with Yukawa coupling strength y_d, which modifies the coupling between the Higgs boson and down-type quarks, induces variations of the partial width Γ_H^bb and of the total Higgs boson width Γ_H, and therefore of the H → bb branching ratio. This operator affects the measured cross-sections in the same way in each region. The dependence of the cross-sections on the operator coefficients is parameterised with the Higgs Effective Lagrangian (HEL) implementation [47], using the known relations between such coefficients and the stage-1 STXS based on leading-order predictions [48]. Such relations include interference terms between the SM and non-SM amplitudes that are linear in the coefficients and of order 1/Λ^2, and SM-independent contributions that are quadratic in the coefficients and of order 1/Λ^4. In the HEL implementation, the coefficients c_i of interest are recast into dimensionless coefficients c̄_i, expressed in terms of g and g′, the SU(2) and U(1) SM gauge couplings, and v, the vacuum expectation value of the Higgs boson field. These dimensionless coefficients are equal to zero in the SM.
The sum c̄_W + c̄_B is strongly constrained by precision EW data [49] and is thus assumed here to be zero, and constraints are set on c̄_HW, c̄_HB, c̄_W − c̄_B and c̄_d. The relations between the HEL coefficients and the reduced STXS measured in this paper are obtained by averaging the relations for the regions that are merged, with weights proportional to their respective cross-sections.
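A minimal sketch of such a cross-section-weighted average is given below, assuming a generic linear-plus-quadratic parameterisation sigma_i(c) = sigma_i^SM (1 + A_i c + B_i c^2) for each high-granularity region; the coefficients A_i, B_i and the cross-sections are hypothetical placeholders, not the values of Ref. [48].

```python
# Hypothetical weighted average of EFT relations for regions that are merged.
def merged_ratio(regions, c):
    """regions: list of (sigma_SM, A, B) tuples for the high-granularity regions.
    Returns sigma_merged(c) / sigma_merged_SM, i.e. the cross-section-weighted
    average of the individual relations 1 + A*c + B*c**2."""
    total_sm = sum(s for s, _, _ in regions)
    return sum(s * (1.0 + a * c + b * c * c) for s, a, b in regions) / total_sm

regions = [(30.0, 2.0, 15.0), (10.0, 5.0, 60.0)]   # invented (sigma_SM, A, B)
for c in (0.0, 0.02, -0.02):
    print(f"c = {c:+.2f}: sigma/sigma_SM = {merged_ratio(regions, c):.3f}")
```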
Simultaneous maximum-likelihood fits to the five STXS measured in the 5-POI scheme are performed to determine c̄_HW, c̄_HB, c̄_W − c̄_B and c̄_d. Due to the large sensitivity to the Higgs boson anomalous couplings to vector bosons provided by the p_T^V > 250 GeV cross-sections, the 5-POI results place tighter constraints on these coefficients (e.g. approximately a factor two for c̄_HW) than do the 3-POI results. For this reason, constraints obtained with the 3-POI results are not shown here.
In each fit, all coefficients but one are assumed to vanish, and 68% and 95% confidence level (CL) one-dimensional intervals are inferred for the remaining coefficient. The negative-log-likelihood one-dimensional projections are shown in Figure 4, and the 68% and 95% CL intervals for c̄_HW, c̄_HB, c̄_W − c̄_B and c̄_d are summarised in Table 4. The parameters c̄_HW and c̄_W − c̄_B are constrained at 95% CL to be no more than a few percent, while the constraint on c̄_HB is about five times worse, and the constraint on c̄_d is of order unity. For comparison, Table 4 also shows the 68% and 95% CL intervals for the dimensionless coefficients when the SM-independent contributions, which are of the same order (1/Λ^4) as the neglected dimension-8 operators, are not considered. These constraints are typically 50% stronger than those obtained when the SM-independent contributions are included.
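The interval extraction can be illustrated with a toy one-dimensional scan, assuming a simple quadratic-plus-quartic negative-log-likelihood profile (not the ATLAS one) and reading the 68% and 95% CL intervals at ΔNLL = 0.5 and 1.92 while the other coefficients are held at zero.

```python
# Toy 1D profile scan and interval extraction; the NLL shape is invented.
import numpy as np

grid = np.linspace(-0.2, 0.2, 4001)
nll = 60.0 * grid**2 + 2.0e3 * grid**4      # hypothetical Delta(NLL) profile

def interval(level):
    inside = grid[nll <= level]             # points below the threshold
    return inside.min(), inside.max()

for label, level in (("68% CL", 0.5), ("95% CL", 1.92)):
    lo, hi = interval(level)
    print(f"{label}: [{lo:+.3f}, {hi:+.3f}]")
```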
Conclusion
Using 79.8 fb^−1 of √s = 13 TeV proton-proton collisions collected by the ATLAS detector at the LHC, the cross-sections for the associated production of a Higgs boson decaying into bottom-quark pairs and an electroweak gauge boson W or Z decaying into leptons are measured as functions of the vector-boson transverse momentum p_T^V. The cross-sections are measured for Higgs bosons in a fiducial volume with rapidity |y_H| < 2.5, in the 'simplified template cross-section' framework.
The measurements are performed for two different choices of the number of p_T^V intervals. The results have relative uncertainties varying between 50% and 125% in one case, and between 29% and 56% in the other. The measurements are in agreement with the Standard Model predictions, even in the high-p_T^V (> 250 GeV) regions that are most sensitive to enhancements from potential anomalous interactions between the Higgs boson and the electroweak gauge bosons.
One-dimensional limits on four linear combinations of the coefficients of effective Lagrangian operators affecting the Higgs boson couplings to the electroweak gauge bosons and to down-type quarks have also been set. For two of these parameters the constraint has a precision of a few percent. | 2019-03-11T21:46:53.000Z | 2019-03-11T00:00:00.000 | {
"year": 2019,
"sha1": "ecbbd4cc0aaa8ce36d41a854012acfbe858c3ab9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP05(2019)141.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f4e3397f592ec80eb64ace501647a48223c6b3df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
245560172 | pes2o/s2orc | v3-fos-license | Online Teaching at the Universidad Veracruzana: Emerging Strategies and Challenges
In this paper, we briefly discuss past pandemics throughout history before addressing Covid-19. The objective is to describe the effects it has had on higher education, especially on language, translation, and research teaching, as well as the challenges faced in shifting from face-to-face lessons to online learning with the help of information technologies. The advantages and disadvantages of this educational approach in a postgraduate context are reviewed, together with the solutions that had to be implemented to offer quality education and avoid negative consequences for students' learning. This project explains the strategies applied by the authorities, professors, and students at the Universidad Veracruzana. It is a qualitative interpretative study in which the data were collected through semi-structured interviews. The most interesting findings were the students' improvements in writing and speaking, given the thorough review from their professors and peers. Moreover, they learned to construct their own knowledge and to become more autonomous in searching for solutions to the problems encountered during confinement. At the same time, the professors learned to operate multiple e-learning platforms so as not to fall behind. In conclusion, all those involved learned to be more resilient and to look at life positively, despite the obstacles.
Introduction
Education worldwide has undergone radical changes due to the appearance of Covid-19. Its speed of infection and harmfulness have caused schools to operate in a distance-learning mode. In some countries, the infrastructure for virtual education was already in place, and both students and teachers had the necessary resources to continue their academic activities without significant inconvenience. However, according to Gómez & Martínez (2021), the digital divide in Mexico has increased during the pandemic in rural and indigenous areas due to the low investment made by telecommunication companies. Furthermore, there is a limited availability of electronic devices due to the population's economic circumstances and education levels (Gómez & Martínez, 2021).
In some graduate programs at the Universidad Veracruzana, the obstacles that would prevent effective education may have been overcome thanks to the National Council for Science and Technology scholarships, which help students buy electronic devices and pay for Internet service. This research aims to describe the effects the pandemic has had on higher education, especially on language, translation, and research teaching, as well as the challenges faced in shifting from face-to-face lessons to online learning with the help of information technologies.
The origins of pandemics
According to Martínez (2020), fear is the first human reaction to pandemics, followed by the desire to flee. However, he reports that there are always people interested in research, so the need for observation prevails. The second reaction when one is in the center of disaster is to look for culprits. This author also mentions that humans have difficulty avoiding guilt; thus, epidemics are perceived as a punishment. He states: "About twenty thousand years ago, on a stormy sunset, the sorcerer Cro-Magnon was returning from a retreat of several days in the mountains, where he had been collecting magic herbs when he was informed that one of the men of the town had arrived sick from a long hunting day. Convinced of his healing power, the sorcerer covered himself in his deer garb and went to see him. He pushed aside the leather that covered the entrance to the cave and illuminated the sick man with his torch. He immediately gave a start, recoiled in terror, ordered to break camp, and flee to an uncertain end in the middle of the night. In the pustular face of the sick man, he had recognized some plague (perhaps smallpox), the horrifying image of which he had received through successive accounts of his father and his grandfather, and he knew that death was inevitable. " (Martínez, 2020: 1) Martínez (2020) recounts that the first significant pandemic occurred in the time of Emperor Justinian in the 6th century AD and lasted 60 years. Then, the Black Death appeared and ravaged all of Europe between 1347 and 1382. In Boccaccio's Decameron, the concepts of isolation and contagion were already present (Martínez, 2020). During the first pandemics, it was seen that the risk of becoming ill increased when approaching a patient, as well as when touching the clothes of corpses. The consequences were flight and quarantine. In the middle of the 19th century, concepts such as incubation appeared, and quarantine became widespread to avoid infectious diseases; outbreaks often provoked hysteria leading to the stigmatization of minority populations. In 1918, the Spanish flu killed between 40 and 100 million people worldwide in a single year. As a result, measures such as personal hygiene and isolation of those affected were implemented.
The 21st century has brought new epidemic threats, such as Asian pneumonia in 2003. Then, in 2014, the Ebola virus reemerged in West Africa and triggered the cancellation of international flights and heavy economic losses. In 2009, the H1N1 virus was detected in the United States and Mexico, and on April 25 of the same year the World Health Organization declared a public health emergency and officially deemed it a pandemic. In our own time, terror reemerged in 2019. The Covid-19 virus reminded us that random death, fear, rejection, and segregation come with every plague; therefore, confinement began all over the world.
Covid-19 overview in Mexico
In Mexico, the first case of the virus was detected on February 27, 2020. The carrier was a Mexican citizen returning from Italy. On March 24, phase 2 of the pandemic began, and economic activities were suspended. The authorities recommended avoiding mass gatherings and remaining housebound, and the slogan Stay at home appeared everywhere.
A few days later, non-essential activities were suspended, except for security, health, energy, and cleaning services. On radio and television, constant hand washing and disinfection of public areas were advised. Face masks became mandatory in public places to avoid contagion, and by March 30, a national health emergency was decreed. Later, Mexico's Department of Higher Education (2020) implemented several actions, such as reporting confirmed Covid-19 cases to the authorities and preventing the spread of fake news. Children who had the necessary means were able to continue studying through television and distance lessons. However, those from marginalized groups ran the risk of being left behind. Teachers were required to adapt to new pedagogical concepts for which they had received no previous training.
Since the beginning of the pandemic, the Universidad Veracruzana has followed the instructions of Mexico's Ministry of Health. They established an epidemiological traffic light on which the restrictions for the population were based to avoid the spread of Covid-19. This traffic light establishes four levels of alert: red, orange, yellow, and green. It is based on four aspects representing the impact of the disease: interpersonal transmission, territorial spread, response capacity, and health consequences.
Each month, the university mirrors the color of the traffic light for the community to follow. At the time we write these lines, August 17, 2021, it is reported that although the Federation places the state of Veracruz in yellow, the institution refers to the risk levels established by the state government, so the campuses in the cities of Xalapa and Veracruz are in orange and red, respectively. This decision means that academic activities will continue at a distance and library activities will be allowed up to 50% capacity.
Many higher education institutions worldwide had some resources to manage online teaching before the appearance of Covid-19. However, the training that would allow its correct use among the school community was missing. The Universidad Veracruzana prepared for distance learning almost 20 years ago with the creation of the Eminus system. This software integrates the functions of most commercial software, such as structuring of thematic units with various types of multimedia, the design of online evaluations, and a videoconferencing system. In the end, Eminus was not used by many professors. Instead, they used the tools they already knew well, for example, those developed by Google, Zoom, and other companies. Online teaching is also based on the well-known pedagogical theories.
Literature review
Epistemologically, education in a virtual modality is based on three popular pedagogical perspectives: behaviorism, cognitivism, and constructivism (Ally, 2004). The first is reflected in the activities programmed in a virtual language course, where the teacher codes a series of exercises to evaluate the students' knowledge. Through these, the student is forced to prepare the topics before solving the questionnaires. On the other hand, those theories that prioritize cognitive processes as the main vehicle for learning advocate for two primary elements: memory and individual differences (Ally, 2004). For example, if a virtual lesson is presented mainly in plain text, with no highlights of any kind, the student will probably have difficulty remembering its contents or meaningfulness. In other words: "The pandemic has accelerated a process of digitization that adds other peculiarities. It is about a more demanding read: the text is not physically in the environment; you have to find it in the cloud, requiring specific technical and cognitive skills. You have to manage apps, browse the web, move tactilely through the tiny pocket screen. Mentally you have to reconstruct the circumstantial context of each writing (the restaurant menu, the hand program, the map). " (Cassany, 2020: n.d.) Some studies mention the advantages and disadvantages of distance education. For example, Arkorful and Abaidoo (2015) report that some advantages are the time and space flexibility that virtual lessons offer, the easiness of obtaining information online, and, for asynchronous learning, the possibility that students review class materials at their own pace and learning style. Borstoff and Lowe (2007) point out that lower expenses for students and professors are also favorable. On the other hand, Arkorful and Abaidoo (2015) also claim that some disadvantages are the lack of personal interaction, the difficulty of answering students' questions through a computer, and problems that arise with assessing online activities, given the possibility of cheating. Other disadvantages mentioned are: "bandwidth issues, […] lack of human contact, […] and technical difficulties (Borstoff & Lowe, 2007:17).
The strategies used by teachers and students during online education should also be considered. Studies such as Bailey and Card (2009) emphasize the importance of fostering student-teacher relationships and the commitment of everyone involved to establishing constant communication through different technological tools. Moreover, they claim that the group must agree on distributing, creating, and assessing assignments and projects punctually and appropriately, and on establishing learning goals clearly at the beginning of the course and throughout its development (Bailey & Card, 2009). This situation strengthens the role of the professor as a facilitator and has an impact on the students' autonomy in improving their knowledge.
Other studies address the challenges that virtual teaching and learning involve. Jacobs (2013) claims that learning should focus on the students to develop their motivation, independence, and active participation in solving real problems. However, in Mexico and Latin America, the main challenges are technological (Ontiveros & Canay, 2013). These include, but are not limited to, issues related to the digital divide, such as infrastructure costs and access to a stable Internet connection, and poor knowledge of how to use various technological tools. This problem is more pronounced in marginalized areas. Jacobs (2013) points to the false belief that young students are naturally capable of handling these tasks. Nonetheless, using technology to communicate or publish posts on social networks differs from using it for educational purposes. This difference could mean there is a need for training and practice to accomplish academic tasks online.
Finally, students' writing could benefit from online feedback from their professors and peers. This process can occur through electronic means (email, instant messaging, word processors, or online apps). A study conducted with foreign language students (Tai, Ling & Yan, 2015) concluded that peer review and professor input and corrections enhance feedback usefulness and quality. Similarly, McVey (2008) mentions that teachers should personalize their comments according to their students' needs and balance positive and negative observations.
Research design
This research was carried out during the January-June 2021 semester with professors and students from the Institute for Education Research of the Universidad Veracruzana. The project used a qualitative approach since it seeks to understand "how people interpret their experiences, how they construct their worlds and what meaning they attribute to their experiences" (Merriam & Tisdell, 2015: 6).
Semi-structured interviews were used to collect the data. This method allows "finding out things that cannot be directly observed [such as] feelings, thoughts and intentions" (Patton, 2014: 426). The selection of this technique responds to the duty of profoundly understanding the participants' perceptions, and it fits the aim of describing the effects the pandemic has had on higher education and especially language, translation, and research teaching, as well as the challenges that have been faced with shifting from face-to-face lessons to online learning with help from information technologies.
The informants were 11 professors and 10 students, who provided sufficient and meaningful data. The professors teach their classes in foreign languages, specifically French or English. The students have different personal and academic backgrounds. The criteria to participate in the study were to belong to any of the institute's educational programs during January-June 2021. In addition, the students must have submitted homework, reports, or thesis extracts throughout the semester.
The participants' voices give meaning to reality at a given moment; in our case, they are the opinions of professors and students who interact under a virtual modality within an institute of the Universidad Veracruzana. An interpretive constructivist stance is taken to elucidate the meanings that participants assign to their experiences within educational programs and explain how they have overcome obstacles and exploited areas of opportunity that arose during online classes.
The ethical considerations met during this research respond to the global need to respect the integrity of the participants (Neumann, 2014). Before starting the project, the students and professors signed an informed consent form. This document "explains aspects of a study to participants and asks for their voluntary agreement" (Neumann, 2014: 151); therefore, to report the findings of this research, pseudonyms were used instead of the participants' real names. It states that their rights as participants will be respected and that, if they felt uncomfortable or disagreed with something, they could withdraw from the study at any time.
Findings
The findings are divided into four categories: 1) the advantages and disadvantages of online teaching and learning of language, linguistics, and translation in higher education; 2) the strategies used in distance learning; 3) the challenges met by the participants regarding online education; and 4) the influence of online lessons on the students' writing.
Advantages and disadvantages of online teaching and learning
For professors, one of the advantages of online education is the ease of time management and decreasing travelling inconveniences. They also believe that students and their families spend fewer resources and reduce vehicles on the street. These views mostly coincide with Arkorful and Abaidoo's (2015) findings about time and space flexibility of distance learning. They mention that it is easier to help students in distant places through virtual lessons and courses. Moreover, they claim it is possible to use different teaching materials. Professor Tania comments: "We can have students from any region of the world as part of our school enrollment. There are no transportation costs; vitally crucial in a country with poor transportation. Students are not exposed to danger on the streets. [Besides,] the study material is cheaper. " (Professor Tania) For students, the main advantages are that individual and collective knowledge can be shared more easily. Conrado mentions: "You can, […], attend a course in Italy (for free) in the morning, another in the US in the afternoon, listen to a lecture, and participate in the evening. Thanks to distance education, it is possible to access materials and resources that were previously ephemeral (an unrecorded book presentation, a talk that was only given in a seminar, an untranscribed interview) and private (such is the case of many academics who have shared the links to their courses). " (Professor Conrado) Learners also mention that they can achieve greater autonomy at work due to independence and responsibility. They also comment that online tools have favored their punctuality habits, saved universities' expenses in goods and services, and the ability for each learner to work at their own pace, as pointed out by Arkorful and Abaidoo (2015).
Some of the main disadvantages for professors, as Manuel points out, are "…the lack of equipment and infrastructure which causes frustration and anxiety since it is necessary to invest in […]". Socioeconomic differences are also heightened, as pointed out by Professor Guillermo: "The connectivity problems and the socioeconomic status of several students, especially undergraduate students, but also graduate students living in indigenous communities, have generated inequalities in the use of online learning. The university has not been able to ensure equitable connectivity for its students, which generates institutional discrimination. " (Professor Guillermo) For students, the main disadvantages of online education are 1) the tendency to procrastinate and 2) the toll on physical and mental health from remaining in front of the computer for long periods. Also, some learners have been anxious because they can no longer do in-person fieldwork. They added that many disadvantages relate to socioeconomic situations, since low-income students have more difficulty accessing information technologies.
Two disadvantages that almost all the participants point out are the lack of interaction between professors and students and the difficulty of carrying out collaborative work, which affects the creation of bonds between them. Professors also claim that exposure to various distractors at home affects the student's attention span. They state that online education is not synonymous with justice, equity, and democracy, which should be the principles of any educational system. Some professors forget that not all students learn at the same pace, yet the usual learning style is privileged.
Another drawback is the expectation that, being homebound, people have to be available at any time, even on weekends. The participants are concerned about the depersonalization of the educational process; they think the discussions held in class, or over coffee during breaks, are necessary. Luis comments:
Strategies in distance learning
Professors comment that they organized online sessions with the students through different platforms every week. The strategies used for virtual teaching include written reports or essays, having students participate in class, and adaptations of the didactic materials. Some professors mention distributing texts throughout their course and requesting written reports to verify understanding of the topics. Such is the case of Professor Alba, who states that she provided "the students with a program and all the readings materials that [they] reviewed […] in digital [format]. They did the reading, and I asked them about it [and] they gave me a reading report".
In addition, certain professors explain the importance of peer or workgroup review to check student progress. For example, Professor Juan mentions: "For me, it was important that in the session, they deliver a summary, synthesis, or short essay after each class, according to the topic that had been presented. In their writing it was clear who had been attentive to the class's progress and who had only listened to parts of the session. " (Professor Juan) The activities most frequently used to maintain attention and participation during the synchronous sessions were presentations, debates, and group feedback organized by the learners. In addition, time management during sessions is considered crucial to avoid boredom and monotony. Professor Manuel comments that:
"Two moments are stablished to avoid fatigue and monotony, thinking about classes of 4 or 5 hours. First, content presentations or readings [are shown], [then]a break of 15 or 20 minutes. [After that], questions and doubts, comments and reflections on concepts or topics derived from the authors and readings discussed. " (Professor Manuel)
In the same vein, Professor Juan expresses that "… it was important that [the students] spoke, that they took initiatives of participation, made presentations, asked questions and at least were present with their voices". This fragment matches the findings of Bailey and Card (2009), where teachers mentioned that group communication and timely feedback are essential for effective learning.
Most learners emphasized the organization and personal discipline they implemented during confinement to meet their academic commitments. Ariana mentions that "it was necessary to plan schedules and maintain the routine to meet the objectives of the planned school year". Homero comments that, to stay organized, he had to "take notes in Word or on paper; […] Identify the key points of the class to return to later; […] Listen to [his] classmates and professors to learn from their opinions, experiences and appropriations of the subject to be analyzed".
Challenges in online education
As pointed out by Ontiveros and Canay (2013), three main themes prevail for teachers: 1) the shortcomings in terms of technological infrastructure and its proper use; 2) the rules of organization in a virtual environment; and 3) the motivation of the students to participate during the sessions. The lack of devices in good condition in the household and of a stable Internet connection that allows students to attend the scheduled sessions punctually and consistently are the two main challenges that stand out in the first theme. For example, Professor Alba mentions that it is difficult for students to "… connect on time. Sometimes I could not do it with bad weather, or the signal strength is feeble". Professor Guillermo summarizes that: "The main challenge is the inequality that generates or deepens among our students: students from rural or urban-popular contexts, and indigenous students are much more affected by digital gaps. [Moreover] these are aggravated by the socioeconomic situation that their families and homes are going through. " (Professor Guillermo) Professor Sabino mentions that it is necessary to "improve Internet access, but at the same time to create institutional conditions so that students who lack adequate equipment can access them (diagnosing who requires special support). " On the other hand, it is also mentioned that, despite having adequate resources, training is required for their correct use. Professor Carlos expresses that it is necessary "to train teachers, students and administrative staff in the efficient use of these technologies. " The ethical and interaction considerations arising from distance education are also mentioned, given the participants' limited familiarity with these virtual environments. Professor Carlos summarizes that it is vital: "To become aware of the social and cultural impact that living with others in a virtual ecosystem entails, which immediately translates into implementing clear protocols for coexistence, as well as the implementation of new collaboration strategies, considering that teamwork is carried out remotely, always respecting the private life of the members of such an ecosystem. " (Professor Carlos) Another critical factor is the motivation of the students to participate in the virtual sessions, which is similar to Jacobs' (2013) argument that active participation online is crucial. In particular, Professor Juan points out that "it is difficult [because] sometimes students turn off their camera and video in practically all cases, and there is the feeling that they are only speaking in front of a computer. " Also, Alba states that "the main challenge is to convince students to interact in class. " Regarding students' perceptions, the three challenges above appear similarly. The lack of technology or its stability is an issue mentioned by some participants. For example, Ariana comments that "something which frequently interrupted the sessions was the instability of the network. " Gonzalo mentions that "sometimes there were technical problems related to the quality of the Internet and the performance of [his] laptop, which made it difficult [...] to take the lessons". These statements may be explained by the remote location of the interviewees, where the connection to the network continues to be poor, as claimed by Gómez and Martínez (2021).
The second predominant theme is the customization of a workspace in the students' homes, since staying at home exposed them to multiple interferences that could hinder work and concentration. Homero comments that it was necessary to "adapt a space for [the] lessons in which there was no noise or distractions. " Similarly, Yuridia expresses that she had to control "noises at home [as well as] to cope with external sounds, " and Conrado states that he had to "build his own space for intellectual creation and development" so as not to obstruct his virtual learning.
Third, most students report that they had to learn to use the technological tools chosen by the professors. This situation is very similar to Jacobs' (2013) findings about young people's technological skills. Furthermore, learners claim that adapting to the virtual learning modality was necessary, which generated a greater demand for self-learning and responsibility. For example, Homero comments that it was necessary to "learn to use Zoom, [Google] Meet, and Microsoft Teams." Mario concurs that "mastering the software to take class sessions, […] from knowing how to start, up to using all the tools available in the meetings", was unavoidable.
Finally, some informants reported effects on their emotional state due to confinement. Luis indicates that his main challenge was coping with "Loneliness. Standing in front of a screen without having actual contact with other people has not been easy. The emotional side has been compromised with this type of work, and concentration and mental health. " (Luis) Yolanda identified "new socio-emotional reactions such as stress, despair, among others, " and Gonzalo felt "unable to learn because the interaction process through the computer was hampered. "
Online lessons and their influence on students' writing
The postgraduate professors, in general, are not very satisfied with the progress of their students in terms of writing. Professor Manuel comments: "In general, they express themselves and make use of writing clearly, but there are mistakes in specific uses of academic writing, which even when they are taught models, it is difficult for them to assimilate and apply without it being clear since it is nothing more than adopting a pattern. […] At the end of the course, there are specific improvements, but slow progress is observed. " (Professor Manuel) They mention that students are used to being corrected and do not bother to review carefully what they will turn in. Also, they do not know how to use word processors properly. Other professors think there were setbacks in writing because learners are more exposed to poorly written materials and consult electronic sources that do not use academic language. They think that this reflects, in a way, that professors continue to be an essential reference in the educational process. Moreover, they believe in systematizing, classifying, and distinguishing worthwhile from trivial information. They also comment that they carefully reviewed all the essays and thesis chapters submitted by the students and marked their corrections using the Track Changes tool in Microsoft Word. Professor Tania comments: "I used the writings at the end of each unit. I offered Zoom sessions to discuss the pre-writing process (few were online). Subsequently, the Zoom session became a Writing Center for students who required it. " (Professor Tania) Students also point out their progress in writing and comment that they could write better in less time, thanks to the feedback they received. This idea concurs with Tai, Ling & Yan's (2015) research, especially when learners received feedback from peers and professors. Furthermore, they learned to express the main ideas in their theses, and their reading comprehension of scientific texts improved. Others indicate that the confinement provided them enough time to write, rewrite and contact authors from other countries and working groups. They were also able to share their texts with more people and receive recommendations from other authors and seminars. Homero explains what helped him improve his writing: "Regularly, we wrote reading reports on the topics to be analyzed in the different sessions. These were reviewed by the professor, who pointed out the mistakes or issues that could be improved. However, not all professors send feedback on our work. In addition, the students of this course had to review the reading reports of our classmates, which helped us identify linguistic and extralinguistic errors and learn other ways of interpreting the document. In the end, we made a portfolio with all the tasks; this evidenced our progress in writing and academic argumentation. " (Homero) Some students commented that they became aware of the importance of writing because it took them two or three hours to review a paper so as not to deliver something inconsistent. Others mention that they improved their academic writing thanks to the comments made by the students about their schoolwork, and in this way, they learned to argue and strengthen the syntactic, semantic, and pragmatic aspects of the writing process. As McVey (2008) shows, professors' comments appear in different ways and means, balancing positive and negative aspects.
Learners agree that writing is key to efficient communication, starting with emails, as these have the most significant impact on getting a positive response. In general, they comment that they have noticed progress throughout each semester, as they write better at the end of the school year.
Conclusion
The pandemic has urged us to reflect on the need to ensure that all young Mexicans from urban, rural, and indigenous environments have the opportunity to attend school and develop their knowledge, skills, attitudes, and values, which allow them to contribute to society. Educators must be prepared because perhaps lessons will occur in a hybrid format, and there should be training on how to use different software, video channels, online dictionaries, and other technologies. The digitalization process has quickened, and if we are not competent, both teachers and students will continue to suffer from stress and anxiety.
We have changed the physical copies of books for computers, mobile phones, WhatsApp texts, PDF files downloaded from the Internet, but we continue doing almost the same tasks: teachers prepare the programs and distribute assignments at the beginning of the courses; the students comment and prepare reading reports, essays, and theses. Some professors refuse to review online; others understand that doing it carefully from the beginning and being strict about writing will pay off for everyone, as the time invested will be more than rewarded by seeing their students' progress.
At first, it was thought that it would be straightforward to exchange face-to-face classes for online ones, with the same schedules, programs, and objectives, but we soon realized how difficult and tiring it is to spend several hours in front of the screen. Some students turn off their video cameras and dedicate their time to other activities. Hence, the professor does not have the same control as in an in-person class. Among the main advantages of the new modality for professors were fewer transportation inconveniences, traffic reduction, and the chance to work with learners in remote locations. Likewise, the students value their independence and increased responsibility during their virtual education. Also, they seem to appreciate how online learning forced them to be punctual and that professors and tutors are available to support them.
Both students and professors agree that Mexico's main disadvantages and challenges for online education are the lack of technology and training, especially for rural learners. Furthermore, the limited human contact during online lessons is also a concern. Regarding writing progress, the professors are generally not satisfied, but the students are. A possible solution appears to be the level and quality of the feedback received from peers and tutors. Moreover, learners report being more thorough now that their academic independence has been strengthened. The challenge now is to build, as much as possible, a more resilient and just society. Nevertheless, if we are optimistic, our view of the past teaches us that humanity ends up prevailing even through the most devastating epidemics.
"year": 2021,
"sha1": "701f59b567e6b7b862ccd5e0c5b7b2b263c6dd1e",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0352-2334/2021/0352-23342104051D.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4e786ac19d559acfc39e37fa800731e9b3fb1897",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": []
} |
236033361 | pes2o/s2orc | v3-fos-license | Monaldi cavernostomy for lung aspergillosis: A case report
Pulmonary aspergillosis in patients with respiratory failure can severely affect pulmonary functional status and may aggravate it through pulmonary suppuration, with recruitment of new parenchyma, and through hemoptysis, which can sometimes be massive, with a lethal risk of bronchial flooding. The treatment consists of a combination of medical therapy, surgery and interventional radiology. In small lesions, less than 2-3 cm, medical therapy may be sufficient; however, in invasive forms (larger than 3 cm) surgical resection is necessary. Surgical resection is the ideal treatment; nevertheless, when lung function does not allow it, action must be taken to eliminate the conditions that favor the infection. In such cases, whenever the lung cavity is peripheral, a cavernostomy may be performed. Four cases of cavernous lung lesions colonized with Aspergillus, in which the need for a therapeutic intervention was imposed by repeated small to medium hemoptysis and by the progression of respiratory failure, were evaluated, one of which is presented in the current study. Cavernostomy closure can be achieved either surgically with a muscle flap or spontaneously by scarring, after closure of the bronchial fistulas by epithelization and granulation. There were no recurrences of hemoptysis or suppurative phenomena. There was one death, in a patient with severe respiratory failure caused by superinfection with nonspecific germs. However, in the case presented in this study, the patient recovered following cavernostomy, which appears to be an effective and safe method for cases in which lung resection is not feasible.
Introduction
Pulmonary aspergillosis is a lung disease secondary to the presence of a ubiquitous germ, Aspergillus fumigatus, and occurs mainly in immunodeficient individuals. Aspergillus fumigatus is the fungus most often implicated in aspergillosis lung injuries. It is a ubiquitous germ that can be aggressive through systemic reactions, as well as through local or disseminated development. Five anatomo-clinical forms have been described. The allergic bronchopulmonary form involves the development of the germ in conditions of asthma or bronchiectasis with the onset of allergic immunological reactions. Another non-invasive type is the localized form of pulmonary aspergilloma, in which a mycetoma develops within areas of localized pulmonary emphysema or intraparenchymal cavernous lesions, where it finds optimal conditions for development (humidity, darkness and lack of ventilation) (1)(2)(3). The extreme form of local invasion is chronic necrotizing aspergillosis with extensive pleuropulmonary injuries (4)(5)(6). The extensive systemic form in immunosuppressed individuals is represented by invasive pulmonary aspergillosis (7)(8)(9). A form of hypersensitivity pneumonia caused by inhalation of Aspergillus particles has also been described (10).
The localized form of pulmonary aspergillosis, as well as chronic necrotizing aspergillosis, has a potentially lethal outcome, both through progressively induced respiratory failure and through possible associated acute complications, including pulmonary suppuration and hemoptysis. Aspergilloma usually occurs in old post-tuberculosis cavernous lung lesions (1)(2)(3)(4)(5)(11,12). Through recurrent infections with periods of pulmonary suppuration and remission, additional parenchyma is recruited with each exacerbation, functionally excluding new parenchymal areas. In addition, the released endotoxins stimulate hypertrophy of the local bronchial vessels until they rupture and trigger hemoptysis, a complication with a high immediate lethal risk through pulmonary flooding; it also exposes the lesion to superinfection (13)(14)(15)(16)(17)(18).
In the case presented in this study, the patient successfully underwent cavernostomy, a potentially effective and safe method for cases with life-threatening hemoptysis in which lung resection is not feasible.
Case report
Approval for the case report was obtained from the Ethics Committee of 'Marius Nasta' National Institute of Pneumology, Bucharest, Romania (no. 152/2020). Written informed consent was signed by the patient on 13.05.2020.
A cachectic patient of low socioeconomic status, a former smoker with a history of left pulmonary tuberculosis, was admitted to the Department of Thoracic Surgery, 'Marius Nasta' National Institute of Pneumology, having presented repeated mild hemoptysis over the previous 3 weeks, bronchial suppurative phenomena and worsening respiratory failure manifested by shortness of breath and hypoxemia.
Chest X-ray revealed a left apical pulmonary cavity (Fig. 1). The chest computed tomography (CT) showed a culminal cavity surrounded by consolidation, probably due to aspergillosis infection and aspiration of blood (Fig. 2).
Bronchoscopy revealed an abnormal structure of the left upper lobe bronchus, suppurative characteristics and traces of hemoptysis in the culminal bronchus. The bronchoscope reached the aspergillary cavity through the left upper lobe bronchus and the culminal bronchus, revealing an opening of 9 mm in diameter.
The pulmonary function tests indicated an FVC of 2.66 l (74% of predicted) and an FEV1 of 800 ml after antibiotic treatment. The arterial blood gas tests indicated resting hypoxemia of 70 mmHg. Considering these results, lung resection was not possible due to the extremely high risks involved; therefore, another procedure was necessary.
Preoperative care included embolization of the bronchial arteries, in order to avoid hemoptysis in the perioperative period. Antibiotic and antifungal treatment regarding the suppurative phenomena was established based on the antibiogram.
The therapeutic procedure of choice was a Monaldi cavernostomy, performed in optimal conditions. The procedure was carried out in the left axillary region and consisted of resecting lateral segments of four ribs (ribs 3-6), guided by the peripheral contact of the cavern with the thoracic wall. Removal of the adjacent thickened pleura was performed, creating an opening, followed by removal of the mycetoma, cavity lavage and the identification of multiple bronchial fistulas inside the cavity (Fig. 3).
The next step consisted of suturing the musculocutaneous flap to the pleuropulmonary edges. No per primam suture of the fistulas was intended. A slightly compressive bandage was applied to enable air drainage from the bronchial fistulas. Subsequently, daily dressing of the wound was practiced. Antibiotic treatment, including antifungal treatment, was maintained for 3 weeks. Furthermore, after 3 months, muscular myoplasty was necessary in order to fill in the cavity and favor the closure of the remaining fistulas, together with a reopening of the skin stoma, which had a tendency to close.
After 1 year, the patient had presented no episode of hemoptysis, no suppurative process and no progressive deterioration of respiratory function. Difficulty in speaking was present in the first two months after surgery, owing to major air losses through the bronchial fistulas, and was ameliorated by adducting the upper limb over the skin stoma. The difficulties of local care, consisting of daily dressing, initially with betadine cleansing and a sterile dressing, were overcome in approximately 3 weeks, the patient becoming independent after this period.
Discussion
The diagnosis of pulmonary aspergillosis is usually based on imaging, the typical radiological aspect of an intracavitary mycetoma being the Monod sign, especially in the context of a tuberculosis-related history and clinical manifestations of recurrent hemoptysis. The bronchoscopic examination often reveals chronic bronchial alterations and, through bronchial aspiration, Aspergillus fumigatus is identified (19,20).
Antifungal drug treatment is complementary to the surgical treatment, in order to limit the local suppurative effects (21)(22)(23). In addition, hemostatic drugs such as lysine derivatives are used in order to decrease postoperative hemorrhagic incidents, given the increased risk of bleeding from the specific pleuropulmonary adhesions (13)(24)(25)(26).
The interventional radiology treatment, embolization of the hypertrophied bronchial arteries, has a temporary effect (13,27) and can be used preoperatively to avoid perioperative suppurative or hemoptysis episodes. Embolization alone carries a risk of hemoptysis recurrence of over 50%, because the local conditions that allowed the development of the aspergilloma remain present (13,24).
The treatment of pulmonary aspergilloma, whether a single non-invasive form, chronic pulmonary aspergillosis or the invasive form, combines surgical, radiological and medical measures (13,21-28). The therapeutic strategy is individualized for each patient (2,24,28). The ideal treatment is surgical, removing the pulmonary caverns or cavities that offer favorable conditions for fungal development and thus eliminating the factors that maintain suppuration or recurrent massive hemoptysis (2,28). In addition, tailored procedures are recommended whenever possible, avoiding remnants of affected lung tissue.
Whenever there is a patient with pre-existing compromised lung function that makes lung resection not possible, alternative surgical solutions should be considered. The purpose is to abolish the pathological conditions for the development of this fungus, such as lung caverns or superinfected air bubbles.
Cavernostomy, the conservative surgical treatment of invasive pulmonary aspergillosis, should be considered in cases in which lung function does not allow pulmonary resection, but also in patients with comorbidities, such as pre-existing high-risk cardiovascular conditions, that prevent them from benefiting from major lung resections (17,29-35). Da Silva et al (36) identified as an indication for cavernostomy those forms of chronic invasive pulmonary aspergillosis that would otherwise require pleuropneumonectomy, with intrapleural fusion and complete pulmonary destruction, or bilateral forms, usually occurring in immunocompromised patients or in those with an altered clinical condition. Therefore, cavernostomy can be indicated even in patients with permissive respiratory function. Its therapeutic objective is to abolish the conditions that allow Aspergillus to develop (34-36).
Cavernostomy is applied in selected cases with peripherally cavernous lesions colonized with Aspergillus, in close contact with the chest wall and its structures.
Height recommends closed drainage for 7-10 days, with intracavitary lavage with antifungals in the preoperative period (26).
It involves making an 'H'-shaped skin incision centered on the cavity or on the pleurotomy orifice of a previously placed pleural drainage tube. The approach site is usually axillary, toward the posterior axillary line, for lesions in the Fowler segment, while an interscapulovertebral approach is used for posterior and even apical lesions. In forms with significant lung damage and remodeling, the exposure of the cavitary wall dictates the drainage (34,37-39).
Musculocutaneous flaps are harvested and are then attached to the pleuropulmonary edges of the cavern after the resection of 2-3 costal segments and after the excision of the exposed thickened pleura. Careful ligation of the affected intercostal pedicles and careful hemostasis are practiced, given the risk of hemorrhage secondary to adhesions and local vascular hypertrophy.
The cavity is cleaned, removing the mycetoma and identifying the bronchial fistulas.
The complete excision of the exposed thickened pleura is performed while avoiding the creation of 'pockets', in order to have a maximum opening and exposure of the pulmonary cavity.
Subsequently, daily dressing and cleaning of the cavernostoma stimulate granulation and epithelialization of the cavern. Sometimes it is necessary to reopen the stoma because of its tendency to close superficially; Nakada et al (31) use the Alexis wound retractor to avoid this issue. Bronchial fistulas may persist for a long time, but they eventually close slowly. The closure of the cavernostoma, a secondary therapeutic objective, can be achieved surgically by myoplasty or can occur spontaneously by epithelialization (31,33,34,40,41). Under optimal functional conditions, most authors indicate pulmonary resection as the elective treatment of invasive pulmonary aspergillosis. Cavernostomy, originally devised for tuberculous lesions, was subsequently employed in patients with invasive pulmonary aspergillosis and borderline lung function, in whom the anesthetic-surgical risk of lung resection was too high.
Several authors found that, in symptomatic forms managed with conservative drug treatment alone, the results were poor, with a significant mortality rate at 12 months; surgical treatment was therefore required. It is unanimously accepted that in patients with an absolute contraindication to surgery, parenteral treatment is not sufficient, and intracavitary injection of amphotericin or saline solution is recommended, with drainage of the cavity under CT guidance if possible. Complete or temporary remissions can be obtained (26).
Takahashi et al (42) presented three cases, all with acceptable lung function (sufficient to allow lung resection, i.e., FEV1 over 55%), but all in a postoperative neoplastic context. These patients had developed aspergillosis in the remaining pleural cavities, which had probably been contaminated postoperatively through parenchymal or bronchial air fistulas.
Concerning surgical timing, Gebitekin et al (44) and Da Silva et al (36) performed the intervention during massive hemoptysis, but it is preferable to operate under chronic conditions, if possible after bronchial artery embolization or after medical treatment with tranexamic acid. Rergkliang et al performed cavernostomy for massive hemoptysis, but without a preoperative pulmonary function evaluation (44,45).
Cavernostomy is performed where the lesion is closest to the chest wall. The place of incision is the axillary area in most cases, but there was also one case in which the anterior area on the right medioclavicular line was the elected incision area.
In our group of 4 surgical procedures, 3 cavernostomies were performed in the right axillary area and one through an interscapulovertebral approach on the left hemithorax, after resection of the posterior arches of ribs 3 and 4. The procedure itself is guided by the peripheral area of the cavern, usually associated with lesions of the posterior and apical segments. The approach is limited in the scapular area if there is an exact overlap.
It is preferable to locate the peripheral contact area of the cavern under computed tomography or radioscopic guidance.
The procedure is performed in a single stage under general anesthesia. Performing the procedure under local or regional anesthesia constitutes an exception. Gebitekin et al (44) simultaneously mobilized muscle flaps to fill the cavity. This option should be maintained if there are no major bronchial fistulas. Large flap muscles of the pectoralis major, serratus anterior or latissimus dorsi muscles can be used depending on the topography of the cavern. Shirahashi et al (46) reported a case of two-stage operation with omentopexy following cavernostomy, for lung abscess arising in the residual lung after bilobectomy.
The procedure may also involve mobilization of muscle flaps to fill the remaining cavity, but these should usually be avoided until the area has granulated and the fistulas have shrunk or even closed. Flap filling can be carried out later, after closure of the bronchial fistulas, or partially, to favor closure of the fistulas in the case of reintervention for reopening of the stoma. Regnard et al (47) and Sagawa et al (48) mentioned reinterventions to reopen a superficially closed stoma and restore communication.
Postoperatively, the authors mention the dressing of the cavernostoma with gauzes soaked with amphotericin B; however, daily dressing and cleaning the cavernostoma with sterile gauzes is sufficient, abolishing the favorable conditions for fungus development. Systemic antifungal treatment is recommended to precede the intervention by 2 weeks and to follow it for up to 3 months.
In a large study, Cesar et al (35) presented 111 cases of cavernostomy for pulmonary aspergillosis associated with reduced lung function. The evolution of these patients was similar to that of the group in which lung resection was performed. The authors found a higher rate of hemorrhagic complications, probably due to the characteristic hypervascularization, and of recurrences, secondary to the tendency of the stoma to close superficially or to incomplete drainage of the aspergillary cavities (35).
Overall, cavernostomy remains a good solution. It is effective and can be performed in patients with a complex, peripherally located fungal ball and permanently or temporarily impaired pulmonary function. In stable patients without active hemoptysis, it has proven to be an easy-to-perform, low-risk procedure.
In conclusion, lung resection techniques are the treatment of choice for invasive pulmonary aspergillosis if lung function allows it and there are no other contraindications. The high mortality rate of pulmonary aspergillosis with poor ventilatory function, despite medical treatment and arterial embolization, demands a solution with minimal impact. Thus, cavernostomy should remain an option in selected cases that do not allow large-scale resections or an additional functional decrease.

the patient. IB, NB and CS prepared the draft of the manuscript. CS was the advisor for the surgical procedures. CS and NB reviewed the final version of the manuscript. CP and CS assessed the authenticity of all data. The authors read and approved the final version of the manuscript.
Ethics approval and consent to participate
Written informed consent was signed by the patient on 13.05.2020. Approval of the Ethics Committee of 'Marius Nasta' National Institute of Pneumology, Bucharest, Romania was obtained (no 152/2020).
Patient consent for publication
Consent for publication of the patient's data and images was obtained. | 2021-07-18T08:30:49.758Z | 2021-07-06T00:00:00.000 | {
"year": 2021,
"sha1": "bbf17de7194077e1b2bbc995c1055d2e7eb14157",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2021.10389/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bbf17de7194077e1b2bbc995c1055d2e7eb14157",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202023056 | pes2o/s2orc | v3-fos-license | Specific recombinant proteins of porcine epidemic diarrhea virus are immunogenic, revealing their potential use as diagnostic markers
Highlights
• A panel of PEDV structural and nonstructural proteins were expressed.
• These PEDV antigens were tested for reactivity with sera from PEDV-infected pigs.
• 851 porcine sera were analyzed by ELISA with antigens showing immune-reactivity.
• The pattern of neutralizing antibody was more similar to that of IgA in weaning piglets.
• ORF3C and E, in addition to S1, can be used as diagnostic markers for serologic detection.
Introduction
Porcine epidemic diarrhea (PED) is characterized by severe diarrhea, vomiting, and dehydration followed by high mortality in suckling piglets. The causative agent, porcine epidemic diarrhea virus (PEDV), was initially identified in Europe in 1978, and its genome (~28 kb in size) consists of seven open reading frames (ORFs) (Kocherhans et al., 2001). The 5' two-thirds of the PEDV genome encodes ORF1 (consisting of overlapping ORF1a and ORF1b), and the 3' one-third harbors ORFs encoding four structural proteins, the spike (S), envelope (E), membrane (M) and nucleocapsid (N), and an accessory ORF3 between S and E (Kocherhans et al., 2001; Tian et al., 2014). RNA synthesis in PEDV is carried out by a replicase-transcriptase composed of 16 nonstructural proteins (Nsp1-16) encoded by ORF1a and ORF1b. Among them, Nsp3 comprises multiple structural domains, including a highly acidic domain at the amino terminus (Ac) and a highly conserved ADP-ribose-1-phosphatase (ADRP) macrodomain. The Ac domain of Nsp3 is essential for virion assembly and plays a critical role in interaction with the viral nucleocapsid during early infection, whereas the ADRP provides activities necessary for synthesis of genomic and subgenomic RNAs (Hurst-Hess et al., 2015). As pigs of all ages are susceptible to PEDV (Alvarez et al., 2015; Annamalai et al., 2015), there is an urgent need for the development of highly sensitive and specific diagnostic assays for use in the field (Diel et al., 2016).
Since the identification of PEDV, several diagnostic tests based on PCR detection of viral RNA have been described in the literature (Diel et al., 2016). Another common diagnostic method is serological testing for the presence of specific antibodies against viral proteins, which is also fast and convenient for epidemiologic investigations. Many tools have been developed for the detection of anti-PEDV antibodies based upon the major structural proteins (such as the S, M or N proteins) in serum, colostrum, milk, feces and oral fluid, including indirect immunofluorescence assays (IFA), virus neutralization assays (SN), enzyme-linked immunosorbent assays (ELISA), and fluorescent microsphere immunoassays (FMIA) (Diel et al., 2016; Gerber et al., 2014; Gerber and Opriessnig, 2015; Gimenez-Lirola et al., 2017; Okda et al., 2015). However, comparative studies of the above assays using different PEDV structural and nonstructural proteins as antigens have rarely been conducted. Meanwhile, the diagnostic potential of specific PEDV accessory and nonstructural proteins, if any, has not yet been investigated in detail.
In this study, a panel of recombinant PEDV ORFs encoding structural and nonstructural proteins was expressed in mammalian and/or bacterial cells and screened for reactivity with porcine sera from seven provinces of China by ELISA and/or western blot analysis, in order to determine which antigen is most suitable as a diagnostic marker for PEDV infection. Several rabbit polyclonal antibodies against these recombinant proteins were also generated and validated for use as diagnostic tools for PEDV infection in vitro.
Another 88 serum and fecal samples were obtained from weaning piglets experimentally challenged with PEDV-ZJU/G2/2013 in three PEDV challenge experiments. The procedure of the PEDV challenge has been described previously. Briefly, 3-day-old conventional piglets were inoculated with about 5 ml of PEDV at an infectious titer of 10^6 TCID50 in 1 × PBS (pH 7.4). Blood and fecal samples were collected prior to inoculation and at 1, 3, 5, 7, 10, 14, and 17 days postinoculation (dpi). Samples were collected at each time point from a total of 11 surviving piglets, but not from dead pigs. After centrifugation at 1500 ×g for 10 min, serum was harvested, aliquoted and stored at −80°C until use. Fecal swab samples were individually mixed with 1 ml of 1 × PBS (pH 7.4) immediately after collection, placed in a 2 ml cryogenic tube (BD Falcon™), and stored at −80°C until use. The animal experiments were approved by the Experimental Animal Ethics Committee of Zhejiang University (approval no. ZJU20170026).
Preparation of the purified PEDV virions
The PEDV-ZJU/G2/2013 strain was propagated in Vero cells in DMEM supplemented with 5 μg/ml trypsin according to the standard method. Briefly, a confluent cell monolayer was washed twice with minimum essential medium (MEM) before being infected with the virus [MOI (multiplicity of infection) = 1] for 2 h at 37°C, after which additional culture medium was added without removing the inoculum. Observed cytopathic effects (CPE) reached approximately 90% in 2-3 days, and the virus culture supernatant was collected after three freeze-thaw cycles, then clarified by high-speed centrifugation (4000 ×g for 30 min) and further purified by ultracentrifugation through a 20% (wt/vol) sucrose cushion (140,000 ×g for 3 h). The purified virus was processed by a negative staining technique, examined using electron microscopy, and then stored at −80°C until use. The concentration of viral proteins was measured with a BCA protein assay kit (Beyotime, Shanghai, China).
Expression and purification of recombinant PEDV proteins
Full-length PEDV cDNA was used as a template for amplification and cloning of various PEDV genes, including those encoding the C terminus of PEDV ORF3 (ORF3C) and the complete sequences of E, Nsp1, Nsp2, Ac, and ADRP. The constructs used for this study are listed in Table 1. For transient expression in mammalian cells, the pcDNA-3.1 vector was used, and the expression plasmid pcDNA-PEDV-S1-Fc has been described in the previous study (Gerber et al., 2014).
For expression of His-tag fusion proteins in bacteria, PEDV target genes were cloned in-frame with N-terminal 6×His tags in the pET-28a (Novagen; for Ac, ADRP, Nsp1 and Nsp2) or the pET-28a-derived pSmart-I vector with an N-terminal SUMO (small ubiquitin-related modifier) tag (Smart-lifesciences, Changzhou, China; for ORF3C and E). The oligonucleotide primer sequences and approaches used are available upon request. The sequences of all constructs were confirmed by DNA sequencing (Huada Gene Technology Co., Ltd).
The above recombinant plasmids were individually transformed into BL21 (DE3) competent cells and grown in 1 L of Luria-Bertani (LB) medium (Invitrogen) containing 100 μg/ml kanamycin at 37°C with shaking at 220 rpm. When an OD600 of 0.6 was reached, 1 M isopropyl β-D-1-thiogalactopyranoside (IPTG) stock was added to a final concentration of 0.5 mM in the 1 L of LB medium, and bacteria were grown for an additional 14 h at 20°C. Cells were chilled at 4°C and harvested by centrifugation at 5000 ×g for 5 min, resuspended in 30 ml lysis buffer (20 mM Tris-HCl, pH 8.0), and disrupted by ultrasonication. Crude extracts were centrifuged at 10,000 ×g for 10 min at 4°C, and soluble expression of the His-tagged fusion peptides was confirmed by SDS-PAGE analysis prior to purification with the Ni-NTA His•Bind® Resin system (Transgen Tech, DP101, Beijing, China) according to the manufacturer's instructions. Polypeptides that were expressed in inclusion bodies were first solubilized in a denaturing buffer (20 mM Tris-HCl with 8 M urea, pH 8.0) and purified by Ni-chelating chromatography (GE Healthcare). Elutions were pooled, dialyzed at 4°C against 20 mM PBS (pH 7.4) with 150 mM NaCl and 4 M urea, and analyzed by SDS-PAGE and western blot.
Generation of rabbit polyclonal antibodies against recombinant PEDV proteins
Five purified, recombinant PEDV peptides (ORF3C, Ac, ADRP, Nsp1, and Nsp2) were selected and separately used to immunize two New Zealand White rabbits, using a custom antibody production service at Hangzhou Belta-Biotechnology Co., Ltd. (Hangzhou, China). Rabbits were not immunized with recombinant E protein.
For serum western blot analysis, purified S1, ORF3C, E, and Ac peptides were subjected to SDS-PAGE and membrane transfer, and then incubated with individual porcine sera diluted 1:1000, or with an anti-His MAb or polyclonal rabbit serum as positive controls. HRP-conjugated rabbit anti-swine IgG and goat anti-mouse IgG (1:10,000 dilution; Abcam, United States) were used, as appropriate, as secondary antibodies.
Indirect ELISA
Antigen concentration and dilutions of sera and HRP-conjugated antibodies were optimized by checkerboard titrations. The optimal amount of PEDV WV, S1, ORF3C or E antigen used for coating was 7.8, 0.44, 1.56 or 0.78 ng/well/100 μL.
Microtiter plates were blocked with 300 μl/well blocking buffer (Thermo Fisher Scientific, USA) for 1.5 h at 37°C. After coating, 100 μl of serum samples (1:100 dilution) were transferred in triplicate and incubated at 37°C for 2 h. Afterwards, 100 μl diluted HRP-conjugated goat anti-swine IgG or IgA (1:10,000 dilution; Thermo Fisher Scientific, USA) was added to each well and incubated at 37°C for 1 h. Wells were washed between incubation steps three times with 300 μl PBS (10 mM, pH 7.4) with 0.05% Tween-20 (PBS-T washing buffer). Finally, 100 μl TMB Color liquid (Solarbio, Beijing, China) was added to each well and incubated for 10 min at room temperature, after which the reaction was stopped by addition of 50 μl/well of 2 M sulfuric acid, and the plates were read at 450 nm using a spectrophotometer.
Initial PEDV-negative sera were obtained from the United States (a gift from Dr. Tanja Opriessnig) and were subsequently used for screening negative porcine sera in China as reported previously (Gerber et al., 2014). The ELISA positive cutoff values were calculated as the mean OD of the negative controls (n = 4) plus three standard deviations. The positive and negative sera from experimentally infected piglets were also confirmed by western blot on purified PEDV WV and S1 protein antigens as previously described (Huang et al., 2011). Positive and negative controls were run in duplicate on each ELISA plate.
Antibody response in all tested samples was represented as a corrected sample-to-positive (S/P) ratio, calculated as follows: S/P ratio = (sample OD − mean OD of negative controls) / (mean OD of positive controls − mean OD of negative controls).
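For readers who want to reproduce the cutoff and S/P calculations, the short sketch below expresses them in Python. It is illustrative only: the function names and OD450 values are invented, and only the two formulas stated above are taken from the text.

```python
from statistics import mean, stdev

def positive_cutoff(negative_ods):
    """Cutoff = mean OD of the negative controls + 3 standard deviations."""
    return mean(negative_ods) + 3 * stdev(negative_ods)

def sp_ratio(sample_od, negative_ods, positive_ods):
    """Corrected S/P ratio: (sample OD - mean negative OD) /
    (mean positive OD - mean negative OD)."""
    neg, pos = mean(negative_ods), mean(positive_ods)
    return (sample_od - neg) / (pos - neg)

# Hypothetical OD450 readings from one plate (n = 4 negative controls)
negatives = [0.051, 0.048, 0.060, 0.055]
positives = [1.320, 1.270]
print(f"cutoff = {positive_cutoff(negatives):.3f}")            # ~0.069 for these values
print(f"S/P of a 0.85 OD sample = {sp_ratio(0.85, negatives, positives):.2f}")
```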
MEM supplemented with 0.5% (w/v) trypsin (MMT) was added to each well and incubated at 37°C for 48 h, then cells were fixed with 4% paraformaldehyde.
For specific detection of PEDV proteins, different rabbit anti-PEDV polyclonal antibodies were used as appropriate, with a mouse anti-PEDV S1 MAb (Cat no: 9191, JBT, Korea) used as a positive control. Secondary Alexa Fluor 488-conjugated goat anti-mouse IgG or goat anti-rabbit IgG (Invitrogen) were used at a 1:1000 dilution, incubated for 1 h at room temperature. Plates were washed three times between antibody incubations with 300 μl/well of PBS-T. Nuclei were stained with 4', 6-diamidino-2-phenylindole (DAPI; KPL, Inc.) at a 1:1000 dilution, and visualized under a fluorescence microscope.
Serum neutralization (SN) test
Sera from challenged piglets were tested for neutralizing antibodies (NA) according to a published protocol with slight modification (Kusannagi et al., 1992). Briefly, serum samples were inactivated at 56°C for 30 min, and 2-fold serial dilutions (1:4 to 1:512) were then prepared in 96-well plates. After mixing 50 μl of each dilution with 50 μl PEDV (10^5 TCID50/ml), samples were incubated for 1 h at 37°C and used to infect monolayers of Vero cells in 96-well plates. After adsorption for 2 h at 37°C, the inoculum was discarded, plates were washed three times with MEM, and maintenance medium (containing 5 μg/ml trypsin) was added to each well. After incubation at 37°C for 48 h, cells were observed under an inverted microscope for CPE such as cell fusion and nuclear atrophy. SN titers were calculated using the Reed and Muench method and expressed as the reciprocal of the highest serum dilution resulting in 50% inhibition of PEDV infection, relative to controls.
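The Reed-Muench interpolation used for these titers is easy to script. The sketch below is a generic 50%-endpoint calculator, not the authors' code; the well counts are invented, and a twofold series starting at 1:4 with four wells per dilution is assumed purely for illustration.

```python
def reed_muench_sn_titer(recip_dilutions, protected, tested, factor=2):
    """50% serum-neutralization endpoint by the Reed-Muench method.
    recip_dilutions: reciprocal dilutions, most concentrated first (4, 8, ..., 512).
    protected: wells without CPE at each dilution; tested: wells inoculated."""
    cpe = [t - p for p, t in zip(protected, tested)]
    n = len(protected)
    # A well protected at a high dilution would also be protected at every lower
    # dilution (and vice versa for CPE), hence the two opposite cumulations.
    cum_prot = [sum(protected[i:]) for i in range(n)]
    cum_cpe = [sum(cpe[:i + 1]) for i in range(n)]
    pct = [100 * p / (p + c) for p, c in zip(cum_prot, cum_cpe)]
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])  # proportionate distance
            return recip_dilutions[i] * factor ** pd
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Invented plate readout: 4 wells per dilution, protection fading around 1:32
dilutions = [4, 8, 16, 32, 64, 128, 256, 512]
protected = [4, 4, 4, 2, 1, 0, 0, 0]
print(f"SN titer ~ 1:{reed_muench_sn_titer(dilutions, protected, [4] * 8):.0f}")
```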
Data analysis
All data were processed using SPSS (version 20.0) software and the GraphPad Prism program as described previously (Gimenez-Lirola et al., 2017;Huang et al., 2012).
The cutoff value and diagnostic performance of each PEDV antigen was determined by receiver operating characteristic (ROC) analysis (SAS Version 9.4, SAS Institute, Inc., Cary, NC, USA) based upon the ELISA results.
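As a rough illustration of how such a cutoff can be derived, the snippet below uses scikit-learn's ROC utilities with Youden's J statistic; this stands in for the SAS workflow cited above, and the S/P values and infection labels are placeholders rather than study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: S/P ratios with known PEDV status (1 = infected, 0 = naive)
sp = np.array([0.05, 0.12, 0.18, 0.22, 0.35, 0.48, 0.61, 0.75, 0.90, 1.20])
status = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(status, sp)
youden_j = tpr - fpr                      # sensitivity + specificity - 1
best_cutoff = thresholds[np.argmax(youden_j)]
print(f"AUC = {roc_auc_score(status, sp):.2f}, suggested S/P cutoff = {best_cutoff:.2f}")
```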
The purified PEDV whole virus shows pleomorphism
The PEDV-ZJU/G2/2013 strain was propagated in Vero cells and purified by ultracentrifugation on a sucrose gradient. Electron microscopy revealed that the purified virus comprised a great number of vesicles with morphological heterogeneity, together with envelope fragments carrying spikes (Fig. 1A). Previously, the virions of several coronaviruses, such as transmissible gastroenteritis virus (TGEV) and turkey and bovine enteric coronaviruses, have been observed with diameters ranging between 60 and 220 nm (Dea and Garzon, 1991). To our knowledge, the pleomorphic property of PEDV virions, comprising not only small or defective particles but also giant spherical particles of up to 350 nm in diameter, is reported here for the first time. Although a few of the PEDV particles had lost some of their spikes, they were relatively intact. As spike glycoproteins are known to be the most immunogenic proteins of coronaviruses, purification by ultracentrifugation through sucrose cushions in this study proved to be reliable. Also, the level of PEDV protein was relatively high as detected by the BCA protein kit, confirming that the quality, integrity and quantity of the purified virions were sufficient for use as the antigen in subsequent ELISA assays.
Characterization of six recombinant PEDV proteins shows consistency with their predicted sizes
Initially, we failed to detect expression of the complete ORF3 by western blot analysis using pET-28a, pSmart or other bacterial expression vectors under different conditions (data not shown). Therefore, the 127-aa C-terminal part of ORF3 (ORF3C), which showed a favorable predicted hydrophilicity profile, was chosen as the target antigen for ORF3.
The Ac, E and ORF3C recombinant peptides were expressed in inclusion bodies, whereas the Nsp1, ADRP and Nsp2 proteins displayed soluble expression in the culture supernatants. Expression yields of Ac, E, and ORF3C were very low and hardly visible when examined by Coomassie blue staining after SDS-PAGE (data not shown), but these three peptides were confirmed at their predicted sizes (Ac: ~23 kDa; E fused with a SUMO tag: ~28 kDa; ORF3C fused with a SUMO tag: ~34 kDa) using an anti-His-tag antibody by western blot (Fig. 1B). On the other hand, SDS-PAGE and western blot analyses of purified Nsp1, ADRP and Nsp2 soluble proteins showed bands that were consistent with the predicted sizes of 13, 18 and 87 kDa, respectively (Fig. 1C). The purified S1 protein expressed in mammalian cells was also identified as a single band by SDS-PAGE (Fig. 1D) and by western blot using an anti-S1 monoclonal antibody as described previously (Gerber et al., 2014). Due to glycosylation of the S1 protein, the size of the band in the gel was larger than its predicted size (~86 kDa).
3.3. Antibodies generated against recombinant PEDV Ac, ORF3C, and Nsp2 proteins resulted in specific fluorescence in vitro

Purified recombinant PEDV peptides (ORF3C, Ac, ADRP, Nsp1, and Nsp2) were used to immunize rabbits, generating polyclonal sera that were used to detect viral proteins in PEDV-infected Vero cells by IFA. The E protein was not used to immunize rabbits in this study. Staining with the anti-Ac polyclonal serum resulted in specific fluorescence at 48 h post-infection (Fig. 2A, B), similar to the signal observed using the anti-S1 MAb as the positive control (Fig. 2G, H). Specific fluorescence was also detected with the anti-ORF3C (Fig. 2C, D) and the anti-Nsp2 polyclonal antibodies (Fig. 2E, F). In contrast, no specific fluorescence was observed when using the anti-ADRP and anti-Nsp1 polyclonal antibodies. IFA with pre-immune rabbit sera displayed no fluorescent signal in Vero cells infected with PEDV (Fig. 2I, J). The viral antigens were all detected in the cytoplasm of the infected cells. The anti-S1 MAb reacted more strongly than the three positive rabbit antisera based on a comparison of the positive cell numbers and fluorescence intensities.
Infection of Vero cells, or Vero cells expressing the entry receptor porcine APN, with other swine enteric coronaviruses, such as swine enteric alphacoronavirus, porcine deltacoronavirus (PDCoV) and TGEV, yielded no detectable fluorescence after IFA with the anti-PEDV polyclonal antibodies described above (data not shown). Therefore, anti-PEDV-Ac, anti-PEDV-ORF3C and anti-PEDV-Nsp2 are PEDV-specific and do not cross-react with these known porcine coronaviruses.
IgG and IgA responses in PEDV-infected weaning piglets varied over time
Previously, we developed and validated indirect ELISAs based on the S1 protein to monitor serum anti-PEDV IgG and serum and fecal anti-PEDV IgA antibodies in postweaning pigs (Gerber et al., 2014; Gerber and Opriessnig, 2015). In this study, in order to determine the pattern of antibody response of weaning piglets over a 17-day period after PEDV infection, serum or fecal samples from experimentally infected 3-day-old piglets were examined by ELISA based on the PEDV WV or the S1 protein, and by serum neutralization test (Fig. 3). The results indicated that IgG and IgA responses against both antigens were detected in serum at different time points after PEDV infection (Fig. 3A and B). Despite challenge with PEDV, levels of serum IgG and IgA in these piglets decreased from 1 dpi, reaching a minimum after 7 dpi as detected by both WV and S1 antigens (Fig. 3A, B), and the pattern or trend of neutralizing antibody (NA) was more similar to that of IgA (Fig. 3C). A good linear relationship between the S1-based IgA ELISA titers and NA titers was observed (Spearman's rank correlation coefficient of 0.98; p < 0.001), demonstrating the correlation between them (Fig. 3D). There were some differences in the sensitivity of the antigens for detecting antibodies, as levels of serum IgA were slightly higher when the S1 protein was used as the detection antigen. Levels of fecal IgA were also highest before challenge (0 dpi) and continuously declined after challenge (Fig. 3E). The specificity and sensitivity of the S1 and WV antigens were similar for serum IgA detection. The high levels of IgG, IgA and NA detected at the early stages in the weaning piglets presumably represent maternal antibodies received from sows that were not PEDV-negative; in addition, piglets during weaning have not yet developed their own immunity to the virus. These results also demonstrated that the S1-based ELISA is an alternative to the WV-based assay and an ideal serological assay for detection of anti-PEDV antibodies.
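The Spearman correlation reported for Fig. 3D is straightforward to recompute with scipy; the paired titers below are made up solely to show the call and are not the study data.

```python
from scipy.stats import spearmanr

# Made-up paired titers standing in for the Fig. 3D data points
iga_elisa_titers = [1600, 800, 400, 400, 200, 100, 100, 50]
neutralizing_titers = [256, 128, 64, 64, 32, 16, 16, 8]

rho, p_value = spearmanr(iga_elisa_titers, neutralizing_titers)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```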
Fig. 2. A mouse anti-S1 monoclonal antibody (G, H) and pre-immunization rabbit serum (I, J) were used as positive and negative controls, respectively. AlexaFluor 488-conjugated goat anti-rabbit IgG and goat anti-mouse IgG (green) were used as secondary antibodies, as appropriate. Antibody staining merged with DAPI nuclear staining (blue) is shown; magnification = 20× (for interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article).

3.5. Antibodies against specific PEDV peptides were detectable in sera from naturally infected pigs

Recombinant PEDV S1, ORF3C, E, Ac and Nsp2 peptides were used as antigens in western blots to detect antibodies in porcine sera from commercial farms, since they had produced rabbit antibodies that reacted with PEDV antigens in infected cells (Fig. 2), except for the E protein, which was not tested. The criterion for determining the seropositivity of a sample to a particular antigen was whether the expected protein band was present on the membrane. On this basis, serum western blot analysis with the anti-Nsp2 antibody showed high-background results, and thus further investigation of Nsp2 was not performed.
A main specific band appeared for each of the S1, Ac, E and ORF3C protein antigens tested (arrows indicate representative samples in Fig. 4), though additional "fuzzy" bands were present at low or high molecular weights, likely from nonspecific reactivity of serum components. Compared with the other proteins, the S1 band appeared to be more specific, with a cleaner background (Fig. 4A). PEDV-positive sera were screened and verified via this method and were used for comparison with the ELISA results (Fig. 5; see below), whereas negative sera from the S1-based ELISA were also analyzed as negative controls, and none of them showed reactivity (Fig. 4, second lanes in all panels). However, in the case of Ac, the specific 23-kDa band also appeared in some sera that were negative by ELISA (data not shown).
Sera from diarrheic pigs of different ages had variable reactivity with PEDV antigens
Furthermore, the specific IgG antibody responses in sera from a total of 851 diarrheic pigs from farms in China were analyzed using ELISA assays based on the PEDV WV, S1, ORF3C or E proteins (Fig. 5A-D). The results indicated good correlations among these antigens. Overall, the levels of antibodies against PEDV WV and the three protein antigens were generally higher in primiparous and multiparous sows than in 3- to 5-week-old pigs (p < 0.01). The trend of the serum IgG response to each PEDV antigen from 1-week-old to 3-week-old pigs was similar to that of the experimentally infected weaning piglets (Fig. 3A). Antibodies were highest in the 1-week-old piglets (p < 0.01), except for the WV antigen, and then declined significantly to a minimum by 5 weeks of age (p < 0.01) before increasing again to a relatively stable level from 7 to 20 weeks of age (Fig. 5B,C; p < 0.01). This pattern was in agreement with a recent report on the prevalence of PEDV antibodies in swine farms (Bertasio et al., 2016) and might reflect that newborn piglets receive maternal IgG antibodies from sows, but that the level of these antibodies declines quickly, so that weaned piglets must begin to develop their own immunity to PEDV.
However, there were some differences in reactivity to the antigens used in the ELISAs. IgG detected by WV-based ELISA had a similar pattern to that detected by the other antigens, as the antibodies were in a relatively dispersed distribution, although its cutoff value (0.281) was closest to the ORF3C ELISA (0.224). Antibody levels against the PEDV E protein were higher with a relatively concentrated distribution, especially in pigs older than 7 weeks. However, the E protein-based ELISA had the highest cutoff value (0.511), and the distribution of antibodies from 1 to 20 weeks-of-age was obviously different from the S1-based ELISA, which had the lowest cutoff (0.184). Collectively, the data indicated that the PEDV S1 protein was more specific and accurate as a detection antigen in ELISA tests.
Discussion
With the introduction of PEDV into the North American herd in 2013-2014 (Tian et al., 2014), the need for a suitable diagnostic marker for the accurate, rapid, and early diagnosis of PEDV infection has become much more urgent. PEDV herd-infection status is very important for biosecurity and the control of PED. Compared to RT-PCR and other nucleic acid detection assays, serological tests are advantageous for studying immune responses related to vaccination and wild-type virus infection, and for determining whether sow immunity is adequate in individual litters after PEDV exposure (Diel et al., 2016). PEDV infection is not always obvious in finishing pigs, which increases the risk of widespread disease in pigs of all ages (Bertasio et al., 2016). Thus, sensitive serological tests would allow detection of recent infections, to avoid the introduction of these animals into naïve herds. However, there are many different structural and non-structural proteins to choose from when selecting an antigen to use in novel diagnostic assays. Previously, we developed and validated an indirect ELISA based on the S1 protein to detect anti-PEDV IgG and IgA antibodies in postweaning pigs (Gerber et al., 2014). The present study was set up to investigate the diagnostic potential of specific PEDV accessory and nonstructural proteins, which has not yet been reported systematically.

Fig. 4. Detection of antibodies against PEDV peptides in porcine sera by western blot. Representative serum samples from diarrheic pigs were used in western blots to detect PEDV S1 (A), Ac (B), ORF3C (C), and E proteins (D); 5 μg of recombinant peptides were loaded in each lane. HRP-conjugated goat anti-porcine IgG was used as the secondary antibody, and PEDV-negative serum was used as negative control.

Fig. 5. Distribution of cumulative IgG ELISA sample-to-positive (S/P) ratios in serum samples collected from diarrheic pigs at commercial pig farms. Various indirect ELISA assays based on PEDV whole virus (WV) (A), S1 (B), ORF3C (C), and E proteins (D) were used to test serum samples from commercial sows and pigs with diarrhea at different ages with unknown PEDV infectious status. Sera from naïve piglets were used as negative controls; samples above the determined S/P cutoff (dashed line) were considered positive. One-way analysis of variance (ANOVA) was used for multiple comparisons between different ages among the individual antigens with an alpha value of 0.01. The "ns" denotes no statistical differences.
The sera and feces from the experimentally infected piglets at weaning were first used for detection of antibodies using the PEDV WV and S1 protein-based indirect ELISAs. The pattern of change in anti-PEDV IgG/IgA in serum samples and in anti-PEDV IgA in fecal samples (an immediate decline post-infection) indicates that the piglets had obtained maternal antibodies at birth and did not produce antibodies, even after PEDV challenge, until their immune system matured. Additionally, levels of NA were more closely correlated with IgA than with IgG (Fig. 3B-D), as seen in other studies (Paudel et al., 2014). Abundant anti-PEDV NA have been demonstrated in colostrum on the day of birth, decreasing rapidly in milk by day 3 and gradually declining further from days 3-19 post-farrowing, which may contribute to variable protective capacity. The current study showed a similar decline, further confirming the reliability of the S1-based ELISA assays as applicable to weaning pigs in addition to postweaning pigs (Gerber et al., 2014; Gerber and Opriessnig, 2015), and also highlighting the importance of accurate diagnosis within a short window for proper immunization of sows and piglets.
Compared to the PEDV WV, S1-based assays showed good reactivity and high sensitivity/specificity (Figs. 3, 4A and 5B). It is worth noting that the recombinant S1 protein was expressed in a eukaryotic expression system and should display a natural conformation with high glycosylation, as shown in Fig. 1D, which may be one reason for its higher detection sensitivity. On the other hand, WV is mainly purified by sucrose density gradient centrifugation, differential centrifugation or polyethylene glycol (PEG) precipitation (Hoffmann and Robert, 1990). These methods can damage the integrity of the virus, especially the surface spike glycoprotein. Therefore, the eukaryotic-expressed S1 protein is likely more advantageous than the WV as the antigen for PEDV serological assays. As for the other two major structural proteins, M and N, several studies have previously demonstrated that PEDV M or N presents some cross-reactivity with TGEV or PDCoV, due to common epitopes shared by PEDV, TGEV and PDCoV (Gimenez-Lirola et al., 2017; Lin et al., 2015; Ma et al., 2016). In contrast, recombinant PEDV-S1 had no cross-reactivity with sera from these porcine coronaviruses, showing the best diagnostic sensitivity (Gimenez-Lirola et al., 2017). Therefore, we did not pursue the development of serological assays based on the M or N proteins in this study.
The accessory ORF3 protein is thought to have high potassium channel activity and may be associated with the virulence of PEDV (Wang et al., 2012). The small structural E protein has important roles in the assembly of coronavirus virions, virus egress and the host stress response (Ruch and Machamer, 2012). Besides the structural proteins, PEDV has several non-structural proteins (Nsp1, Nsp2 and Nsp3, among others) that are expressed in the early stage of virus infection and have important functions in the viral replication cycle. The coronavirus Nsp3 is a conserved component of the viral protein processing machinery and may be incorporated into the virion via its intimate association with viral RNA (Neuman et al., 2008). Nsp3 is the largest replicase subunit, consisting of numerous distinct structural domains separated by disordered linkers. Some of these, such as the Ac and ADRP (macrodomain), are well conserved across all genera of coronaviruses, though there have been no reports of serological assays based on the PEDV Ac and ADRP domains. Considering their potential use in the study of the host immune response, these proteins were specifically included in the current study.
Recombinant ORF3C, E, Ac, ADRP, Nsp1 and Nsp2 peptides were expressed and purified (Fig. 1); however, only ORF3C, Ac and Nsp2 produced functional rabbit antibodies recognizing PEDV antigens in infected cells (Fig. 2; an anti-E serum was not generated or tested). Subsequently, they were used individually as detection antigens in western blots and/or indirect ELISAs to detect anti-PEDV IgG antibodies in sera from diarrheic pigs (Fig. 5). The PEDV WV and the recombinant S1 expressed from mammalian cells were used for comparison. The S1, ORF3C, and E peptides each reacted strongly with the sera, reflecting the expected distributions of PEDV-specific antibodies. The reactivity of the Ac, ADRP, Nsp1 and Nsp2 peptides was less pronounced (data not shown), and they were therefore discarded from consideration as novel diagnostic antigens. The generally high level of IgG against the S1, ORF3C, and E proteins in older pigs was consistent with recent reports that anti-PEDV IgG in infected pigs persisted for more than 17 weeks after the onset of diarrhea symptoms (Lin et al., 2016). All antibody levels declined to their lowest point at 3 weeks of age, which was consistent with the result from experimentally infected piglets at weaning (Fig. 3). Western blot was used to confirm the ELISAs, with additional "fuzzy" bands appearing when ORF3C, Ac and E were used as detection antigens, indicating non-specific recognition (Fig. 4). This may be related to differences in the antigens used and/or the sensitivity of the assays for detecting anti-PEDV antibodies.
The ORF3C antigen has moderate immune reactivity, as evidenced by staining of anti-ORF3C in PEDV-infected cells displaying specific fluorescence in IFA (Fig. 2C and D), by serum western blot (Fig. 4D), and by indirect ELISA (Fig. 5C). In future studies, we plan to employ PEDV mutants with the ORF3 deletion generated by reverse genetics (Zhao et al., 2017), in comparison with wild-type PEDV, for a more detailed evaluation of the protein for use as a marker in diagnostic assays. The E protein showed intermediate sensitivity in western blot and extremely high sensitivity in ELISA assays compared with the other antigens (Fig. 4C and 5D). The ELISA sensitivity was too high, as nearly all of the sera from sows and 0- to 1-week-old piglets were strongly positive, and it therefore could not properly reflect the trend of PEDV-specific antibodies. To our knowledge, this is the first report of the use of an ORF3- or E-based ELISA on such a large scale.
Of the non-structural proteins, Ac displayed strong immune reactivity ( Figs. 2A and 4 B), suggesting that it may be released into circulation or is picked up by antigen-presenting cells (Hurst-Hess et al., 2015). Therefore, the study of anti-Ac antibodies may contribute to a better understanding of the detailed function of Nsp3. Our results also complement previous mass spec identification of Nsp3 within purified virions (Neuman et al., 2008).
In summary, this study is the first to dissect the range of antibody responses against PEDV during infection, using different assays (ELISA, western blot, SN) to comprehensively analyze PEDV antibodies in porcine sera from China. The results confirmed the high PEDV prevalence in China (Sun et al., 2016). The antibody profiles provided by the study offer more reliable information on the host immune response to different viral proteins and will be useful for the design of vaccines that better stimulate protective immunity. Above all, our data indicate that, besides S1, the recombinant ORF3C and E proteins can also be used as diagnostic markers, although S1 offers greater sensitivity for a wide range of PEDV-specific antibodies.
Declaration of Competing Interest
The authors declare no conflict of interest. | 2019-09-09T21:21:48.054Z | 2019-08-10T00:00:00.000 | {
"year": 2019,
"sha1": "1d235b2f0c41f51e8354f37561f0e1d8cfaaab7d",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.vetmic.2019.108387",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f32ca6c4a2d33dd2c24391fa046c344031be4e9d",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
251639581 | pes2o/s2orc | v3-fos-license | A Case Report of Rapidly Necrotizing Fasciitis Post-Falling Down Treated Reconstructively
Necrotizing fasciitis (NF) is a necrotizing soft tissue infection that can result in fast tissue loss, necrosis, and potentially fatal acute sepsis. Diabetes, cancer, alcohol abuse, and chronic liver and renal disease are all risk factors for NF. In this case report, a 19-year-old man with a negative past medical and surgical history was diagnosed with aggressive, rapidly progressive necrotizing fasciitis of the left lower extremity after a recent history of falling from a skateboard. Successful treatment with repeated debridement surgeries followed by reconstructive surgery with skin grafting was achieved. Despite the severity of this condition, the patient was able to resume a normal range of motion of the concerned extremity. NF has been described in the literature, but early diagnosis, which is crucial for successful management, remains a challenge.
Introduction
Necrotizing fasciitis (NF) is a life-threatening soft tissue illness that was originally characterized by Hippocrates in the fifth century BC. The etiology has been known for ages, and Joseph Jones, a former Confederate Army surgeon, coined the term "necrotizing fasciitis" in 1871 [1].
NF is a rare bacterial inflammation (infection) that can destroy the skin and underlying tissues (connective tissue, subcutaneous fat, muscles, and muscle membranes). The disease can be very dramatic, with shock and damage to internal organs. Left untreated, the disease can lead to death. Usually, necrotizing fasciitis develops very quickly and can become life-threatening within hours or days. Immediate treatment and hospitalization are urgent, and usually, the inflamed tissue must be surgically removed [2].
There are two types of NF: type I is polymicrobial, whereas type II is monomicrobial. Group A streptococcus (type II NF) is the most common cause of NF, and it can lead to streptococcal toxic shock syndrome (STSS), which is characterized by shock and multiple organ failure caused by a toxin produced by group A streptococcus. NF and STSS are occasionally seen together [in 40% of NF patients and 6% of other individuals (p < 0.001)] [3]. Differentiating NF from other soft tissue infections is notoriously difficult, but it is critical, since NF is a medical emergency that requires immediate and intensive surgical debridement. As a result, this condition puts physicians' diagnostic abilities and surgical tenacity to the test.
Group A streptococci live on the skin and throat of many people without causing harm, but can sometimes cause mild and, exceptionally, severe infections. Necrotizing fasciitis is one of the most serious infections caused by group A streptococci because the bacterium creates large amounts of toxins in the body. A combination of different bacteria is often the cause. Inflammation can spread from superficial as well as internal tissue damage. The pathogens can get into the soft tissue through small cuts, scratches, other wounds, or burns [4]. The bacteria can also come with the blood from other parts of the body and colonize the affected area. It is often not possible to find out how the bacterium got into the body. As a rule, the body's immune system succeeds in killing bacteria that enter the body. That is why it is often older people or people with a weakened immune system who are affected by the disease. However, necrotizing fasciitis can also occur in young, healthy people [5].
Symptoms of necrotizing fasciitis can appear very quickly, within a day, after a cut or other wound in the skin. The first and typical symptom of the disease is the rapid onset of severe pain in the infected area. Affected patients develop fever and chills, and the painful area may be red, slightly swollen, warm, and with overlying blisters. As necrotizing fasciitis progresses, the inflamed area may turn black and blue, and it can be accompanied by shock due to low blood pressure. This leads to impaired consciousness, confusion, difficulty concentrating, cold sweats, and dizziness, or to streptococcal toxic shock syndrome (STSS) [6]. If patients are not treated quickly enough, life-threatening internal organ damage can develop.
A previously healthy 19-year-old Caucasian male was transferred to Lebanese Hospital Geitaoui for consultation and further reconstructive interventions, after being diagnosed with rapidly progressive necrotizing fasciitis of the left lower extremity in another local hospital.
Prior to the transfer
The patient had a recent history of falling from a skateboard with resultant abrasions; an erythematous, violaceous wound with blisters developed on his left lower extremity within 24 hours of the accident. After a few days, he presented to the hospital with fever, hypotension (systolic blood pressure: 50 mmHg), tachycardia (heart rate: 110 beats/minute), tachypnea, and desaturation. On physical exam, the patient's left lower extremity was swollen and indurated, with patches of skin necrosis and crepitus. Significant abnormal lab results were documented (Table 1). Immediately after presentation, resuscitation was performed, and the patient was admitted to the intensive care unit as a case of septic shock resulting from rapidly progressive necrotizing fasciitis of the left lower extremity. Broad-spectrum antibiotics (meropenem and vancomycin) were started following the administration of a single dose of amikacin and ceftriaxone. Decompression fasciotomy of the leg compartments was done one day after admission, followed by surgical debridement the day after. Blood cultures were negative, and tissue cultures showed Streptococcus pyogenes. Antimicrobial de-escalation was performed accordingly: the patient was switched to tigecycline. The clinical results after fasciotomy and debridement are shown in Figures 1 and 2, respectively. The patient subsequently became hemodynamically stable, with resolution of the acute kidney injury (creatinine: 0.7 mg/dl) and improvement of inflammatory markers (WBC: 10,000/microliter).
After transfer to Lebanese Hospital Geitaoui
On admission to the hospital, two weeks after the accident, the patient was hemodynamically stable with normal labs. Empiric antibiotics, piperacillin/tazobactam (Zocyn®, Taiho Pharmaceuticals, Japan) and tigecycline (Tygacil®), were administered. The patient was maintained on a high-protein, high-calorie diet. Serial debridements (a total of three sessions), after obtaining written informed consent, were performed over one week, with a 24-hour break between sessions. Each session took approximately two hours. Adequate removal of all necrotic and nonviable tissue was achieved, and no progression of necrosis was seen after the third session. Tissue cultures showed the presence of Morganella morganii and Enterobacter cloacae; therefore, antibiotic administration was adjusted accordingly: vancomycin was added. Dressings with silver sulfadiazine (Flamazine®, Smith & Nephew Pharmaceuticals Ltd. Origin, UK) and Jelonet paraffin gauze (Smith & Nephew Pharmaceuticals Ltd. Origin, UK) were done three times per week. The results after adequate debridement were satisfactory (Figure 3). Two weeks after the final debridement, there was good formation of granulation tissue, and tissue cultures became negative. At this point, three sessions (with a 48-hour break between each session) of split-thickness skin grafting were performed to cover the large defect. The aim of dividing the grafting procedure into multiple sessions was to limit blood loss from the donor site. The grafts were harvested from the right thigh with a dermatome and were of medium thickness (0.013 inches). A thick (0.019 inch) graft was also taken to cover the knee joint. Dressings on the skin grafts were done with Betadine cream and Jelonet.
FIGURE 3: Clinical aspect of the concerned extremity after the last session of debridement.
Two weeks later, complete healing of the skin grafts had occurred (Figure 4), and the patient was discharged home after two months of hospitalization. Physiotherapy sessions were advocated, and compression garments with silicone were prescribed to reduce scarring. Six months later, a final follow-up was performed: the patient had a full range of motion in the concerned extremity with grade 4 muscle strength. The patient was satisfied and able to completely resume a normal quality of life.
Discussion
An estimated 500 to 1,000 instances of NF are identified each year in the United States, according to the US Centers for Disease Control and Prevention (CDC) [3]. However, because there are so many synonyms for this condition, it is impossible to know how accurate this estimate is. According to reports, the yearly rate of NF is 0.40 cases per 100,000 people [7], and this rate has recently been rising at an exponential rate [8]. NF is caused by infection, and predisposing factors include medications, hypersensitivity, vascular issues, burns, insect bites, needle stick injuries, and trauma [9]. In individuals with immunosuppression, diabetes, cancer, drug addiction, or chronic renal disease, NF can cause severe sepsis [10]. According to several studies, intravenous drug use is a major risk factor for NF [11]. NF is more prevalent in men and in the winter, although cases of NF caused by Vibrio vulnificus are more common in the summer [12]. It may affect anybody at any age; however, the risk increases with age. About half of the patients have had a skin abrasion, a quarter have had physical trauma, and seventy percent have one or more chronic conditions. A single lower limb accounts for half of the instances, whereas a single upper limb accounts for one-third [8].
Tenderness, edema, erythema, and discomfort at the afflicted location, which mirror non-severe soft tissue infections (NSTIs) such as cellulitis and erysipelas, make NF difficult to identify in the early stages [9]. The cardinal NF symptom is intense pain at the outset that is out of proportion to the physical findings [8,12,13]. Vibrio and Aeromonas are well-known aquatic pathogens that induce NF in people with chronic diseases, particularly of the liver, with a high death rate. The key to narrowing down the possible organism is obtaining a thorough history of saltwater exposure or fish stings in patients with liver or spleen disease [14].
Fever (>38°C) is seldom present (44%), although tachycardia (>100 beats/min) is common (59%), and hypotension (<100 mm Hg) (21%) and tachypnea (>20/min) (26%) are occasionally present. These three aberrant vital signs point to NF rather than NSTI [15]. Although NF can affect any part of the body, it is most frequent in the extremities (36-55%), trunk (18-64%), and perineum (up to 36%) [16]. Erythema (80%), induration (66%), soreness (54%), fluctuation (35%), skin necrosis (23%), and bullae (11%) are all seen in infected areas [15]. When comparing NF to NSTI, the positive likelihood ratio for the presence of bullae is 3.5. In another study [17], NF patients had more tense edema (23% vs 3%, p=0.0002), purple skin coloring (10% vs 1%, p=0.02), and sensory or motor loss (13% vs 3%, p=0.03) than NSTI patients. Skin necrosis was found in 6% of NF patients compared to 2% of NSTI patients. The first physical signs of NF are often erythematous and ecchymotic skin lesions, but these can quickly progress to hemorrhagic bullae, which signal the obstruction of deep blood vessels in the fascia or muscle compartments; consequently, the presence of bullae is a critical diagnostic clue. Ludwig's angina (submandibular space) and Fournier's gangrene (scrotum and penis or vulva) are examples of NF variants that affect particular parts of the body and can have a rapid onset and severe clinical course. There are currently no laboratory parameters specific to NF. A so-called Laboratory Risk Indicator for Necrotizing Fasciitis (LRINEC) score has been proposed to stratify the risk of NF [18]. Patients with an LRINEC score of six or above need a detailed examination to rule out necrotizing fasciitis [19]. In our case described above, the LRINEC score at admission was 10 (CRP 15 mg/L, white cell count 18,000 per microliter, hemoglobin 12.9 g/dL, sodium 132 mEq/L, creatinine 3.87 mg/dL, glucose 90 mg/dL), which is highly indicative of NF.
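Because the LRINEC score is easy to mis-tally at the bedside, a small scoring sketch is given below. The cut points follow the commonly tabulated version of the score (CRP in mg/L, WBC in 10^3/µL) and should be verified against the original publication [18]; note that the computed total is sensitive to the unit assumed for CRP.

```python
def lrinec(crp_mg_l, wbc_k_ul, hb_g_dl, na_mmol_l, creat_mg_dl, glucose_mg_dl):
    """LRINEC score using the commonly tabulated cut points (verify against [18])."""
    score = 4 if crp_mg_l >= 150 else 0
    score += 2 if wbc_k_ul > 25 else (1 if wbc_k_ul >= 15 else 0)
    score += 2 if hb_g_dl < 11 else (1 if hb_g_dl <= 13.5 else 0)
    score += 2 if na_mmol_l < 135 else 0
    score += 2 if creat_mg_dl > 1.6 else 0
    score += 1 if glucose_mg_dl > 180 else 0
    return score  # a score >= 6 warrants careful evaluation for NF

# Admission values quoted above; with CRP read as 15 mg/dL (i.e. 150 mg/L)
# these thresholds reproduce the reported total of 10.
print(lrinec(150, 18, 12.9, 132, 3.87, 90))
```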
Surgical debridement is the cornerstone of NF therapy, and early debridement results in much lower mortality than situations in which surgery is postponed for even a few hours [16]. Patients with NF should be sent to the operating theatre as soon as feasible for a "search and destroy" mission of vigorous and comprehensive debridement. Infected tissues should be carefully resected until there is no further sign of infection. The most essential factor in determining survival is the initial operation, and the wound must be thoroughly examined following the initial debridement. If further debridement is required, the patient must be taken back to the operating room as soon as possible. After the first debridement, a "second-look" procedure is usually performed 12 to 24 hours later. Patients with NF may need anything from five to forty surgeries, according to one study, with an average of 33 debridements and grafting procedures required [7]. It is preferable to resect the tissues with acceptable margins rather than removing only the actively infected or necrotic tissue, since this reduces the risk of recurrence from the remaining infected tissue. After debridement, skin grafts are applied as a second-stage technique once a clean, granulating recipient bed has developed [20]. In light of our experience, the concept of radical debridement with immediate skin grafting may be usefully implemented in the treatment of necrotizing fasciitis.
Conclusions
NF is a rare bacterial infection that can destroy the skin and underlying tissues (connective tissue, subcutaneous fat, muscles, and muscle membranes). The disease can be very dramatic, with shock and damage to internal organs. In this case report, serial debridement with adequate antibiotic therapy followed by reconstructive surgery was performed. An early diagnosis remains challenging and is crucial for salvaging the affected area.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
"year": 2022,
"sha1": "cc5277b087e0cb8dfb48daa1cedbed471afa1936",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/107946-a-case-report-of-rapidly-necrotizing-fasciitis-post-falling-down-treated-reconstructively.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3abe64c8507687e85fdc07c707a1b13f04c2369f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Removal of Nutrients from Water Using Biosurfactant Micellar-Enhanced Ultrafiltration
The removal of NH4+, NO3−, and PO43− from wastewater can be difficult and expensive. Through physical, chemical, and biological processes, metals and nutrients can be extracted from wastewater. Very few scientific investigations have employed surfactants with high biodegradability, low toxicity, and suitability for ion removal from wastewater at different pH and salinity levels. This research employed a highly biodegradable biosurfactant generated from yeast (sophorolipid) through micellar-enhanced ultrafiltration (MEUF). MEUF improves nutrient removal efficiency and reduces costs by using less pressure than reverse osmosis (RO) and nanofiltration (NF). The biosurfactant can be recovered after the removal of nutrient- and ion-containing micelles from the filtration membrane. During the experiment, numerous variables, including temperature, pH, biosurfactant concentration, and pollutant ions, were evaluated. The highest amount of PO43− was eliminated at a pH of 6.0, which was reported at 94.9%. Maximum NO3− removal occurred at 45.0 °C (96.9%), while maximum NH4+ removal occurred at 25.0 mg/L (94.5%). Increasing TMP to 200 kPa produced the maximum membrane flow of 226 L/h/m2. The concentrations of the contaminating ion and sophorolipid were insignificant in the permeate, demonstrating the high potential of this approach.
Introduction
Globally, nutrient contamination is one of the most significant threats to aquatic ecosystems [1,2]. There is uncertainty at every stage of the process, from the generation of pollutants to their final ecological and economic effects. Excessive nutrient loading endangers aquatic ecosystems by changing aquatic biodiversity and biogeochemical processes [2,3]. The bioaccumulation caused by organic inputs and agricultural runoff threatens the world's freshwater streams [4]. Despite the implementation of strict environmental laws to address human impacts on aquatic species, the effects of nutrient loading on the functioning of stream ecosystems remain unknown [1][2][3].
It has proven difficult to remove different nutrients (such as phosphorus, nitrate, and ammonium) from wastewater [4,5]. Nitrogen and phosphorus are the two most important nutrients to take into consideration when talking about the discharges of treated wastewater [3,4]. They persist in streams that have been biologically treated, demanding additional sophisticated treatment. It has been demonstrated that the release of nitrogen and phosphorus accelerates lake eutrophication and increases algal bloom and freshwater habitats rooted in shallow streams [5,6]. The use of water containing algae and aquatic plants for drinking water, fish culture, or recreation produces various difficulties, including dissolved oxygen depletion in water bodies, toxicity to aquatic life, and a decrease in the efficiency of chlorine disinfection. Moreover, nitrates, the nitrogen byproducts of nitrification, are notorious for their lethal effects on infants. Septic systems, animal feedlots, agricultural fertilizers, manure, industrial wastewater, sanitary landfills, and rubbish dumps are all common causes of excess nitrate reaching lakes and streams [4][5][6]. Micellar-enhanced ultrafiltration (MEUF) addresses this problem by adding a surfactant above its critical micelle concentration (CMC) so that dissolved pollutants associate with the resulting micelles. The dissolution of metal ions and organic molecules in micelles can be attributed to the action of electrostatic or Van der Waals forces. After this, the micelle solution is passed over an ultrafiltration membrane that has a suitable molecular weight cut-off (MWCO) size. As a result, the micelles that are associated with soluble pollutants can be removed by the ultrafiltration membrane [1,4,14]. In general, the retention coefficient of the pollutant being eliminated rises when the surfactant concentration in MEUF rises above the CMC [1,2,14]. Owing to these characteristics, MEUF has also been applied to the removal of heavy metals [1,2,7,16].
MEUF offers numerous advantages, such as low operating costs, high removal efficiency, and a large permeate volume flux. This method combines the high flux of ultrafiltration with the superior selectivity of reverse osmosis. MEUF is used to remove both anions and cations because of these properties. However, one of the system's major shortcomings is the decrease in permeate flux caused by various experimental conditions, including membrane fouling [1,[15][16][17][18][19][20], which can be minimized by regular membrane cleaning and maintenance. The objectives of this research are to evaluate the effectiveness of MEUF with a sophorolipid biosurfactant for removing nutrients of various forms of nitrogen (ammonium and nitrate) and phosphate from water [1,17,18].
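The removal performance discussed throughout this work is usually quantified through an observed rejection (retention) coefficient. The short sketch below uses the standard definition, which we assume is consistent with the paper's Equation (1) introduced later; the feed and permeate concentrations shown are hypothetical.

```python
def rejection(c_feed_mg_l: float, c_permeate_mg_l: float) -> float:
    """Observed rejection (retention) coefficient: R = 1 - Cp/Cf."""
    return 1.0 - c_permeate_mg_l / c_feed_mg_l

# Hypothetical example: 100 mg/L nitrate in the feed, 5 mg/L in the permeate
print(f"R = {rejection(100.0, 5.0):.2f}")  # -> R = 0.95, i.e. 95% removal
```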
CMC Determination
The critical micelle concentration (CMC) of the sophorolipid used in this study was determined to be 30 mg/L or 0.003% of the sophorolipid. By determining the CMC of the biosurfactants, the minimum concentration at which micelles will form and the lowest concentration at which biosurfactant solutions will operate optimally are calculated. In addition, by measuring the CMC of the experiment's effluent, the biosurfactant concentration in the effluent may be calculated, as well as the relationship between biosurfactant adsorption to the media and biosurfactant concentration. There is a substantial variance in the CMC values of several types of biosurfactants. The lower the CMC value, the less biosurfactant will be required [1,2,[17][18][19][20]. The cost of biosurfactants constitutes a considerable component of the total cost of remediation, proportional to the amount of biosurfactant employed. Therefore, the ideal characteristic of a biosurfactant is a low critical micelle concentration (CMC) [18][19][20][21][22]. At that concentration, the measured surface tension was 44 mN/m. Finding the intersection of two tangent lines was used to establish this CMC [20,21].
Effect of pH on Removal Rate
The proposed pH range for this experiment was between 6.0 and 10.0. At a lower pH, the rate of nutrient ion elimination is greater. The pH range was chosen based on the pH of the sophorolipid and other chemical characteristics of the pollutant anions and cation. The sophorolipids used in the study are acidic, with a pH of 4.5, and are sensitive to pH, temperature, and electric fields, which may alter their performance [1,19,22,23]. By adjusting the pH, several sophorolipid aggregate states may be produced. At neutral pH values, sophorolipids exhibit strong emulsifying activity. In pure water, when the pH decreases below 6.0, the emulsion's stability increases. However, in the presence of ions, considering the pH of the sophorolipid and the pH of the employed ions, the optimal pH range was determined to be between 6.0 and 10.0. Going below 6.0 in the presence of ions would cause gel formation on the membrane surface, causing more fouling, while above 10.0 the emulsification stability of SL drops to a level that significantly reduces system efficiency. As seen in Figure 2A, at lower pH levels (6.0-8.0), nitrate, phosphate, and ammonium removal rates were desirable, with values between 70% and 96%. The removal rate varies based on the ions and the overall pH of the solution. In the solution, there are two anions and one cation [1,2,[23][24][25][26]. Thus, the impact of pH on the removal rate can be explained by anion exchange and reduction of the anions in the solution (NO3−, PO43−) [24][25][26].
Effect of Temperature on Removal Rate
Based on the findings seen in Figure 2B, the removal rates of nitrate, phosphate, and ammonium improve with increasing temperature due to an increase in membrane flux [9][10][11][12][13][14]27] induced by the membrane material's thermal expansion and decreased solution viscosity [27]. Since the CMC of the surfactant fluctuates with temperature, the temperature is the most important parameter for MEUF [28][29][30][31][32][33][34]. The effect of temperature on the process was studied in this experiment in the range of 25.0 °C to 45.0 °C. Other parameters were constant. A noticeable decrease in the flux was observed when lowering the temperature below 25.0 °C, caused by higher viscosity of the SL solutions, and above 45.0 °C, due to greater concentration polarization. This was also the case for other surfactants, e.g., CTAB, CPC, and rhamnolipid [1,[35][36][37][38].
Effect of Concentration on Removal Rate
In Figure 2C, the removal rate at various concentrations (25.0, 50.0, 100.0, 200.0 mg/L) of initial nitrate (NO3−), phosphate (PO43−), and ammonium (NH4+) was evaluated. This range of ion concentrations reflects values that are 100-250 times higher than the proportion of actual polluting ions found in Montreal's lake water. For this research study, ions with a small diameter for ultrafiltration were chosen based on pore size. Using a lower concentration would not have been useful for understanding the micelle formation with SL and the influence of other parameters (pH, temperature, permeate flow, ion concentration, transmembrane pressure) owing to ions lost before micellization occurs. The presence of untreated ions in permeate solutions becomes negligible when a concentration 100-250 times higher is used. Figure 2C depicts the effect of starting concentration on the elimination rate. With an increase in the initial concentration, the removal rate decreases substantially [1]. Figure 2D illustrates the relationship between the removal rate and the changing sophorolipid (SL) concentration in the solution. In this experiment, the SL concentration was examined at 0.10%, 0.20%, 0.30%, 0.40%, 0.50%, 1.00%, and 2.00% to determine the nitrate, phosphate, and ammonium ion removal rate. The greatest NH4+ removal rate was 88.8% at 0.40% SL concentration, while the lowest rate was 58.7% at 0.10% SL concentration [1]. Initially, the experiment was carried out between the CMC (0.003%) and 10 × CMC of SL (0.03%), and the result was unsatisfactory, with a removal rate below 40%. After increasing the SL concentration from 33 × CMC (0.1%) to 133 × CMC (0.4%), the results gave better removal rates, ranging between 60% and 96%, due to a greater surface of attachment for the ions. In other studies, it was shown that the removal of pollutant ions and metal ions was higher when the ion-to-surfactant ratio was between 1:1 and 1:1.5, and in this case the ion-to-biosurfactant ratio was 1:1.4 (wt/wt) [1,2,4,6,7].
This indicates that the rate of nutrient ion elimination is related to the SL concentration: raising the SL content from 0.025% to 0.10% facilitates nutrient clearance. When the concentration of SL in the feed solution increases, so does the concentration of micelles, which enhances the binding of nutrient ions [1,[33][34][35]39]. Table 1 summarizes the maximum and minimum values of the removal of the studied ions as affected by pH, temperature, ion, and sophorolipid concentrations. Typical standard deviations on values were between 0.2% and 2.0% for pH, 0.5% and 2.0% for temperatures and transmembrane pressures, 1.0% for ion and SL concentrations, 1.0% and 3.0% for permeate fluxes, and, finally, between 1.0% and 5.0% for removal rates.
Results indicated that 98.9% of NO3− was removed at a concentration of 0.40% SL, with the lowest removal rate occurring at a concentration of 2.00% SL. The maximum clearance rate was reported at a pollutant-ion-to-sophorolipid ratio of 1:1.4 (wt/wt); this proportion remained constant throughout the tests. The better removal rate at a higher concentration may be explained by the increased ratio of biosurfactant to pollutant ion, which increases the availability of the biosurfactant for the attachment of the pollutant ion. Beyond this point, fouling of the membrane began due to the increased viscosity [1,35,36]. Above a certain SL concentration, fouling and the increased viscosity of the solution reduced the removal rate [35,36,[40][41][42]; in this instance, the clearance rate was reduced when the concentration exceeded 0.40%. The surface tension of the feed solution and the permeate was measured with a Tensiomat tensiometer before and after each test to verify the sophorolipid (SL) content of the solution. The surface tension of the feed solution with the addition of SL was reduced to 41 mN/m, which is below the value at the CMC; therefore, micelles were formed, as the concentration was above the CMC [36,37,39]. The surface tension of the permeate solution increased to 65 mN/m after filtration, which was slightly lower than the surface tension of pure water (72 mN/m). As the SL in the micelles was rejected by the membrane during filtration, only a few monomers passed through the membrane [37,38].
To verify the relationship between the independent variables of pH, temperature, ion, and SL concentrations and the dependent removal rates, variances were calculated and compared within each group. The variance in each group was in the range of 55 to 255%. Nitrate ions were referred to as group 1, while phosphate and ammonium ions were groups 2 and 3, respectively. Our hypothesis was that no relation exists between the experimental conditions and the removal rate values for each group. This hypothesis is valid for low p values up to 5%. For higher p values greater than 95%, the alternative hypothesis, that at least one group differs from the overall mean, prevails. A one-way ANOVA test for comparison of at least three different groups was used to calculate the probability of finding at least one group higher than the mean. p-values were 98% for pH, 65% for temperature, 68% for ion concentration, and 83% for SL concentration. This shows that pH is the most likely to influence the process and is the principal parameter controlling ion removal rates.
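For readers who wish to reproduce this kind of comparison, a one-way ANOVA across the three ion groups can be run as sketched below. The removal-rate arrays are hypothetical placeholders (loosely inspired by values quoted in the text), not the study data.

```python
from scipy import stats

# Hypothetical removal rates (%) for the three ion groups across conditions
nitrate   = [96.9, 90.2, 85.4, 70.1]
phosphate = [94.9, 88.7, 80.3, 72.5]
ammonium  = [94.5, 88.8, 75.6, 58.7]

f_stat, p_value = stats.f_oneway(nitrate, phosphate, ammonium)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```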
Effect of Temperature on Permeate Flux
The effect of temperature on the process was studied in this experiment in the range of 22.0 °C to 45.0 °C. Other parameters were constant. The permeate flow increases as the temperature rises, owing to the thermal expansion of the membrane material and the lower viscosity of the solution. Due to the increased flow, however, more concentration polarization was observed [1,[35][36][37][38]. Because the surfactant CMC varies with temperature, the temperature is the most significant parameter for MEUF. The viscosity of the synthetic solution containing the sophorolipid decreases as the temperature rises, causing the flux to rise, as seen in Figure 3A [38]. The range of 22.0 °C to 45.0 °C was chosen based on the flux rate and the power required to raise the temperature. Lowering the temperature below 22.0 °C would cause a decrease in the flux, while going above 45.0 °C would require a considerable power supply to heat the solution, causing extra operational costs [1,38,43].
Effect of Transmembrane Pressure in Permeate Flux
As shown in Figure 3B, increasing the transmembrane pressure (TMP) positively influences the permeate flow, implying that as the TMP increases, the driving force starts to rise, resulting in higher flux. Furthermore, a linear relation between TMP and flux shows negligible concentration polarization [6,28]. The lowest flux was 30 L/h·m² at TMP = 50 kPa, while the highest flux was 226 L/h·m² at TMP = 200 kPa. Low transmembrane pressure reduces operational expenses [42,44]. A second-order regression was determined to be the best fit, as its R² value was greater than that of the linear trendline. The range between 50 and 150 kPa was chosen for the tests in which MEUF was utilized with a synthetic or biosurfactant for the removal of metal and inorganic ions [2,[6][7][8]. This study was first conducted in this range and the permeate flux achieved was not satisfactory. As a result, the maximum limit of the range was increased to 200 kPa [1]. The low flux achieved at 150 kPa can be explained by the MWCO used for this experiment, because with an increase in MWCO the flux significantly reduces for the MEUF process [1,2]. For the removal of ions such as nutrients, which are smaller than metal ions, an ultrafiltration cartridge of higher MWCO is necessary, which causes the flux reduction. This can be interpreted as the reason why the necessary flux was not achieved at 150 kPa. The MWCO used for this investigation was 10 kDa, while the MWCO used in the cited experiments ranged from 3 kDa to 5 kDa. The polarity of the two ions with differing charges, as well as the concentration of those ions, might play a part in the experiment's permeate flux [2,6,7].
On the other hand, lower transmembrane pressure reduces operational costs. The range of 50-200 kPa was selected because below this level almost no removal is observed, while going above 200 kPa would significantly increase the operational power requirement, resulting in higher cost.
Discussion
The primary purpose of this study was to evaluate the effectiveness of sophorolipids (SL) in removing ammonium, phosphate, and nitrate from water. As a biosurfactant for the elimination of ammonium, phosphate, and nitrate (NH 4 + , PO 4 3− , NO 3 − ), the SL was combined with a micellar-enhanced ultrafiltration (MEUF) system. By analyzing a variety of factors, the best conditions for each variable, including pH, initial concentration of anions, SL concentration, temperature, fouling, and transmembrane pressure, were identified. SL plays an essential role in the elimination of ions (NH 4 + , PO 4 3− , NO 3 − ) via the MEUF system at concentrations above its critical micellar concentration [21,28,38,[43][44][45][46]. The ions (NH 4 + , PO 4 3− , and NO 3 − ) were attached to the hydrophilic parts of the SL by ion exchange with counterions for the cations or electrostatic attraction for the anions with the negatively charged sophorolipid. As the aggregates were larger than the pore diameters of the hollow-fiber membrane filter, they were unable to pass through the membrane. However, pure water with small amounts of SL and ions was able to pass through.
Based on the experimental results, the following conclusions were drawn from this investigation: The fraction of anions and cations (NH 4 + , PO 4 3− , NO 3 − ) that were decreased was affected by variables such as pH, initial concentration, and SL concentration. A drop in pH and a rise in SL concentration greatly influenced the decrease in PO 4 3− and NO 3 − concentrations. Each sample had a pH of 6.0 and a concentration of 100.0 mg/L for NH 4 + , PO 4 3− , and NO 3 − . SL = 0.30% was selected as the best concentration for the reduction of anions. Temperature and transmembrane pressure served as critical operating parameters for the micellar-enhanced ultrafiltration system. When both were raised, the flow increased [2,[43][44][45]. However, transmembrane pressure had a greater influence on the flow than temperature. The concentration of feed had no effect on the concentration of SL in the permeate. As a biosurfactant in an ultrafiltration system with micellar enhancement, SL was very effective in removing nutrients from water [1,47,48].
The high nutrient concentration in the solution, along with the high MWCO that was applied, contributed to the poor permeate flux. The observed decrease in flux was primarily attributable to membrane fouling. This effect can be mitigated by increasing the solution's TMP and reducing the ion concentration in the solution as much as possible. At the lab scale, it was difficult for the MEUF setup to generate a greater TMP; however, this issue can be resolved when the process is applied at an industrial scale. When compared to other methods in use, MEUF features a smaller footprint and a more compact construction. Other approaches generate more sludge and require subsequent filtration to disinfect effectively [2,6,8].
In earlier research, MEUF was combined with synthetic surfactants (e.g., CTAB, CPC, ODA, DTAC) and biosurfactants (e.g., rhamnolipid) to effectively remove heavy metals (>99%) [2,6,23,24]. In another experiment, metal removal was performed in combination with sophorolipid and rhamnolipid [6], which likewise resulted in a high removal rate (>99%). Due to the ion size, competition between two differently charged particles of the ions, and the complexity of the process, there were insufficient investigations on removing nutrients from wastewater combining MEUF and biosurfactants. Only a few studies included the use of synthetic surfactants (CTAB, CPC, and ODA) for nutrient removal, and the removals ranged from 73 to 91% [23,24]. Due to the lack of research on sophorolipids with MEUF for nutrient ion removal and their effectiveness in removing heavy metals from groundwater, MEUF with sophorolipid was chosen for this research.
Materials
ACS-grade sodium nitrate (NaNO 3 ), dipotassium phosphate (K 2 HPO 4 ), and ammonium chloride (NH 4 Cl) were used and purchased from Sigma Aldrich, ON. The pH was corrected with 0.5 N nitric acid (HNO 3 ) and 0.5 N sodium hydroxide (NaOH) solutions purchased from Fisher Scientific. Sophorolipid (SL) biosurfactant was produced from Candida bombicola cultivated on a mixture of rapeseed oil and glucose [1][2][3][4][5]8,47] and purchased from Belgium's Ecover Company. It was composed of 30% acidic SL and 70% lactonic SL. Concentrations of 137.1 mg/L NaNO 3 , 183.4 mg/L K 2 HPO 4 , and 296.5 mg/L NH 4 Cl in double distilled water were prepared to obtain a stock solution of 100.0 mg/L NH 4 + , PO 4 3− , and NO 3 − . The decrease in NH 4 + , PO 4 3−, and NO 3 − by sophorolipid at various pHs and sophorolipid concentrations was studied in batch studies. To achieve equilibrium, the prepared samples were agitated at 60 rpm for 24 h, then centrifuged and analyzed.
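The salt concentrations listed above follow directly from the ion mass fraction of each salt. The short check below, using standard molar masses, reproduces the reported values for a 100.0 mg/L target of each ion.

```python
# Required salt concentration (mg/L) to deliver a target ion concentration (mg/L)
def salt_needed(target_ion_mg_l: float, m_salt: float, m_ion: float) -> float:
    return target_ion_mg_l * m_salt / m_ion

print(salt_needed(100.0, 84.99, 62.00))   # NaNO3  -> ~137.1 mg/L
print(salt_needed(100.0, 174.18, 94.97))  # K2HPO4 -> ~183.4 mg/L
print(salt_needed(100.0, 53.49, 18.04))   # NH4Cl  -> ~296.5 mg/L
```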
In this work, we attempted to maximize the percentage of nutrient removal under suitable test conditions by maintaining appropriate permeate flux and maximizing the factors influencing permeate flux. For this, experiments were carried out at a flow rate of 200 mL/s by keeping the peristaltic pump at 70 rpm, the transmembrane pressure at 120 kPa with a molecular weight cut-off (MWCO) of 10,000, and an initial ion concentration of 100.0 mg/L at 22 °C, with a sophorolipid concentration of 0.30%.
Experimental Setup
The QuixStand BenchTop System (Figure 4) (M series from GE Healthcare) was used to separate the nutrients (NH4+, PO43−, and NO3−), which were attached to the surface of the micelles, from the nutrient-sophorolipid solution. The system included a feed reservoir, peristaltic recirculation pump, inlet pressure gauge, hollow-fiber cartridge (Xampler cartridge), retentate outlet, outlet pressure gauge, sampling valve, and back pressure valve. The peristaltic pump included in the ultrafiltration system to pump the fluid was purchased from Watson-Marlow Company (313 S).
Xampler™ Cartridge
The hollow-fiber cartridge used in the QuixStand BenchTop (ultrafiltration) system was purchased from GE Healthcare. The cartridge consists of a bundle of parallel polysulfone fibers inside a plastic housing. The molecular weight cut-off (MWCO) is essential in classifying ultrafiltration membranes; the MWCO used in the experiments was 10,000.
Ultrafiltration Tests
The QuixStand BenchTop System (M series from GE Healthcare, Buckinghamshire, UK) was utilized for the separation of ions (NH4+, PO43−, and NO3−) bound to the sophorolipid micelles. The system comprised a feed reservoir, a peristaltic recirculation pump, an inlet pressure gauge, a hollow-fiber cartridge (Xampler cartridge), a retentate outlet, an outlet pressure gauge, a sample valve, and a back pressure valve. The Xampler cartridge was acquired from GE Healthcare. The cartridge is composed of parallel polysulfone fibers housed within a plastic container. Classification of ultrafiltration membranes is dependent on the MWCO (molecular weight cut-off); in the trials, a MWCO of 10,000 was utilized based on previous studies [1]. These tests were carried out in batches. The feed solution had a volume of 400 mL at the beginning, and the retentate stream was continually recycled. The water flux was monitored before and after the experiment at the optimal transmembrane pressure to check for membrane fouling; the membrane was cleaned when the water flux fell below 80-90% of the flux of a new membrane. The sodium nitrate, dipotassium hydrogen phosphate, and ammonium chloride salts were dissolved in distilled water to produce a stock solution of NH4+, PO43−, and NO3−, and the required concentrations were produced by diluting the stock solution with the same water. Distilled water was used to dilute Ecover SL (41%) (SL18) to produce various molar solutions. The feed solution in the reservoir, which contained the anions, cations, and SL, was fed through the ultrafiltration membrane by a peristaltic pump. The retentate solution was returned to the feed reservoir after exiting the cartridge. Ion chromatography was used to measure the concentrations of NH4+, PO43−, and NO3− in the permeate, retentate, and feed samples. All tests were conducted at a temperature of 22.0 °C and a pH of 6.0 unless otherwise specified. After each experiment, the flow loop was cleansed by running distilled water through the apparatus. Each test was performed three times, and the average was used to determine the outcome.
Analytical Methods
Statistical method: For each parameter, the test was run twice in duplicate, and the values were then averaged to obtain the numbers used in the table and graphs. Using ANOVA, the standard deviation (SD) values for the three sets of data were computed, and the error bars on the graph were generated using 2SD. In the first trial, the ratio of pollutant ion to sophorolipid was 13:1 (wt/wt), and the achieved removal rate was less than 85%. In the second set of experiments, the ratio of ions to sophorolipids was 1:1.4 (wt:wt). Based on the performance of the initial experimental data set, the values of the studied parameters (pH, temperature, TMP, ions, and SL concentrations) were chosen. The effect of altering a single parameter on the removal rate and permeate flux was investigated while the other parameters remained constant.
pH: The pH was measured using a Fisher Scientific Company AR25 Dual Channel pH/Ion Meter. Considering that pH plays such a significant role in reducing NH4+, PO43−, and NO3− ions, the effect of varying pH levels was investigated. Because the pH of the solution after adding SL is 7.82, the solutions were tested at pH 6.0, 7.0, 8.0, 9.0, and 10.0. Each test was conducted in triplicate, and the total sample amount was 50 mL. Temperature, anion and cation concentrations, and SL concentration were fixed at 100.0 mg/L for NH4+, PO43−, and NO3−, respectively, and 0.30% of SL. The pH was adjusted with 0.5 N NaOH and 0.5 N HNO3, and the initial and final contents of NH4+, PO43−, and NO3− were determined using ion chromatography. Equation (1) [1,2,6] was used to calculate the percentage of anion and cation reduction.
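The exact form of Equation (1) is not reproduced above. The standard removal-percentage expression used in comparable MEUF studies, which we assume is the intended form, is:

```latex
\text{Removal (\%)} = \left(1 - \frac{C_{\text{permeate}}}{C_{\text{feed}}}\right) \times 100
```

where C_feed and C_permeate are the ion concentrations in the feed and permeate solutions, respectively.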
Room temperature, transmembrane pressure, and pH were kept constant, and the solution had a pH of 6.0 and included 100.0 mg/L of NH4+, PO43−, and NO3−, and 0.30% sophorolipid. Ion chromatography was used to determine the ion concentrations.
Transmembrane pressure (TMP): To observe the effect of TMP on the permeate flux, various TMP values (40, 50, 100, and 150 kPa) were chosen. This experiment was performed at 22 °C and pH 6.0. The feed solution contained 0.30% SL. The permeate pressure was measured by a traceable manometer/pressure/vacuum gauge, and the TMP was determined based on the following Equation (2) [1,2,6]: Transmembrane pressure = (P_inlet + P_outlet)/2 − P_permeate (2). The permeate flux was measured using Equation (3). The cartridge area was 140 cm², and the permeate flow was measured using the flowmeter for the permeate flow in the ultrafiltration system.
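The body of Equation (3) is likewise not reproduced above. The sketch below applies Equation (2) as written and assumes the conventional flux definition (permeate flow divided by membrane area) for Equation (3); the example pressure readings and permeate flow are hypothetical.

```python
def transmembrane_pressure(p_inlet_kpa, p_outlet_kpa, p_permeate_kpa):
    """Equation (2): TMP = (P_inlet + P_outlet)/2 - P_permeate."""
    return (p_inlet_kpa + p_outlet_kpa) / 2.0 - p_permeate_kpa

def permeate_flux(permeate_flow_l_per_h, area_m2=0.014):
    """Assumed form of Equation (3): flux (L/h/m2) = permeate flow / membrane area.

    The cartridge area of 140 cm2 (0.014 m2) is taken from the text.
    """
    return permeate_flow_l_per_h / area_m2

print(transmembrane_pressure(220, 180, 0))  # hypothetical readings -> 200 kPa
print(permeate_flux(3.2))                   # hypothetical flow -> ~229 L/h/m2
```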
Ion concentration: An ion chromatograph, the 930 Compact I.C. Flex, was used to determine the starting and final ion concentrations. The ion chromatograph was connected to a sample processor in this approach, which was controlled by high-performance P.C. software (Metrohm's MagIC Net). The software controlled and analyzed the instrument, as well as evaluated and managed the data collected in a database. The final percentage was determined by multiplying the concentrations obtained by the appropriate dilution factor. Ions in the water were identified using ion chromatography (I.C.). Each analysis required 10 mL of sample collection. For this investigation, samples of untreated wastewater and MEUF permeate and retentate were collected after 2, 5, 10, and 20 min and diluted by a factor of 10 with DI water.
CMC determination: The surface tension of sophorolipid at various concentrations was measured using a tensiometer according to the Du Nouy method (Fisher Scientific, Tensiomat model 21). The force required to lift a thin metal ring (platinum ring) from the solution's surface is measured with a tensiometer. To ensure the accuracy of the data, the tensiometer was first calibrated by measuring the surface tension of DI water. To measure the sophorolipid critical micelle concentration (CMC), the solution was diluted several times [1,2,6,10]. The surface tension of the solution was determined by submerging a platinum ring in the solution after each dilution step. The CMC of sophorolipid was determined using the Du Nouy method by plotting the surface tension versus biosurfactant concentration.
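The tangent-intersection procedure described above can be implemented by fitting straight lines to the surface-tension data below and above the presumed break point and solving for their crossing. The data points in the sketch below are hypothetical and only illustrate the calculation.

```python
import numpy as np

# Hypothetical surface tension (mN/m) vs. log10 of SL concentration (mg/L)
log_c_low,  st_low  = np.array([0.0, 0.5, 1.0]), np.array([66.0, 58.0, 50.0])
log_c_high, st_high = np.array([1.6, 2.0, 2.5]), np.array([44.5, 44.2, 44.0])

m1, b1 = np.polyfit(log_c_low, st_low, 1)    # descending branch below the CMC
m2, b2 = np.polyfit(log_c_high, st_high, 1)  # plateau above the CMC

log_cmc = (b2 - b1) / (m1 - m2)              # intersection of the two fitted lines
print(f"estimated CMC ~ {10 ** log_cmc:.0f} mg/L")
```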
Conclusions
The primary purpose of this research was to devise a method for removing phosphate, nitrate, and ammonium ions from contaminated water by using a biosurfactant with micellar-enhanced ultrafiltration technology. A sophorolipid (SL18) was used as the biosurfactant in this micellar-enhanced ultrafiltration (MEUF) study. The study's goals included assessing the feasibility of employing sophorolipid SL18 to remove phosphate, nitrate, and ammonium from contaminated water and determining the factors affecting efficiency. Furthermore, the parameters that influence permeate flux and removal efficiency were investigated, with the aim of determining the best conditions for increasing permeate flux and achieving maximum efficiency. Various factors, including temperature, pH, the concentration of biosurfactant, and pollutant ions, were examined during the experiment. At a temperature of 45.0 °C and pH 6.0, there was a 90-96% removal rate for nitrate and phosphate. A maximum ammonium removal of 95% was achieved. These results indicate the high potential of this technique for nutrient removal.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"year": 2023,
"sha1": "90496899d6e85d5a0d2853791253186b1256ed3e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/28/4/1559/pdf?version=1675688445",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a77db73ac5aba59b05c9bcdfc16329b235a9daba",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": []
} |
Reproducibility and Validity of the 6-Minute Walk Test Using the Gait Real-Time Analysis Interactive Lab in Patients with COPD and Healthy Elderly
Background The 6-minute walk test (6MWT) in a regular hallway is commonly used to assess functional exercise capacity in patients with chronic obstructive pulmonary disease (COPD). However, treadmill walking might provide additional advantages over overground walking, especially if virtual reality and self-paced treadmill walking are combined. Therefore, this study aimed to assess the reproducibility and validity of the 6MWT using the Gait Real-time Analysis Interactive Lab (GRAIL) in patients with COPD and healthy elderly. Methodology/Results Sixty-one patients with COPD and 48 healthy elderly performed two 6MWTs on the GRAIL. Patients performed two overground 6MWTs and healthy elderly performed one overground test. Differences between consecutive 6MWTs and the test conditions (GRAIL vs. overground) were analysed. Patients walked further in the second overground test (24.8 m, 95% CI 15.2–34.4 m, p<0.001) and in the second GRAIL test (26.8 m, 95% CI 13.9–39.6 m). Healthy elderly improved their second GRAIL test (49.6 m, 95% CI 37.0–62.3 m). The GRAIL 6MWT was reproducible (intra-class coefficients = 0.65–0.80). The best GRAIL 6-minute walk distance (6MWD) in patients was shorter than the best overground 6MWD (-27.3 ± 49.1 m, p<0.001). Healthy elderly walked further on the GRAIL than in the overground condition (23.6 ± 41.4 m, p<0.001). Validity of the GRAIL 6MWT was assessed and intra-class coefficient values ranging from 0.74–0.77 were found. Conclusion The GRAIL is a promising system to assess the 6MWD in patients with COPD and healthy elderly. The GRAIL 6MWD seems to be more comparable to the 6MWDs assessed overground than previous studies on treadmills have reported. Furthermore, good construct validity and reproducibility were established in assessing the 6MWD using the GRAIL in patients with COPD and healthy elderly.
Introduction
Chronic obstructive pulmonary disease (COPD) is a highly prevalent chronic disease affecting about 10% of adults above the age of 40 [1]. COPD affects respiratory function of patients and has systemic consequences as well, including peripheral muscle dysfunction and weakness, which contributes to exercise limitation and impaired quality of life [2,3]. Exercise intolerance is therefore an important clinical feature in patients with COPD. The 6-minute walk test (6MWT) is a method of obtaining the 6-minute walk distance (6MWD) and is used to evaluate functional exercise capacity. Furthermore, the 6MWT is used to assess response to treatment and predicts morbidity and mortality in patients with COPD [4].
According to the European Respiratory Society/American Thoracic Society (ERS/ATS) guidelines, a flat corridor of at least 30 meters is required to perform a 6MWT [4,5]. However, not all clinical facilities have such spaces. Therefore, treadmill walking tests offer advantages over overground walking tests, as limited space is needed, providing a safe environment without obstructions [5] and subjects do not have to turn, leading to an increase in walking distance [6].
Previous COPD studies of the treadmill-based 6MWT have used regular fixed-speed treadmills. Conversely, a self-paced treadmill, which uses a feedback-controlled function to adapt treadmill speed to its user, could allow the walking speed to be adjusted more naturally, resulting in a more natural gait pattern compared to fixed-speed treadmill walking [10]. In addition, the use of virtual reality during treadmill walking is becoming increasingly popular in the area of rehabilitation, since virtual reality provides an engaging environment and induces a real-life sensation [11]. The Gait Real-time Analysis Interactive Lab (GRAIL, Motekforce Link, Amsterdam, the Netherlands) system combines self-paced treadmill walking with virtual reality. Moreover, the GRAIL enables 3D motion capture to analyse gait patterns during walk tests. As the reproducibility of the self-paced treadmill-based 6MWT in patients with COPD is currently unknown, it is necessary to assess the reproducibility of the 6MWT on the GRAIL and to compare the GRAIL 6MWT with the overground 6MWT. The aims of the current study were therefore to examine the reproducibility and validity of the 6MWT on the GRAIL in patients with COPD and healthy elderly.
Study design and sample
A cross-sectional observational study was conducted in CIRO, a centre of expertise for chronic organ failure located in Horn, the Netherlands. Sixty-one patients with COPD (FEV 1 /FVC <0.7) were recruited at pre-rehabilitation assessment between February 2014 and June 2015 [12]. Patients with walking aids, chronic oxygen use, orthopaedic ailments and/or neuromuscular co-morbidities affecting their walking patterns were excluded, as well as patients with a history of lung cancer, asthma, sarcoidosis, tuberculosis, and/or lung surgery. Forty-eight healthy elderly, aged 40-85 years, were recruited between July 2014 and October 2015. Healthy elderly were ineligible if respiratory or cardiac diseases, neuromuscular and/or orthopaedic ailments were present. The study complied with the Declaration of Helsinki and was approved by the Medical research Ethics Committees United (MEC-U) in the Netherlands (NL46880.060.13). Written consent was obtained from all participants.
Assessment of 6MWD
The GRAIL (Motekforce Link, Amsterdam, the Netherlands) was used to assess the self-paced treadmill 6MWDs. The GRAIL comprises a 3D motion analysis system with a dual-belt, instrumented treadmill and a 180-degree virtual reality projection screen (Fig 1). Four retroreflective surface markers were positioned on the anterior superior iliac spine and posterior superior iliac spine of the participant. Marker positions were detected using a ten-camera VICON motion analysis system (100 Hz, Oxford Metrics Ltd., Oxford, UK) and automatically labelled in D-flow (Motekforce Link, Amsterdam, the Netherlands) in order to control treadmill speed via self-paced treadmill walking. The virtual hallway environment was synchronised with the treadmill speed. Participants were not allowed to hold onto the handrails and wore a safety harness during each GRAIL 6MWT. All participants performed one familiarisation session on the GRAIL (15-20 minutes) prior to the first GRAIL 6MWT. This session comprised an explanation of the system and of the use of the self-paced function of the treadmill. A four-minute familiarisation walk on the treadmill was also conducted with each participant, so that participants could become accustomed to the virtual hallway environment and self-paced treadmill walking. After the familiarisation session, patients performed two GRAIL 6MWTs in two days during the pre-rehabilitation assessment and healthy adults performed the GRAIL 6MWTs in one day (Fig 2). One GRAIL session took 45 minutes. In addition, all participants performed an overground 6MWT in a 125-meter circular hallway, which took 15-20 minutes. Patients performed two overground 6MWTs during the pre-rehabilitation assessment. Healthy elderly performed one overground 6MWT after the GRAIL 6MWTs, with a resting period of at least 60 minutes. The overground 6MWD in healthy elderly was considered the best overground walking distance. All 6MWTs were conducted according to the ERS/ATS guidelines [4]. The 6MWD and average walking speed were assessed. Walking speed was continuously recorded in D-flow and averaged over 6 minutes. Borg scores for both dyspnoea and fatigue were recorded before and after the 6MWT, as well as the heart rate and transcutaneous oxygen saturation using a pulse oximeter (Nonin, Care Fusion, San Diego, USA). During the GRAIL 6MWT, post-test heart rate and transcutaneous oxygen saturation were recorded after the subjects stepped down from the treadmill.
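The exact control law the GRAIL uses for self-paced walking is not described here, so the following is only an illustrative sketch of the general idea: the belt speed is nudged so that the pelvis (tracked via the markers) stays near the centre of the treadmill. All gains, limits, and update rates are hypothetical and not the actual GRAIL/D-flow algorithm.

```python
def update_belt_speed(current_speed_m_s, pelvis_offset_m, dt_s,
                      gain_per_s=0.8, max_speed_m_s=2.5):
    """Illustrative proportional self-paced controller (not the GRAIL algorithm).

    pelvis_offset_m > 0 means the walker has drifted forward of the treadmill
    centre, so the belt speeds up; drifting backward slows it down.
    """
    new_speed = current_speed_m_s + gain_per_s * pelvis_offset_m * dt_s
    return min(max(new_speed, 0.0), max_speed_m_s)

# Example: walker 0.10 m ahead of centre at 1.2 m/s, 100 Hz update rate
print(update_belt_speed(1.2, 0.10, dt_s=0.01))  # -> 1.2008 m/s
```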
Sample size calculation
Sample size calculation was based on the results of Stevens et al. [8]. Patients with lung diseases (76% COPD) achieved on average 374 ± 78 meters in the overground 6MWT and 323 ± 119 meters in a regular treadmill 6MWT. Using an a posteriori sample size calculation with a power of 0.80, we calculated a sample size of 36 patients. We hypothesized that the difference in 6MWD would be smaller between overground and GRAIL walking in patients with COPD. We therefore included the larger numbers of subjects in both groups that were available for this manuscript.
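The text does not state which test design or standard deviation of differences underlay the a-posteriori calculation, so the sketch below is only illustrative: it treats the overground-versus-treadmill comparison from Stevens et al. as a paired t-test with an assumed SD of the paired differences, and will not necessarily reproduce the reported n of 36.

```python
from statsmodels.stats.power import TTestPower

mean_diff = 374 - 323          # m, difference in means from Stevens et al. as cited above
sd_diff_assumed = 100.0        # m, assumed SD of paired differences (hypothetical)
effect_size = mean_diff / sd_diff_assumed

n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                             power=0.80, alternative='two-sided')
print(f"required sample size ~ {n:.0f} patients")
```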
Statistical analyses
The assumption of normally distributed data was checked with the Shapiro-Wilk test. If data were not normally distributed, non-parametric tests were used. Differences between overground 6MWTs in patients, differences between GRAIL 6MWTs in each group and differences between the best GRAIL 6MWT and best overground 6MWT in both groups were identified by paired sample t-tests or two related samples tests. Differences between groups were identified by independent t-tests or two independent samples tests. For consistency with previous studies, mean values of non-normally distributed variables are reported. Predicted values of the 6MWDs for patients and healthy elderly were calculated using the formula of Troosters et al. [13]. The Bland-Altman method was used to assess agreement between the two test conditions. The intra-class correlation coefficient (ICC) values between repetitive GRAIL 6MWTs and between the test conditions were calculated. All analyses were performed using the statistical package SPSS (version 22, IBM SPSS Statistics). Statistical significance was defined as a p-value <0.05.
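As a worked illustration of the Bland-Altman agreement analysis mentioned above, the sketch below computes the mean difference (bias) and the 95% limits of agreement for paired walk distances; the distances shown are hypothetical, not study data.

```python
import numpy as np

# Hypothetical paired 6MWDs (m): best overground vs. best GRAIL per subject
overground = np.array([480.0, 510.0, 455.0, 530.0, 470.0])
grail      = np.array([455.0, 500.0, 430.0, 490.0, 450.0])

diff = grail - overground
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement

print(f"bias = {bias:.1f} m, limits of agreement = "
      f"[{bias - loa:.1f}, {bias + loa:.1f}] m")
```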
Results
In total 61 patients with COPD and 48 healthy elderly volunteered to participate. Patients had moderate to very severe COPD. Patients and healthy elderly subjects were comparable in age, height, weight and body mass index (BMI). The FEV 1 /FVC and FEV 1 % predicted differed between patients and healthy elderly (Table 1 and S1 Dataset).
GRAIL versus overground 6MWT
On average, the best GRAIL-based 6MWD in patients was significantly shorter than the best overground 6MWD (-27.3 ± 49.0 m, p<0.001). Conversely, the GRAIL 6MWD in healthy elderly was significantly greater than the overground 6MWD (23.6 ± 41.4 m, p<0.001; Table 3). The Bland-Altman plot (Fig 3) confirms that the majority of patients walked further in the overground condition.
Discussion
The present study provides the first evaluation of the reproducibility and validity of the 6MWT assessed by the GRAIL in patients with COPD. It extends previous work on treadmill based 6MWTs by assessing the 6MWD using virtual reality and self-paced treadmill walking. On average, patients increased their 6MWD between the first and second walk test equally in the overground and GRAIL condition. Furthermore, the 6MWT on the GRAIL showed good reproducibility with ICC values of 0.65 for healthy elderly and 0.80 for patients. The best 6MWD in patients was obtained in overground walking, while healthy elderly covered greater distances using the GRAIL. Moreover, the 6MWT on the GRAIL showed to have good construct validity with ICC values of 0.74 and 0.77. Therefore, these results indicate that the 6MWD could be reliably and validly assessed by the GRAIL in patients with COPD and healthy elderly. The 6MWD between the first and second trial improved equally in the overground and GRAIL 6MWT in patients with COPD. These results are similar to existing literature in assessing the 6MWD within this patient group [15,16]. Larger increases in the second GRAIL 6MWT were found in healthy elderly compared to patients, despite all participants undergoing one familiarisation session prior to the first GRAIL 6MWT. A possible explanation is that treadmill walking requires different energy expenditure in each of the subject groups [7]. Another possibility could be that self-paced treadmill walking required more effort of the patient than healthy elderly, due to muscle weakness, balance deficits and/or anxiety [17][18][19].
In addition, adaptability towards learning new tasks might be affected in patients with COPD, as declines in a number of cognitive functions have been observed previously [20,21]. Patients achieved on average a lower 6MWD (-27.5 m) on the GRAIL compared to overground walking. This difference in 6MWD between the test conditions is smaller than previous studies using regular treadmills have established [7,8]. Our findings are supported by smaller increases in the Borg dyspnoea and fatigue scores, a smaller decrease in oxygen saturation and a smaller increase in heart rate in the GRAIL condition. Healthy elderly, however, achieved greater 6MWDs while walking on the GRAIL, which is in contrast with the findings of Elazzazi et al. [9]. Healthy elderly did not differ in their degree of dyspnoea or fatigue between their best overground and best GRAIL test. Therefore, we can assume that healthy elderly experienced equal exertion in performing the 6MWT in each condition. However, this was not seen in the heart rate, as heart rates were higher after the overground 6MWT than after the GRAIL 6MWT. Our study showed smaller differences in 6MWD between the test conditions, which might be due to the use of self-paced treadmill walking. Self-paced treadmill walking offers a more natural adjustment of walking speed, which could lead to a more natural gait pattern compared to fixed-speed treadmill walking [10]. In addition, the overground track in this study required multiple turnarounds compared to the GRAIL condition; however, the track (30 meters) was longer than those used in most clinical settings. The virtual reality environment could have created a more realistic environment by providing optic flow. By combining self-paced treadmill walking and virtual reality, a greater 6MWD might have been achieved compared to regular treadmill walking [11,22]. Despite the fact that the familiarisation session was only performed prior to the first GRAIL 6MWT, which could have led to less distance covered in the second GRAIL 6MWT, our results indicate that 75% of patients improved their walk distance during the second GRAIL test. This is comparable to the 80% of patients who improved during their second overground test. Therefore, we consider this effect to be minimal. Concerning the duration of a GRAIL 6MWT, one GRAIL session takes more time to perform than the overground 6MWT. In addition, the GRAIL is less accessible to all centres compared to regular treadmills. We do, however, not argue that the GRAIL should be implemented everywhere to assess the 6MWD only. The GRAIL is a unique method to conduct analysis of gait impairments in patients with COPD, as these patients have reported walking as one of the most problematic activities in daily life [23]. Future studies should therefore focus on gait assessment in patients with COPD. Furthermore, predicted distance values and the minimal clinically important difference (MCID) of the 6MWT for patients with COPD are available for the overground condition. As these have not been determined in the GRAIL condition, the next steps should be to derive new reference values for the GRAIL condition in healthy elderly subjects and to determine the MCID, if the GRAIL is to be used for further assessment of patients with COPD.
A first limitation of this study is that healthy elderly performed only one overground 6MWT, which might have led to a shorter best overground 6MWD. However, a previous study found a minimal difference of 5 meters between two overground 6MWTs in healthy elderly [24]. Therefore, healthy elderly were likely to achieve their best 6MWD during their first overground 6MWT. The second limitation is that there was a longer time gap between the GRAIL 6MWT and the post-test assessment of oxygen saturation levels and heart rates than for the overground 6MWT. It is possible that conclusions based on the differences in heart rates and oxygen saturation might not explain the differences between the test conditions. A third limitation is that GOLD stage 4 patients are underrepresented in this study. Therefore, our findings should be interpreted carefully for stage 4 patients. Moreover, complex patients with COPD were excluded, as this is the first study to assess the 6MWT using the GRAIL; consequently, patients had to be able to perform the GRAIL tests without using the handrails and to be able to control the self-paced treadmill. A fourth limitation is that this study is monocentric, as access to the GRAIL in other centres is currently limited. Moreover, CIRO is a specialized pulmonary rehabilitation clinic, which may limit the external validity of the current findings. A fifth limitation is that the learning effect of the 6MWT on the GRAIL in repeated tests of more than two trials has not been established; consequently, it is not known whether the learning effect attenuates in a third GRAIL 6MWT. Another limitation is that balance during treadmill walking might be affected. However, the GRAIL provides the opportunity to assess quality of gait (e.g. balance) continuously in a virtual reality environment and during self-paced walking, which is not possible using a regular treadmill or in an overground condition. As a result, new insights into determinants of walking balance in patients with COPD could be achieved by using this system.
In conclusion, the GRAIL is a promising system to assess the 6MWD in patients with COPD and healthy elderly. The 6MWD assessed by the GRAIL appears to be more comparable to the 6MWD assessed overground than previous treadmill studies have reported. Furthermore, good construct validity and reproducibility were established in assessing the 6MWD using the GRAIL in patients with COPD and healthy elderly.
Transcriptomic responses of corpuscle of Stannius gland of Japanese eels (Anguilla japonica) to Changes in Water Salinity
Physiological studies of a unique endocrine gland in fish, named corpuscles of Stannius (CS), described a Ca2+-regulatory function for this gland mediated by stanniocalcin-1, a hypocalcemic polypeptide hormone. However, to date, the endocrine functions of the glands have not been completely elucidated. We hypothesized that other unidentified active principles in the glands are involved in the regulation of plasma ion (Na+, Ca2+) and/or blood pressure. In this study, transcriptome sequencing of CS glands was performed using Japanese eels (Anguilla japonica) adapted to freshwater (FW) or seawater (SW) to reveal the presence and differential expression of genes encoding proteins related to the ion-osmoregulatory and pressor functions. We acquired a total of 14.1 Mb and 12.1 Mb quality-trimmed reads from the CS glands collected from FW and SW adapted eels, respectively. The de novo assembly resulted in 9254 annotated genes. Among them, 475 genes were differentially expressed with 357 up- and 118 down-regulated in the SW group. Gene ontology analysis further demonstrated the presence of natriuresis and pressor related genes. In summary, ours is the first study using high-throughput sequencing to identify gene targets that could explain the physiological importance of the CS glands.
In 1839, the German zoologist H. Stannius identified an endocrine gland on the ventral surface of fish kidneys 1 . Stannius thought that the glands were equivalent to the mammalian adrenal glands due to their anatomical position. The name of the glands, corpuscle of Stannius (CS), was coined in 1847 2 and the assumption that this gland was an adrenal gland was maintained until the 20th century. In 1942, the ontogeny of the CS glands was reported. It was discovered that the glands were histologically distinct from the piscine interrenal and chromaffin tissues 3 . No steroidogenic activity was detected in these glands 4 . Electron microscopic studies revealed that the CS cells possessed cytoplasmic features of polypeptide hormone-secreting cells 5 . Accordingly, CS was confirmed to be a unique endocrine gland found only in fish.
Surgical removal of the CS glands (stannioectomized, STX) from fishes causes plasma hypercalcemia 6 as well as a reduction of plasma Na+ and Cl- levels 7,8 , and a decrease in dorsal aortic blood pressure 9 . Intriguingly, STX fishes that were maintained in low-calcium water did not suffer from the rise of serum Ca2+ levels. These observations indicated that the ambient water was the major source of the Ca2+ responsible for hypercalcemia in the STX fishes. Consequently, fish gills were suggested to be the main tissue for Ca2+ absorption from the ambient water. This hypothesis was later supported by a study conducted by Fenwick and So 10 who demonstrated that the rate of gill calcium transport (GCAT) was significantly increased in STX fishes. Conversely, the increase of GCAT can be reduced by injection of CS extracts 10-12 . This finding indicated that the active principle(s) from the CS extracts contained ''inhibitory factor(s)'' that can reduce the rate of GCAT 10,13 . One of the major active principles in CS gland extracts, a hypocalcemic hormone named stanniocalcin-1 (STC-1), was identified in the 1980s 14-17 . However, the characteristics and significance of other substance(s) with ion-osmoregulatory and pressor functions remain unknown 18-21 .
Previous physiological studies had demonstrated that CS glands are involved in the regulation of blood pressure and natriuresis 22 ; however, there is limited information regarding this regulatory role of CS glands to date. Thus, we hypothesized that there might be some unidentified active principles in the CS glands associated with these reported physiological functions. In this study, a high-throughput transcriptome sequencing (RNA-seq) approach was adopted to investigate the transcriptome profiles of the CS glands from fish adapted to freshwater or seawater environments. The differential expression patterns of the CS glands were compared and the genes involved in Ca2+ metabolism, ion-osmoregulation, and blood pressure were identified. This study provides an important resource for future investigations on CS gland functions.
Methods
Maintenance of Japanese eels (Anguilla japonica). The methods were carried out in Hong Kong Baptist University in accordance with the approved guidelines. All experimental procedures were approved by the Hong Kong Baptist University, Hong Kong Special Administrative Region. Japanese eels (A. japonica) weighing between 500-600 g were reared in fiberglass tanks supplied with charcoal-filtered, aerated tap water (freshwater, FW) at 18-20°C under a 12 h:12 h L:D photoperiod for at least 2 weeks of acclimation before the experiments. The fish were then either maintained in FW (n = 5) or transferred to seawater (SW) (n = 5) for another two weeks. After this, the fish were anesthetized with 0.1% MS-222 (Sigma) for the collection of the CS glands.
RNA Isolation, cDNA Library Construction, and Illumina Deep Sequencing. Total RNA was isolated from the CS glands of fish using TRIzol reagent (Life Technologies, CA, USA). The RNA concentration was measured using the Qubit RNA Assay Kit in a Qubit 2.0 Fluorometer (Life Technologies, CA, USA). RNA samples (300 ng) with a RNA Integrity Number (RIN) greater than 8, as determined by the Agilent 2100 Bioanalyzer system (Agilent Technologies, CA, USA), were used for library construction. Four independent libraries were prepared for RNA sequencing. Briefly, the cDNA libraries were prepared using the TruSeq Stranded mRNA LT Sample Prep Kit (Illumina, San Diego, USA) following the manufacturer's protocol. Index codes were ligated to identify individual samples. mRNA was purified from the total RNA using poly-T oligo-attached magnetic beads (Illumina, San Diego, USA) and then fragmented using divalent cations under elevated temperature in the Illumina fragmentation buffer. First and second strand cDNAs were synthesized using random oligonucleotides and SuperScript II, followed by DNA polymerase I and RNase H. Overhangs were blunted by using exonuclease/polymerase and, after 3' end adenylation, Illumina PE adapter oligonucleotides were ligated. DNA fragments that ligated with adaptor molecules on both ends were enriched using the Illumina PCR Primer Cocktail in a 15-cycle PCR reaction. Products were purified and quantified using the AMPure XP and the Agilent Bioanalyzer 2100 systems, respectively. Before sequencing, the libraries were normalized and pooled together in a single lane on an Illumina MiSeq platform. Paired-end reads, each of 150-bp read length, were sequenced. Adapters and reads containing poly-N were first trimmed and the sequence reads were dynamically trimmed according to BWA's -q algorithm 23 . Briefly, a running-sum algorithm was executed in which a cumulative area-plot is built from the 3'-end to the 5'-end of the sequence reads, where positions with a base-calling Phred quality lower than 30 cause an increase of the area and vice versa. Such a plot was built for each read individually and each read was trimmed from the 3'-end to the position where the area was greatest. Read-pairs were then synchronized such that all read-pairs with sequence on both sides longer than 35 bp after quality trimming were retained. Any singleton read resulting from read trimming was removed 23 . All the downstream analyses were based on quality-trimmed reads.
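The running-sum trimming described above can be written compactly; the sketch below is a simplified re-implementation for illustration, not the exact code of BWA or of the pipeline used here.

def trim_3prime(seq, quals, q_cut=30):
    # Scan from the 3' end; positions with quality below q_cut increase the
    # cumulative area, and the read is cut where that area is greatest.
    area, best_area, cut = 0, 0, len(seq)
    for i in range(len(seq) - 1, -1, -1):
        area += q_cut - quals[i]
        if area > best_area:
            best_area, cut = area, i
    return seq[:cut], quals[:cut]

def keep_pair(mate1_seq, mate2_seq, min_len=35):
    # Pair synchronization: retain the pair only if both mates stay longer than 35 bp
    return len(mate1_seq) > min_len and len(mate2_seq) > min_len

read, quals = "ACGTACGTAC", [38, 37, 36, 35, 34, 20, 12, 10, 8, 5]
print(trim_3prime(read, quals))   # the low-quality 3' tail is removed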
De novo Transcriptome assembly. Forward and reverse reads from all the libraries/samples were pooled and subjected to transcriptome de novo assembly using Trinity (version r2013-02-25) with ''min_kmer_cov'' set to 2 and all other parameters set to default 24 . Trinity uses a fixed k-mer size to generate an assembly and is efficient in recovering full-length transcripts as well as spliced isoforms.
Annotation of assembled transcripts. Coding sequences (open reading frames, ORFs) were identified by Transdecoder 25 using the following criteria: (1) the longest ORF was identified within each transcript; (2) from the longest ORFs extracted, a subset of the longest ones was identified and randomized to provide a sequence composition corresponding to non-coding sequences before being used to parameterize a Markov model based on hexamers; and (3) all the longest ORFs were scored according to the Markov model to identify the highest-scoring reading frame out of the six possible reading frames. These ORFs were then translated to protein sequences and subjected to a BLASTp search against UniProtKB/Swiss-Prot with a cut-off e-value of 1.0 × 10-6. Predicted cDNAs of A. anguilla were retrieved from the ZF-Genomics database (http://www.eelgenome.com/) 28 . A transcriptome assembly of A. anguilla as well as an Eeelbase-specific microarray targeting A. anguilla transcripts were retrieved from the Eeelbase database (http://compgen.bio.unipd.it/eeelbase/) 29,30 .
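For illustration, the first step above (finding the longest ORF in each transcript across the six reading frames) can be sketched as follows; this simplified version only considers ATG-to-stop ORFs and omits Transdecoder's Markov-model scoring and minimum-length criteria.

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def longest_orf(seq):
    # Longest ATG-to-stop open reading frame over the six reading frames (nucleotides)
    stops = {"TAA", "TAG", "TGA"}
    best = ""
    for strand in (seq, revcomp(seq)):
        for frame in range(3):
            start = None
            for i in range(frame, len(strand) - 2, 3):
                codon = strand[i:i + 3]
                if codon == "ATG" and start is None:
                    start = i
                elif codon in stops and start is not None:
                    if i + 3 - start > len(best):
                        best = strand[start:i + 3]
                    start = None
    return best

print(longest_orf("CCATGGCTGCTTGATT"))   # -> ATGGCTGCTTGA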
Transcripts from A. japonica and A. anguilla were considered orthologous if they were the symmetrical best hits in each reciprocal all-against-all BLASTn search (i.e. Reciprocal Best Hit) 31 . Briefly, orthologs to the A. anguilla sequences were identified first by comparing the assembled transcript to the database using a BLAST search. The highest-scoring hit was obtained and, then, a BLAST search was run against the database of the assembled transcripts. The hit in the A. anguilla sequences was considered an ortholog of the assembled transcript if and only if the second BLAST search returned the assembled transcript that was the highest scorer in the first BLAST search. The transcriptome assembly was aligned to the draft genomes of A. japonica and A. anguilla using GMAP (version 2014-08-04) with the parameters --no-chimeras and --cross-species 32 .
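The reciprocal-best-hit criterion is easy to express in code; the sketch below assumes tabular BLAST output (outfmt 6) and hypothetical file names, and ranks hits by bit score.

def best_hits(blast_tab):
    # Best hit per query from a tabular BLAST report (outfmt 6), ranked by bit score
    best = {}
    with open(blast_tab) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            query, subject, bitscore = fields[0], fields[1], float(fields[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(best_a_vs_b, best_b_vs_a):
    # Keep only pairs that are each other's best hit in both directions
    return {a: b for a, b in best_a_vs_b.items() if best_b_vs_a.get(b) == a}

# hypothetical file names for the two all-against-all searches
orthologs = reciprocal_best_hits(best_hits("ajaponica_vs_aanguilla.tab"),
                                 best_hits("aanguilla_vs_ajaponica.tab"))
print(len(orthologs))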
Differential expression and GO enrichment analysis. In our analysis, differential gene expression and TMM-normalized FPKM gene expression were calculated separately. This is because RSEM does not support gapped alignment, and the alignment accuracy of Bowtie used by RSEM is known to be lower than that of other aligners 33 , thereby hindering the use of the alignments produced by other aligners. Sequencing reads were mapped to the assembled transcripts using Novoalign (v3.00.05) with parameter -r ALL to report all multi-mapped reads (http://www.novocraft.com/). Alignment files were sorted using Samtools (http://samtools.sourceforge.net/) to generate a read-name sorted BAM file. Then, ''Samtools view -F 0x4'' was used to parse the mapped reads from the BAM file and the number of read-pairs mapping to each transcript in each sample was summarized to generate a count table (http://seqanswers.com/forums/showthread.php?t529745) 34 . Ambiguously mapped read-pairs with each end mapped to different transcripts were discarded. Read-count data were then subjected to differential expression analysis using the edgeR package 35 . Samples with identical treatments were considered to be biological replicates. Genes with B&H corrected p-value <0.05 and log2 (fold change) >1 were considered to show statistically significant differential expression. The RSEM pipeline was used to independently calculate TMM-normalized FPKM expression values 36 . Following this calculation, we cross-checked our edgeR results with those generated by the RSEM pipeline. Dysregulated genes were subjected to KEGG pathway analysis using DAVID Tools to decipher the molecular interaction networks that might be deregulated 37 .
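Downstream of edgeR, the significance filter quoted above (Benjamini-Hochberg corrected p < 0.05 and log2 fold change > 1) can be applied to an exported results table; the sketch below is a minimal illustration with assumed column names, filtering on the absolute log2 fold change so that both up- and down-regulated genes are counted.

import pandas as pd
from statsmodels.stats.multitest import multipletests

# hypothetical export of the edgeR results, with columns: gene, logFC, PValue
res = pd.read_csv("edgeR_results.csv")
res["FDR"] = multipletests(res["PValue"], method="fdr_bh")[1]   # Benjamini-Hochberg correction

sig = res[(res["FDR"] < 0.05) & (res["logFC"].abs() > 1)]
print(len(sig), "differentially expressed;",
      (sig["logFC"] > 0).sum(), "up and", (sig["logFC"] < 0).sum(), "down in SW")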
Quantitative real-time PCR (qRT-PCR). To validate the sequencing data, an independent cohort of FW or SW (n = 3) adapted fish was sampled. The differentially expressed genes were selected for qRT-PCR analysis. Briefly, total cellular RNA (0.5 µg) was reverse transcribed using the high capacity RNA-to-cDNA kit (Applied Biosystems, Foster City, CA, USA). qRT-PCR reactions were conducted using the Power SYBR Green PCR master mix with the StepOne real-time PCR system (Life Technologies, CA, USA). Verified gene-specific primers (Table S1) of A. japonica were used. The occurrence of primer-dimers and secondary products was inspected using melting curve analysis. Our data indicated that the amplification was specific for each individual set of primers, and control amplifications were done either without reverse transcriptase or without RNA. gapdh was used as a housekeeping gene and the relative expression ratio of target gene/gapdh was calculated according to the method described by Pfaffl 38 , where E = 10^(-1/slope) and CP is the crossing point at which fluorescence rises above the background level.
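For reference, the Pfaffl calculation mentioned above can be written out explicitly; the sketch below is an illustration with hypothetical CP values and standard-curve slopes, not data from this study.

def amplification_efficiency(slope):
    # E = 10**(-1/slope), from the slope of the standard curve
    return 10 ** (-1.0 / slope)

def pfaffl_ratio(e_target, dcp_target, e_ref, dcp_ref):
    # Pfaffl (2001): ratio = E_target**dCP_target / E_ref**dCP_ref,
    # with dCP = CP(control) - CP(sample) for each gene
    return (e_target ** dcp_target) / (e_ref ** dcp_ref)

e = amplification_efficiency(-3.32)       # ~2.0 for a near-ideal assay
print(pfaffl_ratio(e, 2.0, e, 0.0))       # target CP 2 cycles lower than control: ~4-fold up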
Availability of supporting data. The sequencing data from this study have been submitted to the NCBI Sequence Read Archive (SRA) (http://www.ncbi.nlm.nih.gov/ sra) under the accession number SRP049701.
Results
Workflow of the study. In this study, a pair of CS glands from each fish was used to prepare one pooled RNA sample. Two biological replicates each of FW and SW adapted fish were used. Four cDNA libraries were constructed and subjected to Illumina transcriptome sequencing. The overall workflow of the study is shown in Fig. 1.
Illumina RNA-Seq and de novo transcriptome assembly. We obtained 7.22 Mb and 6.92 Mb quality-trimmed Illumina reads from the FW CS gland samples (FCS1 and FCS2, respectively) and 6.61 Mb and 5.47 Mb quality-trimmed Illumina reads from the SW CS gland samples (SCS1 and SCS2, respectively). A total of 2.05 Gb and 1.73 Gb of clean bases were obtained from the FW and SW samples, respectively. The de novo transcriptome was formed by 78713 contigs with an average contig length of 791 bp (the shortest sequence was 201 bp and the longest one was 10424 bp) (Fig. 2).
Gene annotation. The assembled transcripts were subjected to 6-frame translations and the likely coding sequences were extracted. These likely coding sequences were randomized to provide a sequence composition corresponding to non-coding sequences. All the longest ORFs were scored according to the Markov model (log likelihood ratio based on coding/non-coding) in each of the six possible reading frames. If the putative ORF proper coding frame scored positive and was the highest among the other presumed wrong reading frames, then that ORF was reported. If a high-scoring ORF was eclipsed by a longer ORF in a different reading frame, it was excluded. Annotation analyses were performed by comparing the predicted ORF sequences against the UniProtKB/Swiss-Prot database using a BLASTp search with a cut-off e-value of 1.0 × 10-6. In this study, 9254 genes were matched to the UniProtKB/Swiss-Prot database (Table S2). Regarding the taxonomic distribution of the genes, according to the UniProtKB/TrEMBL database, 24.78% of the matched genes showed similarities with Lepisosteus oculatus, followed by Oncorhynchus mykiss (19.92%), Danio rerio (11.55%), Astyanax mexicanus (10.77%), Oreochromis niloticus (4.88%), Salmo salar (3.74%), Gasterosteus aculeatus (2.44%), Ictalurus punctatus (2.27%), Takifugu rubripes (2.10%), and others (15.95%) (Fig. 3). In fact, as of the date when the analysis was performed, only 1238 and 80 protein sequences of Anguilla species were deposited in the UniProtKB/TrEMBL and UniProtKB/Swiss-Prot databases, respectively. Therefore, it was not surprising to observe so few hits to Anguilla species, which was probably due to the under-representation of Anguilla species protein sequences in the UniProt database.
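As an aside, the assembly summary quoted above (number of contigs, average, shortest and longest contig) can be computed directly from the contig lengths; the sketch below is a generic illustration with toy values, and adds the N50, a commonly reported statistic that is not given in the text.

def assembly_stats(contig_lengths):
    # Contig count, mean/min/max length and N50 from a list of contig lengths (bp)
    lengths = sorted(contig_lengths, reverse=True)
    total = sum(lengths)
    running, n50 = 0, None
    for length in lengths:
        running += length
        if running >= total / 2:
            n50 = length
            break
    return {"contigs": len(lengths), "mean_bp": total / len(lengths),
            "min_bp": lengths[-1], "max_bp": lengths[0], "n50_bp": n50}

print(assembly_stats([201, 450, 791, 1200, 10424]))   # toy contig lengths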
Comparative analysis of A. japonica transcripts with various eel species. We first compared our unified transcriptome assembly to the existing transcriptome resources of eel species. Eel endocrinology has been a subject of interest for a long time and, thus, numerous eel hormone sequences are available. To investigate what kind of hormones were expressed in the CS glands, we compared our assembled transcriptome to the existing eel protein sequences with hormone activity and found calcitonin, stanniocalcin, activin, adrenomedullin, insulin-like growth factor, natriuretic peptide, relaxin, urotensin, and ventricular natriuretic peptide in the CS gland transcriptome (Table S3). We then sought to study the expression of annotated eel genes in our CS gland-specific transcriptome. In order to accomplish this goal, we performed both nucleotide and protein level searches in our assembled transcriptome. The nucleotide search suggested that 39.2% of known transcripts discovered in various eel species are present in the CS gland of A. japonica (Table S4). We found that 27.8% of the proteins annotated in eel species by the UniProt/TrEMBL non-reviewed database are present in our transcriptome assembly (Table S5). The EeelBase database provides transcriptome resources of A. anguilla generated from 640,040 reads sequenced by both 454 and Sanger technologies 29,30 . We compared our A. japonica transcriptome assembly to that of the Eeelbase database using a BLASTn search, with e-value <1E-5, identity ≥0.95, and number of aligned nucleotides ≥50%. It was found that 4085 A. anguilla transcripts annotated in the EeelBase database are present in our A. japonica assembly. The Eeelbase database has also developed an Eeelbase-specific microarray 29 that targets a subset (~33%) of transcripts among their assembled A. anguilla transcriptome. The Eeelbase-specific microarray targets A. anguilla transcripts that matched 2293 transcripts of our A. japonica assembly. Since the specificity of the microarray probe depends on the hybridization of the probe sequences to the cDNA to be probed, we required the alignment length threshold to be at least 90%, with an identity of at least 95%. Using these parameters among the 2293 transcripts, we estimated that only 25% (~582) of the A. japonica transcripts could be probed uniquely by the Eeelbase-specific microarray (Table S6).
Orthologous transcripts between A. japonica and A. anguilla were identified by comparing our assembled transcriptome to the predicted cDNA of the eel species. We decided against the use of only cDNAs from A. anguilla because of the low-coverage sequencing depth of the available A. anguilla transcriptome in the EeelBase database. Based on reciprocal BLAST searches, 19382 putative homologs between A. japonica and A. anguilla were identified (Table S7).
With the availability of the draft genomes of A. japonica and A. anguilla, we sought to determine the gene structure of our assembled A. japonica transcriptome by aligning the assembled transcripts to the draft genomes. Regarding the A. japonica genome, we found that the majority (89%; 69920/78713) of the transcripts could be aligned. In fact, 82% (64601/78713) of the transcripts were almost completely aligned (≥95% of the length of the assembled transcripts). Regarding the A. anguilla genome, we found that the majority (78%; 61737/78713) of the transcripts could also be aligned. In fact, 70% (55078/78713) of the transcripts were almost completely aligned (≥95% of the length of the assembled transcripts). Based on both the A. japonica and A. anguilla transcript-to-genome alignments, the majority (~90%) of the transcripts expressed in the CS glands of A. japonica have six or fewer exons per transcript (Table S8).
GO enrichment analysis and cluster classification. All the genes were analyzed according to GO functional enrichment analysis (Table 1). The top five pathways involved in molecular functions included: GTPase regulator activity, nucleoside-triphosphatase regulator activity, nucleotide binding, small GTPase regulator activity, and ATP binding. The top five biological processes included: establishment of protein localization, protein localization, protein transport, intracellular transport, and regulation of small GTPase-mediated signal transduction. The top five cellular components identified in this analysis were: intracellular organelle lumen, organelle lumen, membrane-enclosed lumen, nuclear lumen, and nucleolus.
Furthermore, the genes of the CS gland transcriptome were classified into three clusters according to their functional annotation (Table 2). Cluster I included the genes involved in the regulation of calcium metabolism, such as stanniocalcin-1 (STC-1), calcitonin, vitamin D(3) 25-hydroxylase, calcium-sensing receptor (CaSR), S100 calcium-binding protein A6 (S100A6), and stromal interaction molecule 1 (STIM1). In cluster II, atrial natriuretic peptide (ANP)-converting enzyme and endothelin-converting enzyme 1 (ECE-1) were listed. These enzymes are involved in the proteolytic cleavage of ANP and endothelin, respectively, to produce biologically active peptides that regulate blood pressure and natriuresis. Cluster III comprised transporters involved in ion-osmoregulation, such as aquaporins, chloride intracellular channel protein 5, kidney-specific Na-K-Cl symporter, and voltage-gated potassium channel subunit Kv11.1.
Gene expression and differential gene expression. As shown in Fig. 4, stanniocalcin is the most highly expressed gene in CS glands 39 . Other highly expressed genes are either constituents of ribosomes or responsible for ribosome biosynthesis. The 25th, 50th, and 75th quartiles of the average TMM-normalized FPKM gene expression were 1.09, 1.85, and 4.34, respectively (Table S2).
By comparing the transcriptome data of the CS glands from FW and SW conditions, a total of 475 genes were identified as differentially expressed after the transfer of fish from FW to SW (B&H corrected p-value <0.05 and log2 (fold change) >1). These included 357 up- and 118 down-regulated genes in the SW group compared to the FW group (Table S9). The differentially expressed genes were further analyzed using GO functional enrichment analysis (Table 3 and Table S10). The top five pathways involved in molecular functions included: diacylglycerol binding, calcium ion binding, phospholipid:diacylglycerol acyltransferase activity, cation binding, and ion binding. The top five biological processes included: cell adhesion, biological adhesion, cell part morphogenesis, cell projection morphogenesis, and homophilic cell adhesion. The top five cellular components identified in this analysis were: ubiquitin ligase complex, basolateral plasma membrane, basal plasma membrane, endoplasmic reticulum, and basal part of the cell. Ten genes involved in the functional clusters were selected and validated by qRT-PCR analysis. Primers and amplicon sizes are listed in Table S7. The results of the qRT-PCR analysis agreed with the Illumina sequencing data (Table 4).
Discussion
The corpuscle of Stannius is a unique endocrine gland located on the ventral surface of the kidneys of bony fishes. Although there is no comparable structure identified in humans, the mammalian ortholog of the CS-derived polypeptide hormone, stanniocalcin-1 (STC-1), was cloned and shown to be involved in many biological functions (i.e. ovarian physiology, inflammation, and carcinogenesis) 40 . These results demonstrated the importance of the CS-derived factor in mammals, although the glands themselves disappeared during evolution. In past studies, STC-1 was the only polypeptide identified as responsible for the role of CS glands in Ca2+ homeostasis. However, physiological experiments conducted in the past decades have also demonstrated ion-osmoregulatory and pressor functions of the glands, while the identities of other CS-derived active principles, surprisingly, have not been elucidated to date. In this study, we sequenced CS glands isolated from FW or SW adapted fish, assembled the transcriptome, and identified differentially expressed genes. Our primary goal was to identify genes that explain the reported physiological importance of the CS glands in the regulation of plasma ions (Na+, Ca2+) and/or blood pressure. Nevertheless, because of the availability of extensive transcriptome and genomic resources of a closely related species, A. anguilla, and the draft genome of A. japonica, we performed a comprehensive comparison between our assembled A. japonica transcriptome and these resources. We found that the following peptides with hormone activity were expressed: stanniocalcin, calcitonin, activin, adrenomedullin, insulin-like growth factor, natriuretic peptide, relaxin, urotensin, and ventricular natriuretic peptide. Based on the experimental transcriptome assembly available in the EeelBase database, we estimated that more than 4085 A. anguilla transcripts are found in our A. japonica transcriptome. However, the Eeelbase-specific microarray may not be suitable for analyzing transcriptome-wide expression in A. japonica, primarily because only hundreds of A. japonica transcripts could be specifically hybridized to the array's probes. Based on the transcriptome-wide predicted cDNAs available in the ZF-Genomics database, we identified 19382 putative orthologs between A. japonica and A. anguilla. We also provided a transcript-to-genome annotation of our A. japonica transcriptome.
Table 2 (continued): Cluster II, blood pressure (26 genes in total): endothelin-converting enzyme 1 (comp30404_c0); atrial natriuretic peptide-converting enzyme (comp2660_c0). Cluster III, ion-osmoregulation (101 genes in total): aquaporin-1 (comp21702_c0); aquaporin-3 (comp12261_c0); chloride intracellular channel protein 5 (comp21102_c0).
Among our annotated 9254 genes, 475 genes were differentially expressed in the CS glands of SW adapted eels compared to those of FW adapted eels. GO enrichment analysis suggested that 14 differentially expressed genes mediated calcium ion binding (GO:0005509). In fish, the gills and CS glands are the two major organs responsible for calcium homeostasis. A GO analysis of calcium-challenged fish gills showed that the gene category ''calcium ion binding'' (GO:0005509) was enriched 41 . This is the first report to identify differentially expressed genes involved in calcium ion binding in CS glands in response to changes in environmental salinity and calcium. In addition, our data showed that a number of deregulated genes were associated with cellular protein modification (39 genes) and phosphorylation processes (10 genes). It is known that post-translational modifications are important for protein activities, stability, localization, or degradation 42 . After translation, polypeptide chains undergo modifications to produce functionally mature products. These changes are important for endocrine glands; in the CS glands, for example, they may be involved in physiological signal detection and transduction via protein phosphorylation to stimulate the production of STC-1 in SW adapted fish.
In addition to the general annotation, we addressed our particular research question using GO analysis to highlight the genes related to three functional clusters: (1) Ca2+ metabolism, (2) blood pressure, and (3) ion-osmoregulation. The differentially expressed genes under these three functional clusters were validated using real-time PCR analysis. In fish, STC-1 is known to be a hypocalcemic hormone involved in the regulation of Ca2+ homeostasis 40 . However, the roles of other well-studied mammalian Ca2+-regulating hormones (i.e. parathyroid hormone (PTH), calcitonin (CT), and 1,25-dihydroxyvitamin D3) in calcium metabolism in fish are largely unknown. In mammals, CT is produced by parafollicular cells, while in fish its presence was reported in the ultimobranchial glands 43 . In this study, we identified for the first time the expression of CT in CS glands. A significantly higher CT expression level was reported 44 , suggesting a hypocalcemic function for CT. However, in another study, administration of CT caused hypercalcemia in brown trout 45 . A recent study in zebrafish suggested that CT has a hypocalcemic function, inhibiting ECaC expression 46 . In mammals, CT is one of the important hypocalcemic hormones, opposing the effects of PTH, exerting an inhibitory action on osteoclasts, and reducing intestinal and renal Ca2+ (re)absorption 47,48 . Nevertheless, the identification of CT expression in CS glands warrants further investigation of the role of CT in plasma Ca2+ homeostasis in fish. Besides CT, the PTH gene family was recently identified in the CS glands of a cartilaginous fish, the elephant shark 49 . One of the members, Pth1, was found to exert PTH-like activity in mammalian UMR106.01 cells and was believed to play a fundamental role in cartilaginous fish, before evolving to regulate bone development in teleosts. Surprisingly, no Pth-like transcript was detected. This observation implies that PTH-producing cells may have a different developmental origin than CS glands 50 . In addition to the identification of hormonal factors, our data showed an increased expression level of stromal interaction molecule 1 (STIM1) in the glands of SW adapted fish. STIM1, a Ca2+ sensor in the endoplasmic reticulum, mediates the activity of store-operated Ca2+ entry (SOCE) to regulate intracellular Ca2+ homeostasis. Upon Ca2+ depletion, STIM1 is translocated from the endoplasmic reticulum to the plasma membrane to activate the Ca2+ release-activated Ca2+ (CRAC) channel subunit 51,52 . A previous study in mammalian cells demonstrated that the function of the STC-1 paralog, STC-2, was to interact with STIM1 to negatively modulate SOCE 53 . Thereby, STIM1 may play a role in mediating the signal of extracellular Ca2+ to modulate STC-1 synthesis in CS glands. In addition to the Ca2+-regulatory function, early studies of CS gland physiology denoted the presence of pressor substances. In STX fishes, a decrease of dorsal aortic blood pressure was reported 18 . An injection of CS extracts increased the blood pressure of fish 54 . However, the pressor substances in the glands that increased systemic blood pressure remain unknown. Through this transcriptomic analysis, we identified the expression of atrial natriuretic peptide (ANP)-converting enzyme and endothelin-converting enzyme 1 (ECE-1). Although no significant difference in their mRNA expression levels in CS glands was measured between the FW and SW adapted fish, the two enzymes are known to indirectly regulate blood pressure.
The ANP-converting enzyme is an endopeptidase that cleaves the atrial natriuretic peptide hormone into an active form to promote natriuresis and vasodilation 55 . ECE-1 is involved in the proteolytic activation of endothelins, which have strong vasoconstrictive effects 56 . The identification of these important enzymes suggests the involvement of the glands in the regulation of blood pressure in fish and provides an explanation for the pressor effects of the gland extracts.
The functional cluster named ''ion-osmoregulation'' was formed by identifying changes in the expression levels of membrane transporters, which were interpreted as the modulation of membrane sensors that integrate extracellular signals to regulate CS gland functions. In particular, the expression level of AQP-3 was significantly reduced in the CS glands of SW fish. Studies of the functions of the water-specific, membrane-channel AQP proteins in mammals and fish suggested that AQPs have unique permeability characteristics, are widely distributed across tissues, and play important roles in the regulation of water homeostasis 57,58 . AQPs are functionally classified as osmotic-stress effectors. Long-term osmotic stress in oysters induced a reduction of AQP activities in response to osmotic challenges 59 . The reduced AQP-3 expression in the CS glands of SW fish might serve a similar function.
In summary, our work represents the first report using next generation sequencing to identify gene targets that could explain the reported physiological importance of the CS glands. Three functional clusters were defined and differential gene expression was observed in the CS glands of fish adapted to FW and SW conditions. Taken together, our data support the notion that CS glands are important in the regulation of ion homeostasis and blood pressure. Further investigation is warranted to decipher the underlying mechanisms and to characterize the additional functions of this unique endocrine gland.
Deformed Heisenberg Algebra with a minimal length: Application to some molecular potentials
We review the essentials of the formalism of quantum mechanics based on a deformed Heisenberg algebra, leading to the existence of a minimal length scale. We compute in this context the energy spectra of the pseudoharmonic oscillator and Kratzer potentials by using a perturbative approach. We derive the molecular constants, which characterize the vibration-rotation energy levels of diatomic molecules, and investigate the effect of the minimal length on each of these parameters for both potentials. We confront our results with experimental data for the hydrogen molecule to estimate an order of magnitude of this fundamental scale in molecular physics.
I. INTRODUCTION
This work is a continuation of the recent studies [1,2], where the effects of the minimal length on the vibration-rotation of diatomic molecules have been investigated through the pseudoharmonic oscillator (PHO) [1] and the Kratzer [2] interactions.
In particular, it has been concluded that the presence of the minimal length in the formalism provides a natural ultraviolet regularization in quantum mechanics [26] and in quantum field theory [15]. Furthermore, upper bounds for this elementary length have been estimated; the values differ from one application to another and mostly belong to the range 10^-6 to 10^6 fm [27,28]. The minimal length seems to depend on the energy scale of the problem and might therefore characterize the size of the system under study [13,26]. The latter finding was behind the motivation of our recent investigations [1,2] on the GUP effects in diatomic molecules, because the spatial extension of these systems is relatively large, and the effect of the minimal length may manifest clearly.
In Ref. [1], we studied the vibration-rotation energy levels of diatomic molecules in the presence of a minimal length by addressing the Schrödinger equation with the PHO potential. A more detailed investigation was carried out in Ref. [2] by taking the Kratzer interaction. It has been explicitly shown that the minimal length would have some physical importance in studying the spectra of diatomic molecules. Furthermore, an upper bound for the minimal length of about 0.01 Å has been estimated by confronting the correction of the GUP with an experimental result for the hydrogen molecule [2].
Here, we review the main results obtained in [1,2]: We give the expression of the vibration-rotation energy spectrum of diatomic molecules in the presence of a minimal length by studying the deformed Schrödinger equation with the two potentials. In both cases, we apply the obtained formulas to compute the spectroscopic constants of diatomic molecules and investigate the effect of the GUP on these constants. We show the importance that the deformation parameter can play in fitting the experimental results. Moreover, the expressions of the molecular constants derived from the energy spectrum of the PHO show the importance of generalizing these studies to the case of a two-parameter deformed Heisenberg algebra, which is presented in Sec. II.
The rest of this paper is organized as follows. In Sec. II, we review the essentials of the formalism of quantum mechanics with a GUP. Sec. III is devoted to the KP and Sec. IV to the PHO in the presence of a minimal length; Sec. V contains a summary and conclusions.
The one-dimensional deformed commutation relation leading to a minimal length reads [X, P] = iħ(1 + βP²), where β > 0 is the deformation parameter. The operators X and P can be represented in both coordinate and momentum spaces as follows: in momentum space, the simplest realization is [12] P = p and X = (1 + βp²)x, where p and x = iħ ∂/∂p represent the momentum and position operators of ordinary quantum mechanics.
In coordinate space one has [29] X = x and P = p(1 + βp²/3). The formalism of quantum mechanics with a minimal length has been extended to arbitrary dimensions (D) [12][13][14][15]. The modified Heisenberg algebra reads [X_i, P_j] = iħ[(1 + βP²)δ_ij + β′P_iP_j], with β, β′ > 0. The corresponding GUP implies a nonzero minimal uncertainty in each position coordinate (minimal length), (ΔX_i)_min = ħ√(Dβ + β′) [13]. The position and momentum operators can be represented in momentum space as [13,17] P_i = p_i and X_i = iħ[(1 + βp²) ∂/∂p_i + β′ p_i p_j ∂/∂p_j + γ p_i], where γ is a small positive parameter related to β and β′.
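For completeness, the way a minimal length follows from such a deformed relation can be made explicit. The short derivation below is the standard one-dimensional argument of the GUP literature (e.g. Refs. [12,13]); it is given here as a sketch and is not one of the numbered equations of this paper.

\Delta X\,\Delta P \;\ge\; \frac{\hbar}{2}\Bigl[1+\beta(\Delta P)^{2}+\beta\langle P\rangle^{2}\Bigr]
\;\;\Longrightarrow\;\;
\Delta X \;\ge\; \frac{\hbar}{2}\Bigl[\frac{1}{\Delta P}+\beta\,\Delta P\Bigr] \quad (\langle P\rangle=0).

The right-hand side is minimized at \Delta P = 1/\sqrt{\beta}, giving

(\Delta X)_{\min} \;=\; \hbar\sqrt{\beta}\,,

and the same argument applied to the D-dimensional algebra yields (\Delta X_i)_{\min} = \hbar\sqrt{D\beta+\beta'}.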
In coordinate space, the simplest representation of the operators X_i and P_i is [16] X_i = x_i and P_i = p_i(1 + βp²), where x_i and p_i satisfy the standard commutation relations of ordinary quantum mechanics.
Representation (8) satisfies the deformed algebra in the case β′ = 2β up to first order in β. It is especially well suited to treating the minimal length as a perturbation to the Schrödinger equation for a given interaction. In this special case the minimal length reads in 3 dimensions as (ΔX_i)_min = ħ√(5β). The Schrödinger equation with representation (8) can then be written as [p²/(2μ) + (β/μ)p⁴ + V(r)]ψ(r) = Eψ(r), where terms of order β² have been neglected.
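The structure of Eq. (10) makes the perturbative treatment transparent: the βp⁴/μ term is treated as a perturbation of the ordinary Hamiltonian. The relations below sketch the standard first-order estimate implied by this term, written with the notation used above; they summarize the generic procedure of Refs. [1,2] rather than reproducing their intermediate equations.

H \;=\; \frac{p^{2}}{2\mu}+\frac{\beta}{\mu}\,p^{4}+V(r)+\mathcal{O}(\beta^{2}),
\qquad
\Delta E_{n\ell} \;\simeq\; \frac{\beta}{\mu}\,\bigl\langle\psi^{0}_{n\ell}\bigr|\,p^{4}\,\bigl|\psi^{0}_{n\ell}\bigr\rangle
\;=\; 4\mu\beta\,\bigl\langle\bigl(E^{0}_{n\ell}-V(r)\bigr)^{2}\bigr\rangle,

where the last step uses p^{2}\,\psi^{0}_{n\ell}=2\mu\,(E^{0}_{n\ell}-V)\,\psi^{0}_{n\ell} for the unperturbed eigenstates.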
In the following sections, we will use equation (10) to study the Kratzer and the PHO potentials. The main goal of this study is to derive the expressions of the spectroscopic constants of diatomic molecules in the presence of a minimal length by using the deformed energy spectrum of these two interactions.
III. ENERGY SPECTRUM OF KRATZER POTENTIAL WITH A MINIMAL LENGTH
We are interested in the deformed Schrödinger equation (10) with the Kratzer molecular potential (KP), which has the form [33] V(r) = -g2/r + g1/r², with g1 = De re² and g2 = 2De re, where De is the dissociation energy and re is the equilibrium internuclear distance of a given diatomic molecule.
It has been shown in detail in Ref. [2] that the minimal length corrections ΔEnℓ can be computed analytically, and the energy spectrum of the KP in the presence of a minimal length is given by a closed-form expression, Eq. (12), written in terms of the dimensionless parameters γ = re√(2μDe)/ħ and λ = 1/2 + √((ℓ + 1/2)² + γ²), where n and ℓ are, respectively, the radial (vibrational) and orbital (rotational) quantum numbers, and μ is the reduced mass of the molecule.
Formula (12) shows the effect of this deformed algebra on the energy levels of KP. As it has been outlined in Ref. [2], several features of diatomic molecules can be studied by using this formula. In the following, we apply Eq. (12) to investigate the effect of the minimal length on the spectroscopic constants, which characterize the rovibrational energy levels of diatomic molecules.
Spectroscopic constants of diatomic molecules
As we will see, the energy spectrum of the KP is similar to the well-known spectroscopic formula which defines the vibration-rotation energy spectrum of diatomic molecules [32,33].
To this end, we observe that the dimensionless parameter γ in Eq. (12) is very large for most molecules (γ ≫ 1) [33]; the levels Enℓ may then be expanded in powers of 1/γ, which yields formula (14). This formula shows the different parts of the undeformed spectrum (β = 0) and the corrections caused by the minimal length. The effect of this GUP on each kind of rovibrational energy was discussed in detail in Ref. [2]. Here, we suggest some ideas on how to get information about the order of magnitude of the deformation parameter β from formula (14), and then apply it to extract the spectroscopic constants of diatomic molecules, which provides a way to obtain values of β for any molecule.
We now have basically two approaches to deal with the parameter β in formula (14). The first is to consider that β is independent of the molecular constants re and De, which have known values in molecular spectroscopy. An upper bound for β can then be estimated by assuming, e.g., that the minimal length corrections in Eq. (14) are contained in the gap between the experimental results and those predicted by formula (14) in the case β = 0. It has been argued in Ref. [2] that a better estimate may be obtained by considering the vibrational ground-state energy E00 of the hydrogen molecule (H2). This led to an upper bound for the minimal length of about 0.01 Å. The second viewpoint consists of regarding formula (14) as the energy spectrum of a three-parameter potential, i.e., one depending on De, re, and β; the values of β then depend on De and re. This viewpoint allows the three parameters of the "deformed KP" to be adjusted to the spectroscopic data. To this end, we suggest extracting the spectroscopic constants of diatomic molecules from the deformed energy spectrum (14).
We now write equation (14) in the form of the spectroscopic formula (16). The identification of Eqs. (18) and (16) leads to the deformed spectroscopic constants (19). We observe that the leading corrections of the minimal length are of order 1/γ² and concern the anharmonicity constant ωe xe and the constant Y00, which does not influence the line positions in a spectrum. The rotational constant Be, however, is not affected by this deformed algebra.
As the experimental values of the spectroscopic constants are available for diatomic molecules [32], it follows that the expressions (19) can be used to obtain values of β for each molecule. This investigation is under consideration in the general case of the deformed Heisenberg algebra (4), with β′ ≠ 2β.
IV. ENERGY SPECTRUM OF THE PSEUDOHARMONIC OSCILLATOR WITH A MINIMAL LENGTH
The pseudoharmonic oscillator (PHO) is also one of the molecular interactions used in the study of the vibration-rotation spectra of diatomic molecules. The form of this potential is V(r) = De (r/re - re/r)², where De is the dissociation energy and re is the equilibrium internuclear distance of a given diatomic molecule.
It has been shown in Ref. [1] that the minimal length correction for this potential can also be derived analytically by following the same procedure as in Sec. III. The deformed energy spectrum of the PHO, Eq. (21), is written in Ref. [1] as the undeformed spectrum E0nℓ, given by Eq. (22), plus the minimal length correction, where n = 0, 1, 2, . . . and ℓ = 0, 1, 2, . . . are, respectively, the radial (vibrational) and orbital (rotational) quantum numbers, and μ is the reduced mass of the molecule.
The effect of the GUP on the vibration-rotation energy levels of a given diatomic molecule with the PHO interaction was qualitatively investigated in Ref. [1] by using formula (21).
Here, we use this deformed spectrum to give the expressions of the molecular constants in the presence of a minimal length.
Spectroscopic constants of diatomic molecules
Formula (21) can be cast in the form of the spectroscopic formula (16), with the corresponding deformed spectroscopic constants; the value of the constant ωe ye is zero up to order 1/γ³.
In contrast to ordinary quantum mechanics (β = 0), where the spectrum of the PHO leads to zero values of the spectroscopic constants ωe xe and αe, in the presence of a minimal length these constants depend on β and have nonzero values. The basic empirical terms known in the vibration-rotation energy spectrum of diatomic molecules are thus present in this deformed version of quantum mechanics. However, the signs of ωe xe and αe conflict with those of the experimental values, at least for the diatomic molecules listed in Ref. [32].
We can conclude that the PHO in this one-parameter deformed algebra cannot be used to fit the deformed spectroscopic constants to the empirical results. It follows that the extension of this work to the general case β′ ≠ 2β is interesting; it allows us not only to adjust the parameters De, re, β and β′, but also to establish a constraint between the deformation parameters β and β′ by appropriately choosing the signs of the molecular constants. This study is being finalized and will be published elsewhere.
V. SUMMARY AND CONCLUSIONS
We have investigated, in quantum mechanics with a deformed Heisenberg algebra including a minimal length (ΔXi)min = ħ√(5β), the vibration-rotation energy spectra of diatomic molecules for two molecular interactions: the Kratzer and the PHO potentials. We have discussed how an order of magnitude of the deformation parameter β can be evaluated from the minimal length corrections and the spectroscopic data of diatomic molecules. With the Kratzer potential, an upper bound of the minimal length of about 0.01 Å has been obtained by comparing the theoretical and experimental values of the vibrational ground-state energy of the H2 molecule. On the other hand, by supposing that the deformation parameter is a third parameter of the potentials, we have derived the spectroscopic constants of diatomic molecules with both interactions and we have examined the effects of the minimal length on each of these constants. We showed that in the case of the PHO the extension of this study to the general case of a two-parameter deformed Heisenberg algebra would be mandatory and interesting because it allows a physical constraint between β and β′ to be established.
Ethanol production using hemicellulosic hydrolyzate and sugarcane juice with yeasts that converts pentoses and hexoses
The use of plant biomass as a substrate for ethanol production could reduce the existing usage of fossil fuels, thereby minimizing negative environmental impacts. Due to mechanical harvesting of sugarcane, the amount of pointers and straw has increased in sugarcane fields, and these residues have become inputs of great energy potential. This study aimed to analyze the use of hemicellulosic hydrolyzate produced from sugarcane pointers and leaves, compared with that of sugarcane juice, fermented by yeasts that convert hexoses and pentoses, in the production of the second generation biofuel ethanol. The substrates used for ethanol production consisted of either sugarcane juice (hexoses) or hemicellulosic hydrolyzate from sugarcane leaves and pointers (pentoses and hexoses), and the mixture of these two musts. Fermentation was performed on a laboratory scale with the J10 and FT858 yeast strains, using 500 ml Erlenmeyer flasks with 180 ml of must prepared by adjusting the Brix to 16 ± 0.3°; pH 4.5 ± 0.5; 30°C; 10^7 CFU/ml.
INTRODUCTION
With the decline in world oil reserves, along with price instability and the appeal for the sustainable use of natural resources, the search for alternatives such as biofuel production has intensified (Oderich and Filippi, 2013). Brazil stands out as the world's largest producer of sugarcane, with an estimated production of 25.77 billion gallons of ethanol in the 2013/2014 harvest (Conab, 2013). In order to increase ethanol production, the technology of converting lignocellulosic biomass into fermentable sugars for ethanol production is an alternative to meet the global demand for fuels (Santos et al., 2012).
Due to the expansion of energy crops in conjunction with environmental responsibility measures, an agro-environmental protocol of cooperation between the government and the sugar-energy sector was established with the purpose of ending sugarcane burning and expanding mechanized harvesting. Without previous burning of the sugarcane trash, mechanized harvesting leaves large amounts of straw and pointers in the field, reaching 5-20 tons per hectare (Foloni et al., 2010).
The dry bagasse (which is now used in cogeneration) and the sugarcane trash correspond to two-thirds of the biomass produced in the planted area; that is, only one third of the biomass of the plants is used in the production of ethanol or sugar, although the residues have great potential for the production of second generation ethanol (Fugita, 2010). Considering average cellulose contents of 39 and 43% for straw and sugarcane bagasse, respectively, there is a potential ethanol production of about 88-101 billion liters (Nunes et al., 2013).
The production of ethanol from lignocellulosic hydrolysates in an economically feasible process requires microorganisms that produce ethanol with a high yield from all the sugars present (hexoses and pentoses), have high ethanol productivity, and can withstand potential inhibitors; furthermore, the integration of fermentation with the rest of the process should be investigated (Olsson and Hahn-Hägerdal, 1996).
Several studies have been conducted focusing on viable and low-cost alternatives for the production of biofuel from biomass (Canilha et al., 2012; Cheng et al., 2008). In order to release the sugars present in the hemicellulosic fraction of the mechanized-harvesting residues and make them available for fermentation by microorganisms, prior hydrolysis of the biomass is required.
Amongst the available processes, acid hydrolysis provides recovery of up to 90% of the fermentable sugars present in the hemicellulosic fraction (Rodrigues, 2007). However, this process may generate inhibitors, such as the phenolic compounds formed mainly during partial degradation of lignin (Martin et al., 2007), which inhibit the fermentation process and result in low efficiency and low industrial production (Ravaneli et al., 2006).
The objective of this study was to evaluate the production of the second generation biofuel ethanol from a hemicellulosic hydrolyzate obtained from sugarcane leaves and pointers, from sugarcane juice, and from the mixture of the two, fermented by two different yeasts.
Raw material
The raw material obtained from the sugarcane variety RB867515 (straw, pointers and juice) was collected from a production unit in the region of Jaboticabal, SP. The straw and pointers were subjected to the hydrolysis process; before and after this process, these fractions were characterized for cellulose, hemicellulose and lignin contents (Van Soest and Robertson, 1985). The sugarcane juice was adjusted and made available for the fermentative process.
Hydrolysis
In order to obtain the hemicellulosic hydrolyzate, 2 kg of leaves and pointers, previously dried in an aerated oven at 60°C to constant weight, were used. Acid hydrolysis of the hemicellulosic fraction was performed in a 40 L reactor under the following conditions: temperature of 121°C, residence time of 20 min, and 105 ml of sulfuric acid in 20 L of water.
Musts
To obtain the hemicellulosic hydrolyzate must (HHM), the hydrolyzed fraction was first detoxified to remove fermentation inhibitors. The solution pH was adjusted to 7.0 by the addition of calcium oxide (CaO), followed by an adjustment to pH 4.0 using phosphoric acid (H3PO4). The hydrolyzate then underwent adsorption with activated carbon (1%) in a B.O.D. incubator at 50°C for 30 min. At the end of each pH adjustment step, the hydrolyzate was centrifuged and filtered (Marton, 2002), resulting in the must to be fermented.
To obtain the sugarcane juice must (SJM), the original juice was subjected to a clarification process for the removal of impurities. This process consisted of adding 300 mg/L of phosphoric acid and adjusting the pH to 6.0 ± 0.1 with analytical-grade calcium hydroxide (0.76 mol/L). The limed juice was then heated to 100-105°C, transferred to beakers and allowed to rest for 20 min so that the impurities could settle. To promote a high settling rate, the beakers contained a polymer (Flomex 9074, 2 mg/L) that groups the fine impurities into high-molecular-weight flocs. The juice was then filtered through 14 µm filter paper to separate the precipitated impurities, resulting in a clarified juice. The clarified juice was standardized with distilled water to 16° Brix (soluble solids), and its pH was adjusted to 4.5 with sulfuric acid (0.3) at a temperature of 30°C, resulting in the must.
The third must (HSJM) was obtained by mixing the sugarcane juice must and hemicellulosic hydrolysate must in the ratio 1:1 (v/v).
Yeast strains
The following yeasts were isolated and, for the co-culture, mixed at a 1:1 ratio (four replications): 1. J10 (Rhodotorula glutinis, xylose-metabolizing), obtained from a stock culture maintained at 4°C in the yeast bank of the Laboratory of Sugar and Ethanol Technology of the Department of Technology, School of Agrarian and Veterinary Sciences, UNESP, Jaboticabal, SP (Guidi, 2000); 2. FT858 (Saccharomyces cerevisiae, used for industrial ethanol production), with the following characteristics: high-yield fermentation; resistance to low pH; tolerance to high ethanol levels; high viability during cell-recycling fermentation; low foam formation; non-flocculent strain; good fermentation speed (8 h when used in the sugarcane industry); and low residual sugar levels in the must (Amorim, 2011).
The initial cell viability was determined over 72 h using a Neubauer counting chamber (Lee et al., 1981), and a cell mass of both strains containing a sufficient number of cells to start fermentation (10^7 CFU/ml) was used.
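For orientation, the cell concentration obtained with an improved Neubauer chamber is conventionally estimated as
$C\ (\mathrm{cells/ml}) = \bar{N} \times d \times 10^{4},$
where $\bar{N}$ is the mean count per large (1 mm$^2$, 0.1 µl) square and $d$ is the dilution factor; this is a generic reminder of the counting-chamber relation rather than the exact protocol of Lee et al. (1981), and the inoculum was adjusted until the count corresponded to about 10^7 CFU/ml.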
Fermentation and ethanol production
Fermentation was performed at laboratory scale in 500 ml Erlenmeyer flasks containing 180 mL of the substrate used for ethanol production: SJM, HHM or HSJM. A total cell concentration of 10^7 CFU/mL of the strains J10, FT858, or J10 + FT858 was used. After inoculation with the respective cultures at the desired cell concentration, the flasks were incubated at 30 ± 1°C with continuous stirring for 72 h. Cell viability, budding and bud viability were determined at 0, 6, 12, 24, 36, 48, and 72 h of fermentation (Lee et al., 1981).
The concentrations of sugars and ethanol were determined by HPLC (Waters, Milford, MA) with a Bio-Rad Aminex HPX-87H column under the following conditions: column temperature, 45°C; eluent, 0.005 mol/L H2SO4; flow rate, 0.6 ml/min; injection volume, 20 μL.
The aliquots collected at 0, 6, 12, 24, 36, 48, and 72 h of fermentation for analysis of sugar consumption and ethanol production were appropriately diluted and filtered through a Sep-Pak C18 cartridge (Millipore). The eluent was prepared by vacuum filtration through a Millipore membrane filter (0.45 µm, HAWP) and degassed in an ultrasound bath (Microsonic SX-50) for 15 min; the samples were subsequently analyzed by HPLC.
Statistical analysis
The results of cell viability and ethanol production were subjected to analysis of variance (F test), and comparison of the means was performed by the Tukey test (Barbosa and Maldonado, 2011).
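A minimal sketch of how this analysis could be reproduced is given below, assuming replicate measurements per treatment are available; the numerical values are illustrative placeholders, not data from this study.

# Hedged sketch: one-way ANOVA (F test) followed by Tukey's HSD test,
# mirroring the statistical treatment described above.
# The values below are illustrative placeholders, not results from this study.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sjm = np.array([33.1, 32.5, 33.8, 32.9])    # hypothetical ethanol values (g/L), 4 replications
hsjm = np.array([22.4, 21.8, 22.9, 22.1])
hhm = np.array([9.2, 8.7, 9.5, 9.0])

# Analysis of variance by the F test
f_stat, p_value = f_oneway(sjm, hsjm, hhm)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Comparison of means by the Tukey test at 5% probability
values = np.concatenate([sjm, hsjm, hhm])
groups = ["SJM"] * 4 + ["HSJM"] * 4 + ["HHM"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))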
RESULTS AND DISCUSSION
The cellulose and hemicellulose contents were lower than the values reported by Santos et al. (2012) (Table 1). After hydrolysis, there was a reduction in the percentage of cellulose and hemicellulose and an increase in the relative lignin content.
Cellulose and hemicellulose have a low calorific value, and the sugars released after the hydrolysis process were used as substrate for ethanol production. Lignin has a high calorific value and can be used in cogeneration. The values reported for cellulose, hemicellulose and lignin in sugarcane bagasse are around 48, 7.8 and 34.5%, respectively. The differences from our material are explained by the characteristics of the straw and sugarcane tips used in this study, which are structurally less rigid than bagasse from sugarcane stalks.
The average values of Brix, pH, sulfuric acid concentration, total monosaccharides and phenolic compounds of the musts are shown in Table 2. The three musts (SJM, HHM and HSJM) had similar characteristics in terms of pH and Brix. Regarding total acidity, the highest values were found in the hemicellulosic hydrolyzate, probably because sulfuric acid (0.5%) was added in the hydrolysis process.
The concentration step increased the content of sugars and phenolic compounds, which were higher than the values reported in the literature. Phenolic compounds and other compounds that remain after detoxification can inhibit fermentation (Polakovic et al., 1992), directly affecting cell viability and ethanol production (Ravaneli et al., 2006; Garcia et al., 2010).
The presence of toxic compounds may cause fermentative organisms to use reducing sugars inefficiently and to form less product (Mussatto and Roberto, 2004). Martinez et al. (2000) observed a synergistic effect when inhibitor compounds were combined, including a variety of phenolic and aromatic compounds and several types of acids derived from lignin degradation, which reduced ethanol production by E. coli.
The results of yeast cell viability in the three substrates are given in Table 3, which clearly shows that cell viability in the sugarcane juice must was 22.33% higher than in the hemicellulosic hydrolyzate must. J10 showed the best cell viability, the worst viability was found for FT858, and the mixture of the two yeasts showed intermediate values.
There was a continuous decrease in cell viability over the 72 h of fermentation for all strains. This behavior is due to the natural metabolism of the yeasts: they transform sugar into fermentation products such as ethanol, acids, glycerol and other compounds that accumulate in the culture medium, inhibiting their metabolic processes and negatively affecting cell viability (Amorim et al., 1996).
The fermentation process was evaluated for 72 h, which at an industrial scale is considered too long for ethanol production. In the present study, most of the fermentation occurred within the first 10 h, with cell viabilities of approximately 90, 86, and 78% for the sugarcane juice must, the mixture (juice and hydrolyzate), and the hydrolyzate must, respectively. Very low values, around 40%, were found for the hydrolyzate must at the end of the process, due to the combination of inhibitory compounds that accumulate over time.
During fermentation of the sugarcane juice must, cell viability was statistically the highest, followed by that of the mixture of hemicellulosic hydrolyzate and sugarcane juice. Among the yeasts, the best performance was found for J10 and for the mixture of J10 and FT858, whereas the strain FT858 showed the lowest cell viability.
Budding was highest in the sugarcane juice must and lowest in the hemicellulosic hydrolyzate. When the strains used in the present study were compared, no statistically significant differences in budding were found. The optimum budding index of a fermentation process should generally range between 5 and 15% (Amorim et al., 1996); in the present study, the hemicellulosic hydrolyzate showed values below this optimum (3.67% on average), probably because of the presence of inhibitor compounds.
Bud viability was also statistically highest in the fermentation of the sugarcane juice, followed by that in the must composed of the mixture of hemicellulosic hydrolyzate and sugarcane juice. Among the yeasts, the best performance was found for J10, FT858 showed the lowest performance, and the mixture of J10 and FT858 gave intermediate results.
Literature reports suggest that hexoses and pentoses are completely consumed in the first few hours of fermentation, as glucose is the universal carbon source (Schirmer-Michel et al., 2008); similar results were reported by Cheng and coworkers (2008) for sugarcane bagasse hydrolyzates. In this study, however, consumption of xylose (the main sugar in the hemicellulosic hydrolyzate) was not complete (Table 4).
Our results are in accordance with the report of Toivari et al. (2001), in which higher concentrations of phenolic compounds and acids were held responsible for lower ethanol production. Evaluating the effect of fermentation time (Figure 1) on the musts, at 24 h of fermentation the ethanol concentration obtained with the clarified sugarcane juice was about 70% higher than that of the hemicellulosic hydrolyzate (around 9 g/L) over the same period. Overall, the sugarcane juice must produced the highest level of ethanol (33 g/L), followed by the must composed of the mixture of hemicellulosic hydrolyzate and sugarcane juice (22 g/L).
The variation in ethanol production, cell viability, budding and bud viability was mainly attributed to the composition of the pretreated substrates, which contained high concentrations of inhibitory compounds that were not efficiently removed during detoxification and may have negatively influenced the final result. Some toxic compounds can stress fermentative organisms, leading to inefficient sugar utilization and decreased product formation (Silva, 2004). The final ethanol concentration varies according to the concentrations of sugars, nutrients, contaminants and inhibitors present in the substrate. Accordingly, a single detoxification step was not sufficient to remove the acids and phenolic compounds, which negatively influenced ethanol production from the hemicellulosic hydrolyzate. The detoxification method has to be chosen according to the concentrations of the compounds and the degree of microbial inhibition they cause; for certain types of compounds, better results can be obtained by combining two or more different detoxification methods (Silva, 2004).
In the present investigation, the ethanol level obtained from the hemicellulosic hydrolyzate (about 9 g/L), although lower than that obtained from the clarified sugarcane juice and lower than most values reported in the literature for sugarcane juice, was higher than the 1.5 g/L reported by Fugita (2010), who used sugarcane bagasse as raw material and the J10 yeast.
In conclusion, the highest cell viability and ethanol production were observed in the clarified sugarcane juice using strain J10. The detoxification process used promoted only a partial removal of acids and phenolic compounds. The yeast co-culture produced the best performance in ethanol production. Sugarcane pointers and straw are an important raw material to be considered for ethanol production.
Figure 1. Graphical representation of the performance of the musts and yeasts (J10, FT858 and J10 + FT858) over a 72 h period for ethanol production.
Table 1. Cellulose, hemicellulose and lignin contents of the straw and sugarcane tips before and after hydrolysis of the hemicellulosic fraction.
Table 2. Analytical characteristics of the pretreated substrates used for ethanol production.
Values are represented as means. SJM, sugarcane juice must; HHM, hemicellulosic hydrolyzate of sugarcane leaves and pointers; HSJM, mixture of these two substrates.
Table 4. Analysis of variance and comparison of means by the Tukey test (5% probability) for xylose consumption by the yeasts J10, FT858, and J10 + FT858. *Significant at 1% (P<0.01); means followed by the same uppercase letter in a column are not significantly different according to the Tukey test. | 2019-03-28T13:42:17.722Z | 2015-02-11T00:00:00.000 | {
"year": 2015,
"sha1": "f0df2dcbec949b19b0ccceb2d6437e75e95485d3",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/2AA965850411.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "861827d87921ab9b0272c1031f5893bcc02a9afa",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Biology"
]
} |
19751303 | pes2o/s2orc | v3-fos-license | Structural Basis of Enzymatic (S)-Norcoclaurine Biosynthesis*
The enzyme norcoclaurine synthase (NCS) catalyzes the stereospecific Pictet-Spengler cyclization between dopamine and 4-hydroxyphenylacetaldehyde, the key step in the benzylisoquinoline alkaloid biosynthetic pathway. The crystallographic structure of norcoclaurine synthase from Thalictrum flavum in its complex with the dopamine substrate and the nonreactive substrate analogue 4-hydroxybenzaldehyde has been solved at 2.1 Å resolution. NCS shares no common features with the functionally correlated "Pictet-Spenglerases" that catalyze the first step of the indole alkaloid pathways and conforms to the overall fold of the Bet v1-like proteins. The active site of NCS is located within a 20-Å-long catalytic tunnel and is shaped by the side chains of a tyrosine, a lysine, an aspartic acid, and a glutamic acid. The geometry of the amino acid side chains with respect to the substrates reveals the structural determinants that govern the mechanism of the stereoselective Pictet-Spengler cyclization, thus establishing an excellent foundation for understanding the finer details of the catalytic process. Site-directed mutations of the relevant residues confirm the assignment based on the crystallographic findings.
Alkaloids are among the most important plant secondary metabolites, comprising approximately 12,000 compounds grouped into several families (1). Most alkaloids are derived from amines produced by the decarboxylation of amino acids such as histidine, lysine, ornithine, tryptophan, and tyrosine. The coupling of the amines to other metabolites represents the first step in the biosynthesis of alkaloids belonging to diverse families. So far, however, this important entry reaction has been fully characterized only for a few biosynthetic pathways.
Benzylisoquinoline alkaloids are tyrosine-derived compounds and include a number of biologically active substances that are widely employed as pharmaceuticals, such as morphine, codeine, berberine, papaverine, etc. The enzymatic pathways leading to the amazing diversity of benzylisoquinoline derivatives have been shown to originate from a common route in which the first committed step consists of the Pictet-Spengler condensation of dopamine with 4-hydroxyphenylacetaldehyde (4-HPAA) to yield the benzylisoquinoline central precursor, (S)-norcoclaurine (Fig. 1). The reaction is highly stereospecific, and the chirality of the (S)-norcoclaurine product is essential to drive the intricate pathway of substrate stereoselective enzymatic reactions toward the terminal metabolites (2,3).
The Pictet-Spengler reaction entails the acid-catalyzed electrophilic addition of an iminium ion to a substituted benzyl (benzylisoquinoline alkaloids) or indole (indole alkaloids) species. The reaction mechanism consists of a two-step process in which the iminium ion is generated first from the condensation between the aldehyde carbonyl and the phenylethylamine (or tryptamine) substrate, followed by a Mannich-type cyclization to yield tetrahydro-benzylisoquinolines (or tetrahydro-β-carbolines) (4). Enzymes that catalyze the synthesis of (S)-strictosidine (the most important natural β-carboline), a central intermediate for indole alkaloids, have been recently characterized from the structural and mechanistic point of view. Strictosidine synthase, which catalyzes the formation of (S)-strictosidine from tryptamine and secologanin, has been cloned from Catharanthus roseus and Rauwolfia serpentina (5). Co-crystallization of strictosidine synthase from R. serpentina in the presence of tryptamine or secologanin substrates yielded key structural information on substrate binding and orientation within the active site of the enzyme (6). These studies provided the first example of an enzyme-catalyzed Pictet-Spengler reaction within indole alkaloid biosynthesis.
In contrast, little structural information is presently available for norcoclaurine synthases (NCS), the key enzymes in the pathway that leads to the biosynthesis of benzylisoquinoline alkaloids. NCS from Thalictrum flavum (NCS; EC 4.2.1.78) has been identified recently (7) and has been fully characterized in terms of substrate specificity and enzymatic activity (8,9). The enzyme shares no sequence similarity with the functionally correlated indole alkaloid synthases, whereas it shows low (12-15% identity) but significant homology with proteins belonging to the vast family of Bet v1 and pathogen-related PR10 proteins, whose physiological functions are still poorly understood (9). NCS enzymes from T. flavum, Papaver somniferum, Eschscholzia californica, and Coptis japonica (CjPR10A) are capable of catalyzing the Pictet-Spengler condensation of 4-HPAA with dopamine at neutral pH values and do not require any cofactor. It is worth considering that NCS activity has also been demonstrated for a distinct enzyme from C. japonica (CjNCS1) that is able to catalyze (S)-norcoclaurine synthesis by the condensation of dopamine with 4-hydroxyphenylpyruvate or pyruvate in addition to 4-HPAA (10). Enzyme kinetics studies on T. flavum NCS demonstrated sigmoidal binding of dopamine and hyperbolic binding kinetics for 4-HPAA, thus suggesting a cooperative mechanism at the basis of the catalytic behavior of enzymes from different species (6-8). At present, however, the structural basis of the NCS catalytic mechanism remains elusive. A first clue on the overall structure and catalytic site has been obtained by NMR investigations coupled to homology modeling on structurally related proteins (11). The overall α-carbon backbone was shown to conform to the fold of the major birch allergen protein Bet v-1, but the substrate binding site and amino acid side chains relevant for catalysis have not been assigned with certainty. So far, it is also unclear whether the enzyme is assembled in a quaternary structure that could explain the reported cooperativity. In fact, the association state in solution has been reported to be dimeric in gel filtration measurements, whereas NMR investigations indicated that the protein is mostly monomeric (7,11).
In the present work, we report the first crystallographic structure of NCS from T. flavum in its complex with dopamine and the nonreactive substrate analogue 4-hydroxybenzaldehyde (PHB). The x-ray data unveil the geometry of the active site and indicate clearly the nature of the structural determinants that govern the asymmetric Pictet-Spengler condensation at the basis of (S)-norcoclaurine synthesis. Determination of the catalytic activity in site-specific mutants confirms the roles of the catalytically competent residues identified by the x-ray data.
Protein Expression and Purification of SeMet Derivative-
The SeMet NCS expression construct was designed, synthesized, and optimized for Escherichia coli codon usage with the Gene Optimizer Assisted Sequence Analysis (Genart-Ag). The NCS protein, truncated at the first 19 amino acids (12), with an N-terminal MTGS sequence and a His tag at the C terminus, was subcloned into the NdeI and XhoI restriction sites of the vector pET22-b. Transformation of chemically competent E. coli strain BL21DE3 and protein expression were performed as described by Pasquo et al. (12). The protein was purified using a 5-ml HisTrap Fast Flow column (GE) equilibrated with a 50 mM Tris-HCl buffer at pH 7.5. A linear gradient of imidazole concentration from 0 to 0.5 M (buffered at pH 7.5) was applied, and the protein eluted at an imidazole concentration between 0.25 and 0.30 M (12).
Crystallization-SeMet NCS crystals grew under conditions similar to those identified for the His-tagged wild type protein (12). The protein concentration was ~11 mg/ml, and the crystallization buffer was 0.1 M acetate buffer at pH 4 containing 1.4 M ammonium sulfate and 0.2 M sodium chloride. The crystallization temperature was 298 K. The SeMet crystals were cryo-protected in a solution containing 75% v/v of the reservoir solution and 25% v/v of glycerol and mounted on nylon loops. The crystals were then flash-frozen by quick submersion into liquid nitrogen.
Data Collection and Data Analysis of the SeMet Derivative-A three-wavelength multiple wavelength anomalous diffraction data set was collected from SeMet-NCS on the ID14-2 beamline at the synchrotron radiation source Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung (Berlin, Germany), using a CCD detector. Complete data sets (120° of rotation each) were collected at the peak (λ = 0.97966 Å), inflection (λ = 0.97984 Å), and remote (λ = 0.97800 Å) wavelengths, at a temperature of 100 K. Each frame was collected with an exposure time of 2 s and a 1.0° oscillation range.
The three data sets were processed with DENZO (13) and scaled with SCALEPACK (13). The three-wavelength multiple wavelength anomalous diffraction data set of SeMet-NCS was further scaled and analyzed with SCALEIT from the CCP4 suite (14). The autoindexing procedure indicated that the crystals belonged to the trigonal space group P3121. All three data sets were more than 96% complete, with R_merge values below 10%. All of the statistics of the scaling procedure are reported in Table 1. A value of V_M = 2.89 Å³ Da⁻¹ was calculated according to Matthews (15), assuming 6 asymmetric units within the unit cell. Each asymmetric unit contained two monomers with molecular masses of 24 kDa each.
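For reference, the scaling and packing statistics quoted here follow the standard definitions (a general reminder, not specific to this data set):
$R_{\mathrm{merge}} = \dfrac{\sum_{hkl}\sum_{i}\left|I_{i}(hkl) - \langle I(hkl)\rangle\right|}{\sum_{hkl}\sum_{i} I_{i}(hkl)}, \qquad V_{M} = \dfrac{V_{\mathrm{cell}}}{Z\,M_{r}},$
where $I_i(hkl)$ are the individual intensity measurements of reflection $hkl$, $V_{\mathrm{cell}}$ is the unit cell volume, $Z$ is the number of protein molecules in the cell (here 6 asymmetric units of 2 monomers each), and $M_r$ is the monomer mass (24 kDa); the quoted value of 2.89 Å³ Da⁻¹ follows from the SeMet cell dimensions, which are not reproduced in full here.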
Structure Solution and Refinement of Norcoclaurine Synthase-A heavy atom site search and phase determination were performed with the program SOLVE (16) using data collected at the three wavelengths (indicated under "Data Collection and Data Analysis of the SeMet Derivative") in the 20-2.7 Å resolution range. A single solution was found with six heavy atom sites, with a Z score of 14.3 for a signal-to-noise ratio of 0.3, leading to a mean figure of merit of 0.46. The program ARP/wARP (17) was used to improve the initial phases and automatically build part of the protein skeleton. The program disclosed the presence of two molecules in the asymmetric unit, as indicated by the V_M calculation. The phases thus obtained were improved using solvent-flattening and 2-fold averaging density modification (18). Refinement was performed using the program REFMAC5 (19), which applies the maximum likelihood method. The refinement statistics are presented in Table 1. Model building was performed using the program COOT (20). The final model (a dimer) includes 327 residues (residues 19-27 and 31-194 in monomer A and 40-194 in monomer B); each monomer contains four SeMet and 18 water molecules. The final R_crys for all resolution shells (74-2.72 Å), calculated using the working set reflections (12882), is 23.9%, and the free R value calculated using the test set reflections (676) is 28.5%. The final R_crys calculated for the highest resolution shell (2.72-2.79 Å) using the working set reflections (898) is 33.7%, and the free R value calculated using the test set reflections (30) is 49%. The quality of the model was assessed with the program PROCHECK (21). The most favored regions of the Ramachandran plot contained 88.4% of non-glycine residues. The atomic coordinates and the structure factors have been deposited in the Protein Data Bank (accession number 2VNE).
Structure Solution and Refinement of Norcoclaurine Synthase in Complex with Dopamine and 4-Hydroxybenzaldehyde-The crystals of SeMet norcoclaurine synthase, obtained as reported above, were soaked in the mother liquor solution containing dopamine (2 mM) and 4-hydroxybenzaldehyde (2 mM) at 294 K for 24 h. Attempts to soak the crystals using the natural substrate, 4-HPAA, were unfruitful because of the intrinsic instability of the substrate itself (half-life of ~1 h) under the soaking conditions described above. The data were collected as 0.65° oscillation frames using the CCD detector on the x-ray beamline ID-23-1 at the European Synchrotron Radiation Facility (Grenoble, France), at a wavelength of 0.972 Å at 100 K, using 25% v/v of glycerol as cryoprotectant. Data analysis, performed with DENZO (13), indicated that the crystal belonged to the trigonal space group P3121 and had the following unit cell dimensions: a = b = 86.474 Å, c = 117.977 Å. The data were scaled using SCALEPACK (13) and had an R_merge = 7.5% and a χ2 = 1.012.
The structure was solved by molecular replacement using monomer B of SeMet norcoclaurine synthase without ligands as the search probe (Protein Data Bank entry 2VNE). The rotational and translational searches, performed with the program MOLREP (22) in the resolution range of 10-3.0 Å, produced a clear solution corresponding to a dimer in the asymmetric unit. Refinement was performed using the program REFMAC5 (19), and model building was performed using the program COOT (20) (Table 1). The final model includes 327 residues (residues 19-27 and 31-194 in monomer A, 40-194 in monomer B), a dopamine molecule, two 4-hydroxybenzaldehyde molecules, 176 water molecules, two acetate anions, and five chloride anions. The final R_crys for all resolution shells (50-2.09 Å), calculated using the working set reflections (28573), is 22.07%, and the free R value, calculated using the test set reflections (1508), is 26.67%. The final R_crys calculated for the highest resolution shell (2.144-2.089 Å) using the working set reflections (2045) is 26.5%, and the free R value calculated using the test set reflections (99) is 35.6%. The most favored regions of the Ramachandran plot contain 92.8% of non-glycine residues. The atomic coordinates and the structure factors have been deposited in the Protein Data Bank (2VQ5).
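The crystallographic residuals (R_crys and R_free) quoted in the refinement statistics above follow the usual definition (again a general reminder rather than anything specific to these structures):
$R = \dfrac{\sum_{hkl}\left||F_{\mathrm{obs}}| - |F_{\mathrm{calc}}|\right|}{\sum_{hkl}|F_{\mathrm{obs}}|},$
computed over the working-set reflections for R_crys and over the excluded test-set reflections for R_free.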
Catalytic Activity and Site-directed Mutagenesis-K122A, E110A, Y108F, and Y108A mutants were obtained on the same pET22-b vector and were expressed and purified as described under "Protein Expression and Purification of SeMet Derivative." The Y108A mutant, however, resulted in the expression of insoluble, possibly unfolded protein and could not be recovered in soluble form by guanidine denaturation/refolding experiments. The three soluble mutants and the wild type protein were thus analyzed for enzymatic activity using the circular dichroism-based assay proposed by Luk et al. (9). A Jasco J-715 spectropolarimeter was used for kinetic measurements (see supplemental materials).
RESULTS
Analysis of the x-ray structures 2VNE and 2VQ5 revealed a dimeric assembly in the asymmetric unit. A crystallographic tetramer composed of two asymmetric units, each containing a single dimer (A1-B1 or A2-B2), is represented in Fig. 2. The structure of each NCS monomer revealed that NCS conforms to the overall fold adopted by proteins belonging to the Bet v1-like superfamily, which includes plant phytohormone carriers, pathogen-related proteins (PR10), MLN64-START domains, and the recently characterized tetracenomycin aromatase/cyclase (24,25). Accordingly, the ensemble of secondary structure elements consists of a seven-stranded antiparallel β-sheet wrapped around a long C-terminal helix (α3) and two smaller α-helical segments (α1 and α2) (26). However, the C-terminal helix is longer in NCS than in Bet v1-homologous proteins and is formed by two helical segments joined by an extended stretch (residues 173-177). Further, NCS has an additional N-terminal domain that forms a short α-helix and a long flexible segment that yields interpretable diffraction patterns only in one monomer within a single dimeric asymmetric unit. This segment folds into a β-strand secondary element that stabilizes the interface of the crystallographic 2-fold symmetry-related dimer within the tetramer (Fig. 2).
Each monomer shows an accessible cleft, located between the seven-stranded antiparallel β-sheet and the three α-helices, that extends through the protein matrix forming a 23.4-Å-long tunnel (Fig. 3a). The wider opening (4.2 Å diameter) is formed by an array of hydrophobic residues and a polar patch composed of the Tyr108, Tyr131, Tyr139, and Glu103 side chains located at the entrance of the cavity. Deeper in the cavity, the side chain of Lys122 protrudes toward the interior of the tunnel, forming a "hook" capable of intercepting the carbonyl group of the aldehyde substrate (Fig. 3b). In correspondence with Lys122, the tunnel is thus restricted to a diameter of 1.2 Å. The smaller opening of the catalytic tunnel (3.2 Å) lies beside the Lys122 side chain and Asn117 and is solvent-accessible. X-ray data obtained on crystals soaked with the dopamine substrate and the nonreactive substrate analogue 4-hydroxybenzaldehyde (PHB) indicate that the two molecules adopt a stacked configuration, with the respective aromatic rings lying on almost parallel planes (Figs. 4 and 5). The PHB carbonyl oxygen is hydrogen-bonded to the Lys122 amino group, whereas the phenolic oxygen is in contact (2.44 Å) with the carboxyl moiety of Asp141. Dopamine is held in place by the stacking interaction with PHB and by hydrogen bonding of its C-1 phenol hydroxyl with the Tyr108 phenol hydroxyl. Most significantly, the dopamine C-5 carbon atom lies at 2.7 Å from the carboxyl group of Glu110, suggesting a key role for this residue in the catalytic mechanism (Fig. 6). Comparison between the structures of the unliganded (2VNE) and substrate-bound (2VQ5) derivatives shows that only very small adjustments of the residues just described occur upon substrate binding in the crystal state.
Enzyme kinetic measurements on the wild type protein and on the three mutants K122A, E110A, and Y108F are summarized in Table 2 and described in detail in the supplemental materials (paragraph 3). No signal corresponding to the formation of (S)-norcoclaurine was observed for the K122A mutant, whereas lower but significant activity was detected for both the Y108F and E110A mutants. The kcat values obtained for the wild type enzyme were lower than those obtained by Luk et al. (9), possibly because of variability in the determination of the absolute protein concentration.
DISCUSSION
The enantioselective synthesis of plant indole or benzylisoquinoline alkaloids has always been a challenging task for organic chemists and has led, within the past three decades, to the development of innovative processes in asymmetric synthesis (27,28). In general, the cyclization step of the Pictet-Spengler reaction follows a typical acid-base-catalyzed mechanism in which an iminium ion (Schiff base) attacks an electron-rich aromatic carbon with subsequent release of the aromatic proton. The propensity of the iminium carbon to form a bond with the appropriate aromatic carbon is enhanced in the case of indole species because of the acidity of the C-H bonds at positions 1 and 2. Thus, the first step in indole alkaloid synthesis is thermodynamically and kinetically favored under physiological conditions, and catalysis is required essentially to drive the correct stereochemistry in the ring closure step. In contrast, harsher reaction conditions are required in the case of substituted phenylethyl derivatives because of the lower acidity of the benzenoid proton (27). In this case, the positional contribution of ring substituents and their electron-donating or -withdrawing effects play a key role in determining the reaction rate of the ring closure step. In this framework, it is of extreme interest to understand the strategy employed by nature to evolve enzymes capable of catalyzing the latter type of Pictet-Spengler reaction. The present data offer a fairly complete view of the structural determinants that govern the reaction in the NCS enzyme. The geometry of the NCS active site is dominated by the presence of three strong proton exchangers, Lys122, Asp141, and Glu110, and of a hydrogen bonding donor, Tyr108. These residues shape the binding site of the two aromatic substrates and dictate the mechanism proposed in Fig. 6. The reciprocal orientations of dopamine and PHB (Fig. 3) and their relationships with neighboring amino acid side chains immediately suggest a general acid-base reaction mechanism that matches closely the classical two-step Pictet-Spengler scheme and eventually leads to the stereospecific ring closure to yield (S)-norcoclaurine. The presence of a strong interaction (2.6 Å) between the amino group of Lys122 and the carbonyl oxygen of the aldehyde, coupled to the off-plane position of the carbonyl with respect to the phenyl ring, is suggestive of a proton transfer from the ammonium ion to the carbonyl oxygen and consequent stabilization of a partial positive charge on the carbon atom. Such a configuration supports the idea that Lys122 is also involved in the release of a water molecule from the carbinolamine moiety subsequent to the nucleophilic attack of the dopamine amino group on the aldehyde carbonyl (Fig. 6, a and b). The electrophilicity of the imine double bond thus formed is the driving force of the subsequent Mannich-type cyclization. Ring closure entails a rotameric rearrangement of the iminium ion adduct (steps c and d of Fig. 6) followed by electrophilic substitution at the C-5 position (steps d and e of Fig. 6). The rotameric rearrangement (clockwise rotation of the bond connecting the iminium nitrogen and the adjacent dopamine carbon atom) occurring between steps c and d is a prerequisite for effective ring closure in that it brings the iminium carbon atom into proximity of the ring C-5 atom.
On the basis of the starting configuration of the two substrates, it can be inferred that the protein exerts a steric constraint on the adduct (step c) by allowing free rotation in the clockwise direction only. After deprotonation, assisted by the carboxyl moiety of Glu110, the S-stereospecific product is formed (Fig. 6f). The scheme indicated in Fig. 6 is in fair agreement with the mechanism proposed by Luk et al. (9) on the basis of kinetic isotope effects. The authors suggested correctly that step d is driven by the transient formation of a phenolate ion (C-2 phenolate) that attacks the iminium ion in the first step of the aromatic substitution process. Thus, phenolate formation is judged essential to favor the release of the C-5 proton. However, no proton acceptor is found in the vicinity of the C-2 dopamine hydroxyl that could serve to stabilize the phenolate species. It may be envisaged that solvent water molecules from the wider opening of the catalytic tunnel eventually scavenge the phenolic C-2 proton. In turn, the Tyr108 phenol hydroxyl is hydrogen-bonded to the C-1 hydroxyl group of the dopamine substrate, thus increasing the partial positive charge on the C-1 oxygen. This interaction may favorably contribute to the transient loss of aromaticity necessary for the ring closure step (Fig. 6d). Thus, the cyclization step may be envisaged as a concerted process in which Glu110 acts as a base on the catecholate moiety transiently stabilized by the Tyr108 hydrogen bond to the C-1 hydroxyl. The mechanism thus proposed is exquisitely stereospecific in that, given the position of Glu110 with respect to the dopamine ring orientation, C-5 proton abstraction may only occur from a single possible configuration of the intermediate (Fig. 6e).
The proposed scheme is supported by the kinetic data on selected site-specific mutants given in Table 2. In particular, alanine substitution of residue Lys122 completely abolishes the stereoselective synthesis of (S)-norcoclaurine. The residual activity observed in the Y108F and E110A mutants may be interpreted as impaired dopamine binding, as demonstrated by the significant increase in the Km value for dopamine in both mutants. Alternatively, the lack of hydrogen bonding to the C-1 hydroxyl of dopamine and the consequent destabilization of the phenolate intermediate may account for the reduced activity of the Y108F mutant. The kinetic data, however, were obtained by selectively monitoring (S)-norcoclaurine formation by CD spectroscopy and do not allow quantitative discrimination of the stereochemical control of the reaction in the mutants with respect to the wild type protein. A careful assessment of the enantiomeric excess of the (S)-norcoclaurine chiral product versus the R product (still not reported even for the native protein) will be necessary to distinguish between background reaction contributions and possible nonstereoselective catalysis in some of the mutants. In this framework, the finer details of the NCS catalytic mechanism, such as the nature of the rotameric rearrangement (steps c and d of Fig. 6) and the role of Tyr108 in the electronic configuration of the dopamine substrate, still need to be fully clarified. Moreover, further studies will be necessary to understand the observed cooperativity in enzyme kinetics (7)(8)(9). At present, only minor ligand-linked tertiary structural changes have been detected at the active site (a different rotameric arrangement of Phe122 in subunit A versus subunit B), whereas the quaternary structures of the substrate-free and substrate-bound proteins appear to be essentially superimposable.
On the basis of the present understanding of (S)-norcoclaurine biosynthesis, it is of interest to compare the proposed reaction scheme and the structural determinants that govern the catalytic mechanism of NCS with those established for the functionally related strictosidine synthase STR1. As shown in Fig. 7, the basic tenet that envisages acid-base catalysis as a common conduit for Pictet-Spengler cyclizations is respected in both NCS and STR1. However, the strategy for achieving the asymmetric condensation is reversed in the two enzymes. NCS employs the positive charge of Lys122 as a strong polarizing agent for the carbonyl group of the aldehyde substrate, upon which the amine substrate acts as a nucleophilic agent. Conversely, STR1 employs the negatively charged carboxyl moiety of Glu309 to hold in place and eventually deprotonate the nitrogen atom of the amine substrate, which subsequently reacts with the incoming aldehyde substrate. It might be argued, however, that NCS also displays a glutamic acid residue (Glu110) within the catalytic site, and hence alternative reaction schemes could be envisaged that entail the direct participation of the carboxyl moiety of Glu110 in the binding of the dopamine amino group as a first step of the reaction. Two lines of evidence argue against this possibility: (i) the coordinates of dopamine with respect to the carboxyl group of Glu110 indicate unambiguously that the amino group points directly toward the carbonyl moiety of the aldehyde substrate in its adduct with Lys122 and not toward the Glu110 carboxyl, and (ii) independent NMR and enzyme kinetics measurements indicate clearly that the aldehyde substrate, 4-HPAA, binds first to the enzyme and hence must form the observed adduct with Lys122 before the amine substrate accesses the catalytic site (11). Thus, NCS and STR1 appear to adopt different mechanisms to achieve the Pictet-Spengler cyclization. In this framework, the idea of a possible common mechanism at the first step of alkaloid biosynthesis cannot be envisaged (6). Moreover, given the low sequence similarity and the poor structural overlap between the NCS monomer and a single STR1 domain, it is also difficult to hypothesize that the enzymes catalyzing the entry steps of different alkaloid pathways originate from a common ancestor. In contrast, the high structural similarity of NCS with proteins belonging to the vast Bet v1 family calls for a re-evaluation of the possible functional roles of the whole protein family (24,25). At present, despite the low sequence similarity, the overall fold and the tunnel comprised between the β-pleated sheet and the three α-helices appear to be strongly conserved among the members of this family of known three-dimensional structure (26). As an example, the positioning of the zeatin molecule (a cytokinin with an adenine scaffold) within the tunnel of the zeatin-binding protein from Vigna radiata provides a particularly striking example of structural analogy with substrate-bound NCS (26). In fact, the zeatin-binding site is formed within the large cavity inside the protein molecule, between the β-sheet and the C-terminal helix α3. The zeatin molecule is accommodated within the binding site by hydrogen bonding interactions with a threonine (Thr139) and a glutamate (Glu69) and is stabilized by extensive van der Waals contacts with aromatic side chains lining the bottom of the binding cavity (Phe26, Phe56, Phe102, Tyr98, and Tyr142).
Comparison of the amino acid sequences among members of the cytokinin-binding protein family also reveals that the residues forming hydrogen bonds and hydrophobic residues engaged in van der Waals' interactions with the inner ligand are conserved. It follows that the topological positions important for cytokinins binding are at least partially superimposable to those relevant for catalysis in NCS. This is not true, in general, for other members of the PR10 superfamily, because structural data show high variability in shape, volume, and chemical properties of the ligand binding tunnel. More structural data on members of this protein family will be necessary to unveil whether there are common features in the topology of residues involved in ligand binding or catalysis. | 2018-04-03T01:55:59.253Z | 2009-01-09T00:00:00.000 | {
"year": 2009,
"sha1": "50b2d4726890ec3471c6371f24630d5672c708dc",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/284/2/897.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7634309504ede6ca0831b3cadc7e25aafe7e7291",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
214043307 | pes2o/s2orc | v3-fos-license | PEDAGOGICAL EDUCATION CLUSTER : CONTENT AND FORM
The globalization process taking place all over the world requires the clustering of education as well as of other areas. Globalization has led to a sharp increase in competition in the market of educational services, and in this competitive environment the cluster is a means of counteracting the pressures of globalization. The integration of education, science and industry around a common goal increases their potential, and the pedagogical education cluster provides this cooperation. The article is based on the view that a cluster can ensure the competitiveness of the subjects operating in the market of educational services. The concept of the pedagogical education cluster is described, and its needs, mechanisms, principles and directions of implementation are identified. The authors comment extensively on the goals, objectives, principles and directions of the pedagogical education cluster and describe the organizational and practical significance and the theoretical basis of its implementation. The authors have sought to base their views on the opinions of Western scholars, and scientific research by Western scientists on the educational cluster is analyzed. Scientific conclusions are drawn concerning the social, economic, legal, marketing and pedagogical implications of clustering education.
Introduction
At the present stage of civilization, the complex development of society and the emergence of its negative consequences, alongside the positive aspects of development, present new challenges to mankind. It is now impossible to find any region or state fully protected from this interaction. A deeper understanding of the phenomenon and its peculiarities is important in order to minimize the negative impact, and increase the positive impact, of the currently intensifying globalization process on the world. An in-depth study of the nature and notion of globalization enables us to adapt to it, to change its direction where needed, and to use its power 'against itself'. The scientific development of methods and mechanisms for the positive and creative use of the globalization process is clearly one of the topical problems of today. One of the means of turning the power of globalization 'against itself' is the cluster model. Clusters, which emerged in the manufacturing sectors of the economy, have been penetrating Western education for more than a decade. Educational clusters are not analogues of production clusters, but they have many similarities. In Central Asia there are some scientific studies on clusters in Kazakhstan; in the other countries of the region, however, neither research nor practical work has been done. Chirchik State Pedagogical Institute has been developing the scientific-theoretical basis for the clustering of pedagogical education for a year.
METHODOLOGY OF RESEARCH
In the present study, we used the method of analysis and synthesis of scientific work on pedagogical education clusters, a comprehensive approach to the introduction of innovations in the field of pedagogy, and a comparative analysis method for studying clusters abroad.
LITERATURE REVIEW
M. Porter's cluster theory has entered many spheres, including education, during the last decade. The contribution of Russian scientists to this research is worth noting: the notions, areas of application and characteristics of the education cluster have been investigated in their works.
Studying and analysing research concerning the cluster approach to education makes it possible to gather several viewpoints on it. Thus, the cluster approach is: -within a separate sphere (education, economics, etc.), a mechanism for strengthening the organizational forms of sectoral integration of entities interested in achieving competitive efficiency [1, pp. 24-25]; -a structure consisting of several equal parts which, even as an optional component, keeps its complete functional ability to work [2, p. 253]; -a set of interconnected business entities that are integrated into the structure of an organization based on modernity and a regular approach [3, pp. 298-301]; -a combination of the needs of production and of training programs [4, pp. 7-13]; -a tool for supporting innovation in the education-science-production system [5, pp. 73-76]; -an innovative and effective way of forming the human resources potential for the organization's future economy [6, pp. 1-7]; -a reorganization of the education system on the basis of the principle of succession, following the integration of different educational institutions (kindergarten - school - college - university) [7, pp. 210-212].
Russian scientists have carried out research on the theoretical bases of the formation and development of educational clusters, including the cluster approach to vocational education. Seven key cluster strategies are outlined by the same researchers: -the geographic strategy, with clusters ranging from small local to global scale; -the horizontal strategy, an extended form of cluster consisting of multiple clusters; -the vertical strategy, which means uniting several clusters of subjects at the same level; -the lateral strategy, in which clusters unite subjects from different structures, which can provide economies of scale and lead to new combinations; -the technological strategy, visible in sets of structures using the same technology; -the focused strategy, with clusters located around one center; -the quality strategy, a form of cluster focused on how organizations implement cooperation [9, pp. 75-76]. In our view, it is preferable to classify the cluster strategies listed by the Russian researchers as cluster forms, because they describe the form and types of the cluster rather than its priorities (strategies).
The effective development of an educational cluster also depends directly on the following conditions and factors: -the availability of technological and scientific infrastructure (D.A. Yalov); -the mental readiness of the participants to cooperate (D.A. Yalov, V.P. Tretyak); -the existence of a strong regional strategy focused on cluster development; -the ability to successfully apply project management techniques; -advanced information technology that facilitates the exchange of information between cluster entities [8].
Therefore, to successfully implement scientific and practical activities related to the clustering of pedagogical education, it is necessary to adapt the existing technological and scientific infrastructure, to achieve a full understanding of this innovative process among the participants through outreach activities, to create the opportunity for them to realize that cooperation brings many benefits, to develop well-thought-out cluster development strategies and ways and means of successful project management, and to enable the rapid exchange of information between the participants. This is an extensive organizational process that requires time and well-targeted activity, but the content of these organizational processes is not separate from our current work. Indeed, according to Russian scientists involved in the clustering of pedagogical education, such as N.N. Davydova, B.M. Igoshev, A.A. Simonova and S.L. Fomenko, the real effects of cluster development become apparent only after 5-7 years [8].
RESULTS AND DISCUSSION
A. Description and Details of Pedagogical Education Cluster
Based on the cluster concepts in the scientific literature, the concept of a pedagogical education cluster can be defined as follows: a pedagogical education cluster is a mechanism that strengthens the integration of equal entities, technologies and human resources in close contact with each other in order to meet the need for competitive pedagogues in a certain geographical area.
The pedagogical education cluster is a mimetic method (from the Greek mimeomai, 'to imitate'), which involves the creative implementation in the education system of a model that has led to economic development. The pedagogical education cluster forms the innovation chain "education - science - educational tools - technology - management - business", and its scientific study is one of the most important tasks of today's pedagogy. It is becoming increasingly necessary to maintain the natural connection between the links that make up the educational complex, from the point of view of interest and efficiency, based on the socio-economic conditions and needs of a particular region.
The main products of the training complex are competitive staff and educational services. The ultimate goal of the education cluster is to improve educational and scientific processes. This requires significant organizational and structural changes in the education system, along with considerable changes in the management, structure and quality of the training system. At the same time, there is a need to search for new forms and methods at all stages of the work, to strengthen the relations between all types of education on the basis of common purpose and shared interests, and to promote integration.
When appropriate educational approaches are established in the management structure of educational institutions, it becomes possible to evaluate the current situation, to predict outcomes accurately, to take timely actions and to make adjustments to organizational management. The clustering of the education system provides the right approach to addressing such issues. After all, the processes of cluster integration are considered the most powerful because they involve all available material, financial, technological, informational, methodological and human resources. The cluster flexibly enables its structures to create a management system and to predict development reliably, thus ensuring mutual trust [8]. Qualitative changes in the components of the education system, in meaningful activities, in general and special management functions, in programs, technologies and methods, and in the processes related to the development of the participants' human resources make it possible to create a cluster environment.
The cluster model of pedagogical education develops in general areas related to teaching, the creation of educational literature, the improvement of the scientific potential of pedagogical staff, and the continuity of education and training. This shows the general methodological nature of the problem. At the same time, these general areas become specific in such fields as the management and organization of education, types and areas of education, continuity and integration, and teaching methods and tools.
B. Aims and Objectives of Pedagogical Education Cluster
The main objectives of the Pedagogical Education Cluster are: to ensure effective succession in the field of pedagogy and to promote the best students into the teaching profession; to create an environment for the training of future professionals based on innovative practices; to reduce the time young professionals need to acquire professional skills; to create a new generation of educational, methodical and scientific literature, tools and didactic materials for pedagogical education; to improve the scientific and scientific-pedagogical potential of pedagogical education; to accumulate and integrate intellectual resources around topical issues of the development of pedagogical education; to find and apply different forms of linking education, science and pedagogical practice; to improve the mechanisms ensuring the continuity of education and upbringing; to provide the opportunity for quick contact with preschool, secondary and higher education institutions and other partners in the preparation of pedagogical staff; and to scientifically justify the need for association, interdependence and collaboration between the educational units.
To this end, the Innovation Cluster of Pedagogical Education pursues the following tasks: the effective use of innovative pedagogical technologies to improve the quality of education; the consistent organization of scientific activity in the field of pedagogy; ensuring the continuity of the content of the basic and auxiliary educational tools across the stages of education; organizing training courses for teachers of educational institutions in the region to fill gaps in their knowledge; organizing scientific-practical seminars aimed at eliminating problems related to the teaching of subjects in secondary schools; strengthening scientific cooperation with research institutes, research centers and higher education institutions in order to enhance the scientific potential of the institute; attracting teachers who are able to do research to investigative activities in secondary schools; and arranging internships at leading foreign universities in order to acquire the best international practices in the field of pedagogy.
The cluster of pedagogical education provides an opportunity to identify problems in the system, which in turn can identify its strengths and weaknesses. It is important that information about the state of affairs in the cluster is very objective. With the help of a cluster, the government and education authorities will be able to effectively apply the experience and research results of the development of education in the cluster region. The cluster approach to education enables governments to provide specific tools for effective interaction within the system, to better understand problems, and to plan the scientific basis for development in the region.
All of the above leads to the following conclusions: firstly, the educational cluster is of great scientific and practical significance, since it allows the system to achieve a new synergistic quality through integration; secondly, it creates the environment and conditions that make the system competitive; and finally, it has political, economic and social significance.
The whole set of activities in this process is aimed at enhancing the competitiveness of education, which is the cornerstone of training scientific and professional personnel. However, it is important to remember that not all entities combined within a cluster can produce real results immediately.
The importance of the pedagogical education cluster can be categorized as follows: in the economic field, the formation of an effective market for educational services; in the social sphere, employment of graduates of pedagogical educational institutions; in the field of marketing, promotion of innovative educational technologies and new opportunities in the educational and upbringing activities of educational institutions; in the legal field, creation of the legal framework for cooperation within the cluster, as well as the transition to new forms of management of educational institutions; in the field of pedagogy, joint design of teaching staff training in the system of continuous education.
C. Principles of Pedagogical Education Cluster
It is necessary to clearly define the goals and objectives of the innovation cluster of pedagogical education and to determine what principles it should follow in order to foresee the horizons of its activities. These are: ► Natural relevance is cooperation between cluster subjects based on a natural relevance of interests, i.e., territorial, sectoral, or functional interdependence. Researchers argue that clusters cannot be formed artificially. Consequently, the cluster is a product of natural relevance resulting from personal interest, and its primary purpose is to maintain competitiveness, quality, and results. Clusters are the best and most effective ways to strengthen existing natural links, direct dispersed potential toward specific goals, create and strengthen the legal framework, and accelerate the exchange of information and innovation. As conditions for providing naturalness of relevance, the following can be considered: -Geographical proximity; -Dynamics of education quality (progress); -Strengthening the capacity of teachers; -Rational use of the scientific potential of universities and research institutions; -Improving the quality of teaching tools; -Common goal setting, etc.; ► Inseparability and continuity mean creating a chain of interconnection among the cluster subjects, with each section that forms the chain having a specific function and with no gaps allowed in the continuity of the chain. It should be noted here that inseparability is a phenomenon of meaning, while continuity is one of form. That is, inseparability is ensured by providing a natural sequence of the content of education and by considering the age and physiological features of the trainees. Inseparability can be observed both within a particular type of education and between different types of education. Continuity, in turn, occurs when there are no gaps in the sequence of learning (or in the explanation of a particular subject). Consequently, inseparability and continuity are interdependent, shared, and at the same time separate processes that should be directly linked to the quality of education and to the transitions between the types of education. The discussion of the pedagogical education cluster around this phenomenon justifies the importance of inseparability and continuity.
As conditions for providing inseparability and continuity, the following can be stated: - ► Succession is the positioning of the cluster subjects in a single vertical line, followed by a gradual movement from bottom to top, from simple to complex. Succession is a phenomenon of both form and content, which means the distribution of the form and content of education between the types of education. This distribution should take into account the specifics and objectives of the types of education, state educational standards, and the requirements for graduates. Succession is a key prerequisite for learning content. It can occur both within a particular type of education and between different types of education. This inter-disciplinary sequence is a phenomenon related to the pedagogical cluster, and the processes associated with its provision correspond to the problems that need to be addressed within the cluster. This theoretically justifies succession as an important principle of the pedagogical cluster.
As prerequisites for providing succession, the following can be specified: -Development of normative documents, tools, forms, and technologies related to education and upbringing on the bottom-up principle, from simple to complex; -Development of normative documents, tools, forms, and technologies related to education and upbringing, taking into account the age and physiological features of pupils and students; ► Inheritance is the systematic meeting of the need for qualified teachers as a result of the cluster's role in generational exchange, mentorship activity, and the clustering of pedagogical education. Inheritance is a process associated with the increasing prestige of the teaching profession in the community. One of the missions of the pedagogical education cluster is to explore the issues of social protection of teachers and to address issues related to respect for teachers in the community.
As prerequisites for inheritance it is possible to: -Strengthen outreach activities to improve the status and prestige of the teaching profession in society; -Establish targeted training of gifted students for the teaching and pedagogical profession; -Apply rational selection; ► Modernization is the introduction of modern scientific achievements in the field, the use of the best international practices, and the rational use of information and communication technologies. The principle of modernity can be understood in two ways: first, as the modernization of production processes (problems related to education, science, and the introduction of modern scientific achievements into production), and second, as whether the products (graduates) meet modern requirements. It is well known that it is impossible to produce competitive, high-quality products without modernizing the production processes. This requires an innovative approach to the content of education, its processes, tools, and technologies. The fact that a cluster cannot exist without innovation theoretically justifies modernity as a principle of the pedagogical education cluster.
As conditions of modernity it is possible to point out: -Continuous updating and adoption of modern information and communication technologies in the process of pedagogical education; -Creation of a functioning mechanism for integrating modern scientific achievements into the educational process; -Modernizing the content and form of education; -Alignment of state requirements for graduates with those of the educational systems of developed countries.
► Routing is the targeting of each activity within the cluster and the ability to predict and evaluate its outcome. The pedagogical education cluster calls for project-based directions and the implementation of several well-targeted and scientifically based projects in each area. It is desirable that all aspects of education, such as scientific research, information-analytical, scientific-methodological, and experimental-innovative work, be taken as project areas, and that a specific project contribute to quality and efficiency in a particular area. Working in this way further clarifies, simplifies, and focuses the concept of pedagogical education clusters and activities in this area. The orientation toward these aspects justifies routing as a separate principle of the pedagogical education cluster.
As a prerequisite for providing routing, the following may be indicated: -Clear purpose; -Targeting of each activity; -Focusing on staff training as the main criterion; -To approach the concept of competitiveness from a global perspective, not from a local or national perspective; -Development of a methodology for predicting and evaluating the effectiveness of activities; -Providing projects that are exactly directed and guaranteed.
► Generality of purpose is the unification of cluster subjects around a single global goal, in addition to their specific objectives. An important factor in the process is finding the overall purpose shared by the activities of all subjects in the cluster. The overall objective is linked to the strategy, which implies a far-reaching plan. This may not be directly relevant to an individual subject, but the success of the cluster ensures effective activity of the subject, to which it is indirectly relevant at the same time. The interests of all subjects that make up the cluster should be reflected in the general purpose. Otherwise, the cluster cannot function fully. Such a disruption in the cluster chain causes the system to malfunction or not to work at all. In these respects, generality of purpose is justified as an important principle of the pedagogical education cluster. As prerequisites for providing the principle of generality of purpose, the following can be stated: -Understanding that private interest is directly linked to the general purpose; -The ability to step out of one's shell when defining strategic directions and plans; -Long-term vision (the existence of long-term plans); -Taking the "voice" of each subject constituting the cluster into account when setting a common goal. ► The privatization of interests is the legal, social, and economic interest of each subject in the cluster model of pedagogical education. The private interests of the subjects ultimately serve the common interest. Without such benefits, there will be no cluster of pedagogical education. Economic clusters were likewise created to increase profits and competitiveness. Whereas economic clusters see benefit as a material matter, the cluster of pedagogical education focuses on the social, i.e., increasing the capacity of the staff and the quality of education. Social interest also ultimately contributes to the material interest of the sector. In general, issues related to increasing the capacity of the staff and material incentives are interrelated concepts and are considered parallel processes within each cluster. The principle of natural interconnection occurs only when the most rational private interest is provided for. Consequently, private interests provide a natural connection, and these two principles are inextricably linked. Strengthening either of these two principles will in itself strengthen (or, conversely, weaken) the other.
As a prerequisite for supplying the principle of privatization of interests, it can be pointed out: -To have an interest in integration; -Private interest should not cause withdrawal from the common interest; -Equality between human resource development and material incentives; -Equality of interests of subjects within the cluster.
► Mutual control is the creation of a unified system of educational subjects integrated within the cluster model, and the interest of each subject in the functioning of the system in a flawless manner, knowing the failure or omission of a particular subject affects the performance of other entities, and the establishment of a system for evaluating subjects. It is clear that the pedagogical education cluster is a phenomenon of a particular system, which demands the principle of mutual control. The more the system is perfect, the stronger mutual control can be reached. In this regard, it is important to develop objective criteria for assessing the activity of subjects, which are based on the common purpose and the private interest.
As a prerequisite for the principle of mutual control, the following can be stated: -Integration as a single system; -Systematically working; -Understanding that private interest also depends on the quality of activities of other subjects; -Development of mechanisms of interaction; Based on the above-mentioned principles, it will be possible to identify several key areas in the creation of a pedagogical education cluster. These are: First, having the common purpose among the cluster subjects; Second, the legal basis for the joint activities of the subjects; Third, a system of mutually beneficial relationships between subjects that are united within a cluster; Fourth, the coordination of the management mechanism; Fifth, the activities of the subjects do not deviate from the general purpose; Sixth, adhering to the principle of mutual control between subjects.
D. The Directions of Pedagogical Education Cluster
The cluster of pedagogical education should be organized in the following areas: 1) the direction of education; 2) the direction of educational tools; 3) the direction of education and science; 4) the direction of education and production; 5) direction of education management.
The above-given classification covers all areas of pedagogical education, with each sector being integrated. The content of these areas and networks encompasses all forms, methods and technologies of cooperation between educational, scientific, methodological, educational tools and management.
The content of the pedagogical education cluster includes: 1. The direction of education: Development of mechanisms to identify, classify, and eliminate existing problems; Development of a mechanism for the vertical and horizontal movement of educational and methodological potential; Control and management of the quality of lessons; Development and implementation of the simplest and most appropriate mechanisms for determining educational and methodical effectiveness; Establishment of inter-direction tutoring activities in educational and methodical areas. 2. The direction of educational tools: Enriching and enhancing the content of textbooks and manuals; Improvement of auxiliary literature and the didactic provision of lessons; Achieving effective use of information technologies and pedagogical technologies.
3. The direction of education and science: Strengthening integration between science and education; Establishment of inter-directions tutoring activities in the field of science.
Increasing binary research in collaboration between teachers from universities and secondary schools (and preschool institutions), whereby scientific developments are produced by university professors and their application in practice is carried out by secondary school teachers; Development of a mechanism to meet the demand for scientific and pedagogical potential; 4. The direction of education and production: Strengthening integration between education and production; Increasing binary research in collaboration between higher education teachers and production staff, whereby scientific developments are produced by university faculty members and their implementation is carried out by production staff; Combining theory and practice; Improving the mechanisms for the rapid implementation of scientific achievements, taking into account the intensity of development; 5. The direction of education management: Carrying out research on the innovative management of education; Creation of a system of territorial administration that harmonizes the interests of all types of education; Implementation of innovative methods and tools for management, including information and communication technologies.
The effectiveness of the cluster rests on the interaction and openness that provide mutual support and control for all participants. Proximity, internal relationships, constant personal contacts, and shared openness facilitate communication and information sharing. Clustering requires innovations in the field of education, the availability of new components and manuals, testing of the educational process, and new trends in the development of the education system. Implementation of the educational cluster requires the establishment of pedagogical conditions and experimental verification of the effectiveness of the training of qualified specialists. The role of higher education in the cluster is evident in the creation of innovative products. The clusters, research institutes, and production facilities will become the base of practice and will have the opportunity to participate in the training of specialists through their research and educational activities, in accordance with their needs and prospects of development.
All subjects of the cluster form and organize a multilevel system of training of qualified specialists. Both the employer and the secondary schools, secondary special and vocational education institutions and higher education are all part of the process.
The process of continuous education is a multilevel system, with changes at the social level and professional development of the subjects creating favorable conditions for its development. Therefore, the main idea of continuing education is to adapt the status, desires, and abilities of a person to the world of work and social relations in a rapidly changing world.
CONCLUSION
In conclusion, all work done should be directly related to the level of primary, professional, high professional and vocational training of the cluster participants and should be aimed at the implementation of the scientific and educational cluster. At the same time, educational institutions within the cluster and other organizations that are part of the cluster must work together for a common purpose. Training should also include additional and distance learning. It is also important to create the necessary conditions for the active involvement of a number of research institutions, industrial enterprises and other institutions of the republic in the cluster.
As a result of this: › Firstly, the need for qualified pedagogical staff is met with good quality (social consequence); › Secondly, an effective market for educational services will be formed (economic consequence); › Thirdly, there will be opportunities for rapid promotion of innovative educational technologies, new opportunities in educational work of educational institutions (the consequence of marketing); › Fourthly, the legal and regulatory framework (legal consequence) will be established for the interaction of educational institutions, as well as the transition to a new organizational form of management of the education system; › Fifth, the design of the pedagogical staff training system in conjunction with the cluster entities (pedagogical consequence) will be implemented.
Thus, the implementation of a cluster approach to education strengthens continuity and communication in the education system and the integration processes between the types of education. One of the major challenges facing the scientific community is to view this as an innovation in education, to develop mechanisms to measure its effectiveness, and to develop ways of implementing it. The cluster approach will radically change the content of public education policies and provide an opportunity to look at the relationships between subjects through the criteria of development and effectiveness. As a result, the cluster creates a powerful mechanism for the integration of human resources, organizations, and technologies in the region as an innovative approach to education. | 2020-02-20T09:04:36.873Z | 2020-01-30T00:00:00.000 | {
"year": 2020,
"sha1": "680f2a059908fbb394ee0dd5b39b09fa266b33dc",
"oa_license": null,
"oa_url": "https://doi.org/10.15863/tas.2020.01.81.46",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fb57ef43b5d37597496e9dde99e5bdd81eac4ead",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
14150045 | pes2o/s2orc | v3-fos-license | Adaptation of Mycobacterium smegmatis to an Industrial Scale Medium and Isolation of the Mycobacterial Porin MspA
The adaptation of the organism to a simple and cost-effective growth medium is mandatory in developing a process for large-scale production of the octameric porin MspA, which is isolated from Mycobacterium smegmatis. A fermentation optimization with the minimal nutrients required for growth has been performed. During the fermentation, the iron and ammonium chloride concentrations in the medium were varied to determine their impact on the observed growth rates and cell mass yields. Common antibiotics to control contamination were eliminated in favor of copper sulfate to reduce costs. MspA has been successfully isolated from the harvested M. smegmatis using aqueous nOPOE (n-octyloligooxyethylene) at 65°C. Because of the extraordinary stability of MspA, it is possible to denature and precipitate virtually all other proteins and contaminants by following this approach. To further purify the product, acetone is used for precipitation. Gel electrophoresis confirmed the presence and purity of MspA. A maximum of 840 µg (via Bradford assay) of pure MspA per liter of the optimized simple growth medium has been obtained. This is a 40% increase with respect to the previously reported culture medium for MspA.
INTRODUCTION
MspA is an octameric porin in the outer membrane of M. smegmatis [1][2][3]. The inner pore of MspA is the major general diffusion pathway for the organism and provides a passage for hydrophilic solutes. The crystal structure of MspA shows the assembly of eight monomers into a goblet-shaped pore [4]. Each monomer contains two consecutive 16-stranded β-barrels with hydrophobic outer surfaces (the so-called constriction zone, which is marked with an arrow in Fig. 1).
These surfaces are thought to induce the anchorage of the porin in the lipid bilayer [1]. Infrared and circular dichroism spectroscopy revealed that heating to 92 °C and 112 °C is required to dissociate the MspA octamer and to unfold the sheet domain in the monomer, respectively [6]. The thermal stability of the MspA octamer exceeds even the remarkable stability of the porins of Gram-negative bacteria for every condition tested and is not diminished in the presence of 2% SDS or within the pH range from 2 to 11. Due to its superior stability, anisotropy, and the ability to form a hydrophilic homopore, MspA is suited for many and surprisingly different applications. It has been deposited on HOPG surfaces to form nanopores [5] and complex protein networks [7], and letter- or star-shaped microstructures when deposited together with PMMA [8]. MspA reconstitutes in artificial [9,10] and natural cell membranes [11], as well as polymer layers [12]. It forms cation-selective ion channels that show voltage gating [12]. Probably the most intriguing discovery is that MspA can stand freely with its axis of symmetry perpendicular to a mica surface without the support of a self-assembled monolayer or polymer layer [13]. A self-assembled monolayer is a layer of self-positioning molecules connected to a surface.
Due to its versatility, MspA represents a high-value product (currently $35 per 1 µg of purified protein). Our goal is the efficient production of MspA from minimal medium and its efficient purification, so that a market price of less than $10 per 1 µg could be reached. This would permit the broad application of MspA as a biological nanotool. One possible application may be the use of MspA as a template for copper nanoparticles in the research on nano-sized digital storage devices [10]. Furthermore, MspA may be applied in the investigation of mycobacterial channel-blocking agents, which represent a new strategy in the treatment of tuberculosis, and of the supply of M. tuberculosis with hydrophilic nutrients [14]. The Niederweis group at the University of Alabama at Birmingham has developed a method to selectively extract Msp porins out of M. smegmatis grown in Middlebrook 7H9 medium [15]. This procedure exploits the extreme thermal stability of MspA by heating M. smegmatis cells to 100 °C in the presence of 0.5% of the non-ionic detergent n-octylpolyoxyethylene and yields mainly Msp porins with very little contamination by other proteins [15]. The detergent extracts of the strain M. smegmatis ML10 (MspA MspC) do not show any porin band in Coomassie-stained protein gels. A background expression of Msp porins is still detectable in immunoblots using an Msp-specific antiserum [15]. It will be demonstrated here that efficient production of MspA can be attained in a relatively simple and inexpensive medium, with the total cost for the fermentation medium estimated at $20 per µg of purified protein and with a productivity of 12 µg protein per liter of medium per hour. The composition of the different types of media is shown in Table 1.
Materials
The composition of the Middlebrook 7H9 medium is shown in Table 2. Additionally, 4 ml glycerol, 2 ml Tween 80, and 300 µl hygromycin were added.
All chemicals mentioned were purchased from Sigma-Aldrich unless otherwise noted.
Medium Preparation
All chemicals were purchased from Sigma-Aldrich. A 1 l Nalgene bottle was filled with 400 ml distilled H2O. Potassium phosphate, sodium chloride, sodium nitrate, and ferric chloride were added (for amounts, see Table 2), and the pH was adjusted to 7.2 with hydrochloric acid. A second 1 l Nalgene bottle was filled with 560 ml H2O, zinc chloride and glucose were added, and the pH was adjusted to 7.2 (HCl). Both bottles were autoclaved and the contents were then combined. 15 ml mineral solution, 2 ml Tween 80, and 2 ml of a 0.01% copper(II) sulfate-malachite green solution were added.
Growth Cycle
The growth cycle is structured in three steps: maintenance, medium-size, and large cultures. To start a new maintenance culture, 4.5 ml medium and 0.5 ml inoculum were combined in a 14 ml polystyrene round-bottom Falcon tube. The growth of the large cultures was carried out in 2000 ml KIMAX shaking flasks. The flasks were filled with 960 ml medium and inoculated with a complete medium-size culture, bringing the total volume in the flask to 1000 ml. The flasks were sealed with aluminum foil to prevent contamination and placed in the shaker for 3-5 days (see medium-size cultures for conditions).
MspA Purification Process
A large culture (about 1 l) was centrifuged for one hour at 3,700 g using 800 g capacity bottles. The supernatant was decanted, while the cells were suspended in 50 ml PBS buffer, and the resulting suspension was divided into 50 ml conical tubes for centrifugation at 10,000 g for one hour using a fixed-angle rotor. 10 ml PEN buffer was added to the cell pellet to disperse the cells in the 50 ml conical tubes after the supernatant was removed. The dispersed cells were collected in a 200 ml Nalgene bottle and kept at 60-65°C in a water bath under agitation (magnetic stir bar, 2 cm length, 200 rpm). The temperature range is very important for optimum results. 10 µl nOPOE was added and the cell suspension was kept in the water bath for one hour. The suspension was then transferred into a 50 ml conical tube and again centrifuged at 10,000 g for one hour. A 50 ml conical tube with water was used as a counterweight. The supernatant was collected in a 100 ml Nalgene bottle and 10 ml pre-cooled acetone (-20°C) was added before storing the bottle in a freezer (-20 °C) overnight. After centrifuging the cold suspension for 30 minutes at 10,000 g, the supernatant was discarded and the pellets were dissolved in 10 ml PBS buffer before ultrafiltration (Millipore Centrifugal Filter Units, MWCO 3000 Dalton, Millipore, Billerica, MA). Centrifuging at 10,000 g for 30 minutes concentrates the product further.
Product Analysis: Gel Electrophoresis
To prepare an acrylamide separation gel for gel electrophoresis, 3.30 ml acrylamide-BIS solution (30:1) was added to a 15 ml conical tube. Then 4.50 ml gel buffer, 3.30 ml bidistilled H2O, and 2.25 ml glycerol were added and mixed well. Next, 0.015 ml TEMED and 0.135 ml 10% APS were added, the solution was filled into the glass chamber, and the top was covered with a layer of water. For the collection gel, 0.80 ml acrylamide-BIS (30:1), 2.60 ml gel buffer, and 4.20 ml bidistilled H2O were added to a 15 ml conical tube. Then 0.015 ml TEMED and 0.160 ml 10% APS were added and mixed. The covering water was removed from the separation gel and the collection gel solution was added on top. A comb was placed for pocket formation.
After 30 minutes, the gel was placed in the electrophoresis cartridge and the comb was removed. After loading the samples and molecular marker into the pockets, the electrophoresis process was started using a constant voltage of 125 V.
Electric power was supplied by a BioRad PowerPac Basic, and the gel cartridge was a BioRad Mini-Protean Tetra Cell.
The method was adapted from references [16] and [17].
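As a quick sanity check on the recipe above, the final monomer concentrations of the two gels can be estimated from the listed volumes; the short sketch below does that calculation, under the assumption (not stated in the text) that the 30:1 acrylamide-BIS stock is a 30% (w/v) monomer solution.

```python
# Rough check of the acrylamide percentages implied by the gel recipe above.
# Assumption: the 30:1 acrylamide-BIS stock is a 30% (w/v) monomer solution.
STOCK_PERCENT = 30.0

separation_gel = {"acrylamide_bis": 3.30, "gel_buffer": 4.50, "water": 3.30,
                  "glycerol": 2.25, "TEMED": 0.015, "APS_10pct": 0.135}  # volumes in ml
collection_gel = {"acrylamide_bis": 0.80, "gel_buffer": 2.60, "water": 4.20,
                  "TEMED": 0.015, "APS_10pct": 0.160}                    # volumes in ml

for name, recipe in (("separation gel", separation_gel),
                     ("collection gel", collection_gel)):
    total_volume = sum(recipe.values())
    percent = recipe["acrylamide_bis"] * STOCK_PERCENT / total_volume
    print(f"{name}: {total_volume:.2f} ml total, ~{percent:.1f}% acrylamide")
# -> separation gel ~7.3%, collection (stacking) gel ~3.1%
```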
HPLC
Protein-containing solutions were analyzed using a Gilson/Hewlett-Packard HPLC workstation (Gilson 322 pump, Gilson, Middleton, WI; Hewlett-Packard series 1100 detector, Agilent, Santa Clara, CA) employing a POROS HQ anion exchange column (4.6 mm diameter, Applied Biosystems, Carlsbad, CA). The mobile phase consisted of a gradient of two buffers (5 mM SDS, and 5 mM SDS with 1 M NaCl). The HPLC procedure was adapted from reference [14]. The flow rate was 0.50 ml/min and the injection volume was 50 µl.
Sterilization
Autoclaving (Yamato Sterilizer SM 200, Santa Clara, CA, USA) at 110°C with a 30 min holding time was used for sterilization. 70 vol% ethanol in distilled water was used to sterilize equipment that could not be autoclaved.
Growth Experiments
M. smegmatis can be grown efficiently in a simple and well-defined medium, as shown in Fig. (2). The effectiveness of the copper complex in suppressing competing microorganisms was demonstrated by the fact that none of the fermentations became contaminated, even without any great measures to maintain sterility.
The experiment was stopped after 70 hours of total growth time with a maximum optical density of 0.84. A wet cell mass of 2.0 g ± 0.1 g was harvested, which corresponds to a yield of 10 wt% based on the carbohydrates in the medium.
To analyze the impact of iron on the growth of M. smegmatis, a second approach with various iron contents was started. Three cultures with additions of 20, 50, and 100 mg ferric chloride per liter were investigated. The experiment was stopped after 40 hours of total growth time with a maximum optical density of 0.96. Again, a cell mass of 2.0 g ± 0.1 g was obtained.
Surprisingly, no conclusive influence of iron, in the range tested, on the growth of M. smegmatis was observed (Fig. 3), although iron has been reported to be of some importance in mycobacterial protein synthesis [19]. The absence of a consistent impact of iron concentration in the range tested perhaps points towards a very effective iron collection mechanism in mycobacteria [20].
The third approach focuses on the nitrogen source. Besides sodium nitrate, we introduced ammonium chloride as a second nitrogen source to further promote growth in our cultures (Fig. 4). The attempt led to extended growth of the M. smegmatis cultures. Maximum growth times, as well as maximum OD600 and cell mass yield were increased: The cell mass harvested is shown in Table 3, along with the initial ammonium chloride concentration.
Therefore the maximum cell mass yield relative to the carbon source was 15wt%.
The addition of a different nitrogen source has a major impact on the growth and yield of M. smegmatis. The addition of 0.5 g NH4Cl led to an increase in biomass yield of 35% in comparison to the first approach. These results were exceeded by the addition of 1.0 g NH4Cl; here, the comparison with the first approach results in a 50% cell mass gain. However, the growth did not reach the level discussed above when 2.0 g NH4Cl was added. Apparently, a somewhat toxic level was reached, resulting in partial growth inhibition. The harvested cell mass of M. smegmatis decreased to the level of the basic approach, and its growth showed only minor improvements.
The same trend as above has also been observed in the carbon-conversion yield to cell mass. The yield relative to the amount of the consumed carbohydrate substrate was increased from 10 wt% in the reference experiments and the iron supplement experiments to approximately 15 wt%.
Fig. (2). Standard recipe for minimal medium, all cultures grown under the same conditions (37°C, 75 rpm): 20 g glucose, 0.5 g sodium nitrate, 1 g monobasic sodium phosphate, 1.2 g sodium chloride, 0.05 g zinc chloride, 15 ml liquid minerals, 2 ml Tween 80, 2 ml copper complex solution; line drawn to guide the eye only.
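The yield figures quoted in this section follow directly from the harvested wet cell mass and the 20 g of glucose per liter of medium; the short calculation below reproduces them. The 2.7 g and 3.0 g values are inferred from the reported 35% and 50% gains over the 2.0 g reference run and are not measured numbers from Table 3.

```python
# Cell-mass yield relative to the glucose supplied (20 g per liter of medium).
GLUCOSE_G_PER_LITER = 20.0

wet_cell_mass_g = {
    "reference / iron series": 2.0,           # measured
    "+0.5 g NH4Cl (about +35%)": 2.0 * 1.35,  # inferred from the reported gain
    "+1.0 g NH4Cl (about +50%)": 2.0 * 1.50,  # inferred from the reported gain
}

for run, mass in wet_cell_mass_g.items():
    yield_wt_percent = 100.0 * mass / GLUCOSE_G_PER_LITER
    print(f"{run}: {mass:.1f} g wet cells -> {yield_wt_percent:.0f} wt% of glucose")
# -> 10 wt% for the reference runs and ~15 wt% for the best NH4Cl supplement,
#    matching the values stated in the text.
```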
MspA Extraction and Analysis
The extracted product MspA was analyzed in three steps. The first step, gel electrophoresis, shows the purity of the product and verifies the presence of MspA. Our results were identical to earlier findings by Niederweis et al. [6,15]. Fig. (5) shows the MspA band on the left side and the molecular marker on the right. The marker indicates a size of approximately 160 kDa, and the single band present demonstrates the high purity of the extraction product. However, the slight haze down the left lane indicates that there were likely some denatured protein fragments in the sample. Fig. (5) also shows the HPLC analysis of our extraction product. Two peaks at retention times of about 14 and 25 minutes are clearly discernible. Both fractions were collected, and a gel electrophoresis experiment with the second fraction shows the presence of MspA. Comparison with the literature indicates that the first peak is MspC, a porin quite similar to MspA regarding size and structure [15].
The results of the Bradford assay show a concentration of about 600 µg protein per ml (±5.6%), at a yield of 1.4 ml per extraction, corresponding to the 840 µg of pure MspA per liter of medium stated above. Fig. (6) shows the calibration standard for the assay. | 2018-05-08T17:35:49.145Z | 0001-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "125370883a9085db7ea706a3e42f0d71f222a393",
"oa_license": "CCBYNC",
"oa_url": "http://benthamopen.com/contents/pdf/TOMICROJ/TOMICROJ-7-92.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "125370883a9085db7ea706a3e42f0d71f222a393",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
271286038 | pes2o/s2orc | v3-fos-license | Examining Movement Patterns, Skeletal Muscle Mass, and Hip Mobility in Office Workers With or Without Lower Back Pain: An Analytical Cross-Sectional Study
Objectives: The purpose of this study was to clarify the relationship between Functional Movement Screen (FMS), skeletal muscle mass, and hip mobility in office workers with or without chronic lower back pain (LBP), as well as to determine whether the above items differed between office workers with or without chronic LBP. Methods: This study utilized an analytic cross-sectional design. The participants were 35 office workers (14 in the LBP group and 21 in the non-lower back pain group, or NLBP) who were willing to cooperate with the request for cooperation in this study. Movement patterns were assessed by FMS and skeletal muscle mass was measured by bioelectrical impedance analysis. Hip mobility was measured by prone hip extension (PHE) and straight leg raising. The correlations between each item and differences in the presence or absence of LBP were analyzed. Results: The LBP and NLBP groups showed different correlations (p<0.05) between total and subcategory scores and skeletal muscle mass. Total FMS score (p=0.02, r=-0.40) and PHE angle (p=0.01, r=0.43) were significantly higher in the LBP group than in the NLBP group. Conclusions: The FMS shows different relationships between total and subcategory scores and skeletal muscle mass for office workers with or without LBP. In addition, office workers with LBP may have different movement patterns and greater hip extension range of motion than those without LBP.
Introduction
Office workers who spend most working hours sitting often experience lower back pain (LBP). LBP has a lifetime prevalence rate of 84% and is one of the most common health problems in modern society [1,2]. In recent years, changes in the working environment have increased the demand for prolonged sitting [3], and an increasing number of reports have suggested a relationship between prolonged sitting and LBP [4,5]. LBP is a serious issue among office workers because it not only affects the quality of life but also decreases productivity and increases absenteeism at work [1,6].
In the management of LBP, it is important to evaluate not only the lumbar spine but also the whole body. For example, decreased mobility of the shoulder joint and thoracic spine leads to a compensatory increase in lumbar motion and dysfunction of the lumbar spine [7]. Regarding the lumbar spine and lower limb joints, such as the hip and knee joints, dysfunction of one joint can also affect the other, leading to tissue damage [8][9][10][11]. Thus, the reciprocity of movements is seen not only in adjacent joints but also in the entire body. In particular, office workers are susceptible to decreased mobility and muscle endurance in certain areas because they hold the lower extremities and trunk in flexed positions for long periods [12], and dysfunctions originating from other areas may also affect the lumbar region. Therefore, to treat LBP, an approach that considers not only the lumbar region but also the whole-body connection is required.
From this perspective, the Functional Movement Screen (FMS) is widely used to assess and score the quality of movement (movement patterns) of the whole body. Originally, the FMS was a test battery developed for injury prevention in athletes [13,14], but recently, it has been utilized for a variety of people, including the elderly, children, and those with pain [15][16][17]. The FMS is often used to assess whole-body movement patterns [15,18]. The FMS consists of seven items that are subcategorized into primitive movement patterns and higher-level movement patterns (deep squat, hurdle step, and in-line lunge) [19]. Primitive movement patterns are divided into basic mobility and stability movement patterns (shoulder mobility and active straight leg raise, or ASLR) and transitional movement patterns (trunk stability pushup and rotary stability) that require stability, coordination, and control [19]. These subcategories provide important clues for interpreting body weaknesses and performing corrective exercises. However, no FMS assessment of movement patterns, including the subcategories, has examined their relationship with other factors, such as skeletal muscle mass and hip mobility, which may be associated with LBP [20,21], nor have movement patterns been compared between LBP and non-LBP participants.
Therefore, the purpose of this study was to investigate the relationship between FMS scores, including subcategories, skeletal muscle mass, and hip mobility, in office workers with or without chronic LBP and to determine whether there were differences in the above items between office workers with or without chronic LBP.
Study design
This study used an analytic cross-sectional design, and measurements were taken at the laboratory of Hokuriku University between February 2023 and May 2023. Office workers who were willing to participate in the study were included. The study considered those who spent more than half of their working time in a sitting position, had worked in their current job for at least one year, and were working full-time.
The exclusion criteria were as follows: (1) typical physical disabilities such as paralysis or cognitive dysfunction, (2) implants or other metal objects such as pacemakers inserted into the body, (3) a history of surgery within the past 12 months, and (4) pregnancy. Information on LBP (Numerical Rating Scale, Oswestry Disability Index, and duration) was obtained in a preliminary survey.
Those with pain localized to the lumbar region lasting more than three months were defined as the LBP group and those without LBP as the NLBP group. Those who did not fall into either category (e.g., acute LBP) were excluded from the study, as were those with neurological symptoms in the lower extremities. None of the participants in the NLBP group had a history of LBP or other back disorders. For all measurements, the raters did not know whether the participants belonged to the LBP or the NLBP group. The sample size was calculated using G*Power v.3.1 (Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with an effect size of 0.4, an alpha of 0.05, and a power of 0.8, resulting in 34 participants.
Ethical considerations
This study was conducted in accordance with the principles of the Declaration of Helsinki. The purpose and content of the study, the fact that the obtained data would not be used for any purpose other than the study, and precautions against the leakage of personal information were fully explained to the participants in advance. Consent to participate in the study was obtained from all the participants after obtaining their signatures. This study was approved by the Ethics Committee of Hokuriku University (approval number: 2023-2).
Measurements
The FMS is scored on a 4-point scale from 0 to 3 for each item. A score of 0 was given if there was pain, 1 if the test could not be completed, 2 if the task could be completed but some compensation was required to perform the task, and 3 if the task could be performed correctly without any compensation [13]. The FMS used in this study was modified to apply to the LBP population, giving a score of 0 only if there was an increase in LBP during measurement [15]. For items assessed on both sides during the evaluation, the value of the side with the lower score was recorded, and the FMS score was calculated. The final score of the FMS (FMS-FS) was the sum of all FMS test scores; the higher-level movement pattern item score (FMS-HS) was the sum of the higher-level movement pattern test scores; the basic mobility and stability movement pattern item score (FMS-BS) was the sum of the basic mobility and stability movement pattern test scores; and the transitional movement pattern item score (FMS-TS) was the sum of the transitional movement pattern test scores (Table 1). For the ASLR, the angle measurements described below were conducted simultaneously with FMS scoring. The FMS was measured using the FMS test kit (FMS kits, Functional Movement System, USA) by a physical therapist who was FMS-certified and had at least five years of experience in measuring FMS.
TABLE 1: Functional Movement Screen subcategories and constituent tests
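For illustration, the composite scores defined above (the lower side counted for bilateral items, then summed per subcategory) can be written out as a short script; the item-to-subcategory mapping follows the description in the Introduction and Table 1, while the item names and example scores are purely illustrative.

```python
# Sketch of the FMS composite scores described above. Bilateral items are scored
# with the lower of the two sides before summation. Example scores are invented.
HIGHER_LEVEL = ["deep_squat", "hurdle_step", "in_line_lunge"]
BASIC_MOBILITY = ["shoulder_mobility", "active_straight_leg_raise"]
TRANSITIONAL = ["trunk_stability_pushup", "rotary_stability"]

def item_score(value):
    # value is an int (unilateral item) or a (left, right) tuple (bilateral item)
    return min(value) if isinstance(value, tuple) else value

def fms_scores(raw_scores):
    s = {item: item_score(v) for item, v in raw_scores.items()}
    return {
        "FMS-FS": sum(s.values()),
        "FMS-HS": sum(s[i] for i in HIGHER_LEVEL),
        "FMS-BS": sum(s[i] for i in BASIC_MOBILITY),
        "FMS-TS": sum(s[i] for i in TRANSITIONAL),
    }

example = {"deep_squat": 2, "hurdle_step": (2, 3), "in_line_lunge": (2, 2),
           "shoulder_mobility": (1, 2), "active_straight_leg_raise": (3, 2),
           "trunk_stability_pushup": 2, "rotary_stability": (2, 2)}
print(fms_scores(example))  # {'FMS-FS': 13, 'FMS-HS': 6, 'FMS-BS': 3, 'FMS-TS': 4}
```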
Skeletal muscle mass was used as a measure of muscle mass in this study because it is a whole-body segmental assessment index that is independent of the direction of movement and is not affected by fatigue or psychological factors. Skeletal muscle mass was measured by bioelectrical impedance analysis using a body composition analyzer (Inbody 270, Inbody Japan Inc., Tokyo, Japan). Bioelectrical impedance analysis has been utilized as a non-invasive, simple, and safe alternative to dual-energy X-ray absorptiometry [22]. Measurements were taken in the standing position with bare feet on the foot sensor of the body composition analyzer. After measuring the body weight, the hand sensor was grasped. The skeletal muscle masses of the upper and lower limbs and trunk, and the skeletal muscle index (SMI), were used as indices of skeletal muscle mass. The average of the left and right muscle masses was used as the representative value for the upper and lower limbs, respectively.
Hip mobility was measured by capturing the prone hip extension (PHE) and ASLR angles using a fixed camera. The camera was positioned 2 meters lateral to the participant's hip joint at a height of 30 cm. Both tasks were performed with active motion, and the participants stopped at the end of the task range. Participants were instructed to perform the movements through as wide a range as possible. For the PHE, one co-author had the participants perform the task with their pelvis immobilized to prevent pelvic movements. From the obtained videos, the tilt angle of the thigh to the floor at the end position of the task movement was measured using the image analysis software ImageJ (Wayne Rasband, National Institutes of Health (NIH), Bethesda, USA). The intraclass correlation coefficient (ICC (1,1)) was measured for 10 participants across the LBP and NLBP groups in this study, and it was confirmed that the measurement reliability was sufficient (ASLR: 0.98, PHE: 0.90). The average of the left and right hip angles was used as the representative value for the PHE and ASLR, respectively.
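The thigh-to-floor tilt angle read off the lateral video can be reproduced from two digitized landmarks; a minimal sketch is given below. The landmark choice (hip and knee markers) and the coordinate values are hypothetical, and the y-axis is assumed to point upward; the calculation simply implements the angle-to-horizontal measurement described above.

```python
import math

def thigh_tilt_angle(hip_xy, knee_xy):
    """Angle (degrees) between the hip-to-knee segment and the floor (horizontal),
    computed from image coordinates digitized at the end position of the task."""
    dx = knee_xy[0] - hip_xy[0]
    dy = knee_xy[1] - hip_xy[1]
    return abs(math.degrees(math.atan2(dy, dx)))

# Hypothetical pixel coordinates from one video frame (x to the right, y upward).
print(round(thigh_tilt_angle((412.0, 260.0), (605.0, 318.0)), 1))  # ~16.7 degrees
```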
Statistical analysis
The Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, Version 28.0, IBM Corp., Armonk, NY) was used for statistical analysis. Because the FMS is an ordinal scale, nonparametric tests were used for it. The Shapiro-Wilk test was used to test for normality. Spearman's correlation analysis was used to evaluate the relationships between each item. For the differences between the LBP and NLBP groups, a chi-square test was used to compare the proportions of those who had a left-right difference in the FMS items, those who had a score of 1, and those who had a score reduction in each FMS item that included the word "spine" or "torso" as a criterion, as well as the proportions of males and females in the LBP and NLBP groups. The Mann-Whitney U test was used to compare FMS scores, and unpaired t-tests were used for age, height, weight, skeletal muscle mass, and hip mobility. The significance level was set at p < 0.05.
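The analyses listed above map directly onto standard SciPy routines; the sketch below shows one way they could be run in Python. The data-frame layout and variable names are assumptions (the study itself used SPSS), and the contingency table uses the counts reported later in the Results.

```python
import numpy as np
from scipy import stats

def analyze(df):
    # df is assumed to be a pandas DataFrame with one row per participant and
    # columns such as 'group' ('LBP'/'NLBP'), 'fms_fs', 'smi', 'phe_angle'.
    lbp = df[df["group"] == "LBP"]
    nlbp = df[df["group"] == "NLBP"]

    # Normality check (e.g., for the PHE angle): Shapiro-Wilk test
    print(stats.shapiro(df["phe_angle"]))

    # Relationship between FMS total score and skeletal muscle mass index:
    # Spearman's rank correlation (FMS scores are ordinal)
    rho, p = stats.spearmanr(lbp["fms_fs"], lbp["smi"])
    print(f"LBP group: rho = {rho:.2f}, p = {p:.3f}")

    # Group comparison of ordinal FMS scores: Mann-Whitney U test
    print(stats.mannwhitneyu(lbp["fms_fs"], nlbp["fms_fs"], alternative="two-sided"))

    # Group comparison of continuous measures: unpaired t-test
    print(stats.ttest_ind(lbp["phe_angle"], nlbp["phe_angle"]))

    # Proportion comparison (participants with at least one item scored 1 or less):
    # chi-square test; counts taken from the Results section
    table = np.array([[10, 4],    # LBP: with / without such an item
                      [20, 1]])   # NLBP
    print(stats.chi2_contingency(table))
```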
Results
Thirty-five participants (20 males and 15 females, age of 45.5 ± 9.9 years, height of 165.6 ± 9.0 cm, and weight of 62.4 ± 11.4 kg) were included in the study (Table 2). The measured values of each item are listed in Table 3. None of the participants scored 0 on the FMS scale. There were significant positive correlations between the FMS-FS and FMS-TS and upper/lower limb/trunk muscle mass and SMI in the LBP group (Table 4). FMS-HS was significantly positively correlated only with SMI (Table 4). The NLBP group showed significant positive correlations between FMS-TS and upper/lower limb/trunk muscle mass and SMI, whereas FMS-BS and ASLR angles were significantly negatively correlated with upper/lower limb/trunk muscle mass and SMI (Table 5). The FMS-FS (p=0.02, r=-0.40) and PHE angle (p=0.01, r=0.43) were significantly higher in the LBP group than in the NLBP group (Table 3).
Fourteen participants (100%) in the LBP group and 17 (81.0%) in the NLBP group had a left-right difference in one or more items, with no significant difference between the groups (p=0.08). Ten participants (71.4%) in the LBP group and 20 (95.2%) in the NLBP group had at least one item with a score of 1 or less, which was significantly more common in the NLBP group than in the LBP group (p=0.03).
FMS items that included the word "spine" or "torso" in the measurement criteria were "deep squat," "hurdle step," "in-line lunge," and "trunk stability pushup." Figure 1 shows the percentages of participants who lost points based on these criteria. Only for the "in-line lunge" was a point reduction significantly more common in the NLBP group than in the LBP group (p=0.05).
Discussion
The purpose of this study was to clarify the relationship between FMS scores, including subcategories, skeletal muscle mass, and hip mobility, in office workers with or without chronic LBP, and to verify whether there are differences in each of the above items between office workers with or without chronic LBP.
The results showed that, in the NLBP group, the FMS-BS and ASLR angles were negatively correlated with the skeletal muscle mass of the limbs and trunk and with SMI, which are indices of skeletal muscle mass. Because joints tend to follow trajectories that cause less discomfort, such as tissue resistance and pain [10], it has been noted that movement patterns are influenced by other factors, such as tissue flexibility [23]. In particular, there is a positive correlation between muscle cross-sectional area and muscle size on CT and MRI, and the passive stiffness of the muscle [24,25]. Therefore, it is easy to understand why the FMS-BS, which easily reflects mobility, was negatively correlated with SMI in this study. The results of the present study also suggest that mobility decreases as muscle mass increases. Therefore, when training to increase muscle mass, it may be necessary to combine this with mobility training. In contrast, the LBP group showed no significant correlation between FMS-BS, ASLR angle, and skeletal muscle mass of the extremities and trunk, or SMI. This suggests that having LBP may result in a different relationship between mobility and muscle mass than the original one.
The FMS-TS was positively correlated with the skeletal muscle mass index in both the LBP and NLBP groups. "Trunk stability pushup" and "bird dog," which is similar to "rotary stability," especially require trunk muscle activity [26,27], and these tasks require the control to move the limbs while keeping the trunk stable. Therefore, in the FMS-TS, larger muscle mass and easier access to muscle activity are advantageous for controlling the limbs and trunk.
Furthermore, the FMS-FS showed a significant correlation with the skeletal muscle mass index in the LBP group, but not in the NLBP group. Because the FMS-TS and FMS-BS contrasted in their relation to muscle mass, the total score, FMS-FS, may result from the offsetting results of the FMS-TS and FMS-BS in the NLBP group. This suggests that, in participants without LBP, the FMS evaluation should focus not only on the final score but also on the scores of the subcategories.
Regarding the comparison between the LBP and NLBP groups, only the FMS-FS and PHE angles were significantly higher in the LBP group than in the NLBP group. The higher the FMS-FS score, the better the movement pattern [13,14,19]. Ko et al. [28] compared the FMS scores of participants with chronic LBP and healthy participants and reported that healthy participants had higher scores, which differs from the present results. One difference between the previous and present study is that several participants in the Ko et al. [28] study had items that scored 0 because pain existed while performing the FMS task, which may be related to the difference in the results from the present study. In the current study, no participants experienced pain during movement; therefore, pure movement patterns could be evaluated, which may have resulted in higher FMS-FS scores in the LBP participants than in the NLBP participants. In the FMS, trunk movement is a common evaluation criterion for many items, and smaller trunk movements are more likely to result in higher scores [19]. In the presence of pain, increased muscle activity and reduced motion have been reported because of fear-avoidance of pain associated with motion and protective mechanisms against tissue damage [20,29]. In this study, it is possible that the FMS scores were higher in participants with LBP because they chose movement patterns with less load on the lumbar region to avoid pain. This is consistent with the fact that the LBP group showed less trunk movement in the "in-line lunge." However, the FMS-HS items in particular involve complex movements among the FMS items, and it is unclear whether lumbar movements were large. Therefore, the magnitude of the lumbar movement for each item needs to be investigated in future studies.
For injury prevention, a perspective different from that of the total score (FMS-FS) has been proposed. The FMS-FS scores higher when movements are performed in a "perfect" movement pattern [19], but, along with that, the importance of checking whether the person can move beyond the minimum standard on any single item and of noting the presence of a left-right difference has been pointed out [13,19,30]. In the present study, the NLBP group had more items with a score of one than the LBP group, which is consistent with the FMS-FS results. Therefore, it can be inferred that the LBP group had better movement patterns than the NLBP group.
Furthermore, the LBP group had a significantly larger hip angle than the NLBP group with respect to the PHE. A previous study [20] reported that participants with LBP performing PHE had greater motion in the hip than in the lumbar spine. The results of the PHE in this study are consistent with those of previous studies and with the FMS results, and it is possible that the hip motion in the PHE was greater because of routinely hip-dominant motion over the lumbar spine. This may explain the lack of correlation between the FMS-BS or ASLR angle and skeletal muscle mass only in the LBP group.
These results contradict the concept proposed in the kinesiopathological model, which states that inappropriate movement patterns can cause tissue damage [10]. However, it has been shown that those at high risk for LBP in the future have lower FMS scores [31], and those with LBP at the time of measurement and those at risk for LBP should be interpreted separately. It is possible that the LBP group in this study originally had poor movement patterns, which caused LBP, and that the pain subsequently changed the movement patterns in a positive direction. However, this remains unclear and requires verification in future studies.
Limitations
A limitation of this study is that the FMS does not include factors such as power, endurance, or change of direction [14]. Therefore, different results may be obtained in evaluations of movement patterns with respect to elements missing from the FMS. In this study, the LBP group was not subgrouped by direction of motion, region (e.g., high and left-right differences), or tissue. It is recommended that LBP be subgrouped and characterized rather than considered as a single condition [32]. In this study, subgrouping may have produced different results, and further studies are needed. Furthermore, in this study, the assessment of muscle mass was made in rough body segments, such as the upper and lower limbs, and the relationship between the FMS and the size of each muscle could not be verified in detail. Similarly, mobility was evaluated only for active motion in the sagittal plane of the hip joint. Other directions of motion, passive motion, and the mobility of other joints should be verified in the future.
Conclusions
This study examined the relationship between FMS scores, including subcategories, skeletal muscle mass, and hip mobility in office workers with or without chronic LBP, and whether there were differences in each of the above items between office workers with or without chronic LBP. The FMS should focus not only on the total score but also on the subcategory scores because the total and subcategory scores show different relationships with skeletal muscle mass and hip mobility. In addition, these relationships differed between office workers with and without LBP. Office workers with chronic LBP may have different movement patterns and greater hip extension angles than those without LBP.
FIGURE 1: Percentage of participants who received a point reduction for each criterion on the Functional Movement Screen that included the word "spine" or "torso." NLBP: non-lower back pain; LBP: lower back pain
TABLE 2: General participant characteristics
Values are presented as number of participants (percentage) or mean (standard deviation). LBP: lower back pain; NLBP: non-lower back pain; NRS: numerical rating scale; ODI: Oswestry Disability Index
TABLE 3: Differences in each measure between the lower back pain and non-lower back pain groups
Values are presented as median (interquartile range) or mean (standard deviation). *Significant difference between groups (p < 0.05). LBP: lower back pain; NLBP: non-lower back pain; CI: confidence interval; FMS-FS: final score on the Functional Movement Screen; FMS-HS: higher-level movement pattern item score on the Functional Movement Screen; FMS-BS: basic mobility and stability movement pattern item scores on the Functional Movement Screen; FMS-TS: transitional movement pattern item score on the Functional Movement Screen; PHE: prone hip extension; ASLR: active straight leg raise; SMI: skeletal muscle mass index
TABLE 4: Correlations between each measurement item in the lower back pain group
*Significant correlation (p<0.05). PHE: prone hip extension; SMI: skeletal muscle mass index; FMS-FS: final score on the Functional Movement Screen; FMS-HS: higher-level movement pattern item score on the Functional Movement Screen; FMS-BS: basic mobility and stability movement pattern item scores on the Functional Movement Screen; FMS-TS: transitional movement pattern item score on the Functional Movement Screen; ASLR: active straight leg raise
TABLE 5: Correlation between each measurement item in the non-lower back pain group
*Significant correlation (p < 0.05). PHE: prone hip extension; SMI: skeletal muscle mass index; FMS-FS: final score on the Functional Movement Screen; FMS-HS: higher-level movement pattern item score on the Functional Movement Screen; FMS-BS: basic mobility and stability movement pattern item scores on the Functional Movement Screen; FMS-TS: transitional movement pattern item score on the Functional Movement Screen; ASLR: active straight leg raise
"year": 2024,
"sha1": "f68492fcd46f5fdff369063ecb1a7383e80b6849",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7759/cureus.64721",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51fa0cbf09af5fcad684d59791b2e40a16e5e9c3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
An Improved Lightweight Two-Factor Authentication and Key Agreement Protocol with Dynamic Identity Based on Elliptic Curve Cryptography
With the rapid development of the Internet of Things, the problem of privacy protection has received great attention. Recently, Nikooghadam et al. pointed out that Kumari et al.'s protocol can neither resist off-line guessing attacks nor preserve user anonymity, and they proposed an authentication scheme for the session initiation protocol, claiming that it resists various attacks. Unfortunately, this paper proves that the protocols of both Kumari et al. and Nikooghadam et al. can neither preserve perfect forward secrecy nor resist key-compromise impersonation attacks. In order to remedy these flaws, we design a lightweight authentication protocol using elliptic curve cryptography. Informal security analysis shows that the proposed protocol resists a variety of attacks and provides stronger security. It is further proved by means of Burrows-Abadi-Needham logic (BAN-Logic) that the protocol achieves mutual authentication, and it is shown with the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool that the protocol withstands active and passive attacks under the Dolev-Yao model. Finally, we compare the protocol with related schemes in terms of computational complexity and security; the comparison shows that the proposed protocol is more suitable for practical application scenarios.
Contributions and Organization
In order to fill the aforementioned gaps, we present an improved authentication protocol with full security functionality. The contributions of this paper are as follows: (1) We present a supplementary cryptanalysis of Kumari et al.'s protocol and point out that it is still vulnerable to key-compromise impersonation attack and is unable to provide perfect forward secrecy. Moreover, we also show that Nikooghadam et al.'s protocol is unable to provide perfect forward secrecy and is vulnerable to off-line password guessing attack and key-compromise impersonation attack. (2) We establish a novel lightweight authentication protocol for SIP using ECC.
(3) By heuristic security analysis, we illustrate that the proposed protocol is immune to all known attacks. Moreover, the proposed protocol provides more comprehensive security functions, including perfect forward secrecy, dynamic identity, and anonymity. (4) Via AVISPA software simulation, we show that the improved protocol is SAFE against active and passive attacks, including replay and man-in-the-middle attacks, under the Dolev-Yao model [31]. (5) By a BAN-Logic proof, we show that the user and the server can successfully authenticate each other in the improved protocol. (6) Compared with the relevant solutions, our protocol is more secure and more suitable for application in practical scenarios. The rest of this paper is organized as follows: the attacker model and intractable problems are listed in Section 2. The protocol of Kumari et al. and its cryptanalysis are explained in Section 3. The protocol of Nikooghadam et al. and its cryptanalysis are provided in Section 4. The proposed scheme is presented in Section 5. The heuristic security analysis, the simulation with AVISPA software, and the BAN-Logic security proof are presented in Sections 6, 7 and 8, respectively. Security and performance comparisons are depicted in Section 9. Finally, the conclusion is summarised in Section 10.
Preliminaries
In this section, we introduce the capabilities of the adversary against the authentication protocol. The notations used in this paper are listed in Table 1.
Attacker model
According to [32][33][34][35], throughout this paper we summarize the capabilities of the attacker as follows: (1) According to [33,34], if the attacker steals the smart card of a user or is within the effective range of the smart card being attacked, it may be able to obtain all data stored in the smart card by using power-analysis techniques. (2) All data transmitted on open channels are public, so the attacker has the capacity to eavesdrop on, delete, modify, insert, replay, and block the messages on these public channels.
(3) According to [32,35], the attacker can guess the identity and the password simultaneously in polynomial time. Thus, the attacker can traverse all pairs of identity and password in the dictionary space within polynomial time.
(4) According to [32,35], the attacker can either steal the user's password or extract all data from the user's smart card, but not both. If both are compromised simultaneously, then any two-factor authentication protocol is insecure. (5) When perfect forward secrecy [32,35] and the key-compromise user impersonation attack are discussed, the long-term private key of the server may be leaked to the attacker. Since perfect forward secrecy is the ultimate security property and the key-compromise user impersonation attack is the ultimate attack, an authentication protocol that both provides forward secrecy and resists key-compromise user impersonation is the stronger protocol. When assessing any attack, the key-compromise user impersonation attack in particular, it is assumed that the adversary cannot obtain the verifiers and the private key of the server simultaneously.
Intractable problems over ECC
Generally, let p be a secure prime number and F_p be a finite field. The elliptic curve used in ECC is defined by the equation E(a, b): y^2 = x^3 + ax + b (mod p) over F_p, with 4a^3 + 27b^2 ≠ 0 (mod p), where a, b ∈ F_p. Elliptic curve discrete logarithm problem (ECDLP): let P be a generator of E(a, b) and Q = xP, where x is a random integer; it is computationally infeasible for a PPT (probabilistic polynomial time) adversary to find the random number x satisfying Q = xP. Elliptic curve computational Diffie-Hellman problem (ECCDHP): given x1·P, x2·P ∈ E(a, b), it is computationally infeasible for a PPT adversary to compute (x1·x2)·P.
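To make the underlying operation concrete, the following sketch implements double-and-add scalar multiplication on a toy curve; all parameter values are deliberately tiny, illustrative assumptions (a real deployment would use a standardized curve over a roughly 256-bit prime), and the code is not part of the protocol itself.

```python
# Toy elliptic-curve arithmetic over F_p, for illustration only.
p = 97          # small prime field modulus (illustrative, insecure)
a, b = 2, 3     # curve y^2 = x^3 + a*x + b over F_p
assert (4 * a**3 + 27 * b**2) % p != 0   # non-singular curve

O = None        # point at infinity (group identity)

def point_add(P, Q):
    """Add two affine points on the curve."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                   # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Double-and-add: compute k*P in O(log k) group operations."""
    R = O
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

G = (3, 6)                 # a point on the toy curve (3^3 + 2*3 + 3 = 36 = 6^2 mod 97)
Q = scalar_mult(31, G)     # easy direction: Q = x*G
# Hard direction (ECDLP): given G and Q, recover x. Brute force works on
# this toy curve but is infeasible on a 256-bit curve.
```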
A brief introduction of Kumari et al.'s protocol
This part briefly describes Kumari et al.'s protocol [28]. We omit the password changing phase of their protocol. The registration phase and the login and authentication phase are introduced as follows.
Login and authentication phase
In this part, the user and the server execute the following steps for login and authentication: (1) the user inserts his smart card into the card reader and inputs the correct identity and password.
Vulnerability analysis of Kumari et al.'s protocol
In this subsection, we prove that the protocol of Kumari et al. [28] can neither resist the key-compromise impersonation attack nor provide perfect forward secrecy, in addition to the vulnerabilities pointed out by Nikooghadam et al. [30].
Perfect-forward-secrecy
According to the analysis of Nikooghadam et al. [30], if a legitimate user acts as an attacker and knows the long-term private key of the server, the malicious client obtains the session key between the victim user and the server by performing the following steps.
The attacker extracts the values stored in the victim's smart card, intercepts the login request message sent by the victim, and intercepts the corresponding response message sent from the server to the victim.
Key-compromise-impersonation-attack
If a legitimate user acts as an attacker and compromises the long-term secret key of the server, then the attacker executes the following steps to impersonate the victim user to the server.
Introduction and Cryptanalysis of Nikooghadam et al.'s Protocol
The attacker selects random numbers, computes the corresponding protocol values, replaces the original parameters with the forged ones, and finally sends the forged response message to the user.
Off-line password guessing attack
If the attacker obtains the smart card of some user, then it can extract the stored data and the hash and symmetric encryption/decryption functions from the card, and it intercepts the login request message. Afterwards, the attacker is able to recover the correct password and identity of the user as follows: (1) the attacker selects a candidate identity and a candidate password for the user from the identity space and the password space.
(2) The attacker uses the guessed values to derive a decryption key and attempts to decrypt the intercepted ciphertext. If the decryption fails, the attacker repeats steps 1), 2) and 3) until the decryption succeeds. Otherwise, the attacker calculates the corresponding verification value from the guessed identity and password and checks whether it equals the intercepted verifier. If they are equal, the attacker infers that the guessed identity and password are the correct credentials of the user. Observing the above steps, we find that two guessing factors are used in the login phase, namely a decryption factor and a verification factor.
The first guessing factor is the decryption key of the intercepted ciphertext. On successful decryption, the attacker continues to verify the second guessing factor transmitted through the open channel. Moreover, the time complexity of the guessing attack can be estimated as O(|D_ID| · |D_PW| · (T_h + T_s)), where T_h is the computational cost of one hash computation, T_s is the computational cost of one symmetric encryption or decryption, and |D_ID| and |D_PW| denote the sizes of the identity and password dictionaries, respectively. Usually, |D_ID| and |D_PW| are each at most about 10^6 [32,36,37].
Because of the low entropy of the identity and the password, the attacker can successfully obtain the correct identity and password of the user within polynomial time.
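To illustrate the cost structure of such an attack, the following schematic loop (with hypothetical helper names `sym_decrypt` and `h`, and placeholder fields standing in for the elided protocol values) makes the O(|D_ID| · |D_PW| · (T_h + T_s)) bound explicit; it is a sketch of the attack's shape, not the authors' exact computation.

```python
from itertools import product

def offline_guess(intercepted, id_space, pw_space, sym_decrypt, h):
    """Schematic off-line guessing loop.

    Each candidate (identity, password) pair costs one symmetric
    decryption and a constant number of hashes, so the total cost is
    O(|id_space| * |pw_space| * (T_h + T_s)).
    """
    for id_guess, pw_guess in product(id_space, pw_space):
        key_guess = h(id_guess + pw_guess)            # guessed decryption key
        plain = sym_decrypt(intercepted.ciphertext, key_guess)
        if plain is None:                             # first guessing factor fails
            continue
        verifier_guess = h(plain + id_guess + pw_guess)
        if verifier_guess == intercepted.verifier:    # second factor matches
            return id_guess, pw_guess                 # correct credentials found
    return None
```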
Perfect-forward-secrecy
In the protocol of Nikooghadam et al. [30], if the attacker knows the long-term secret key of the server, then it can obtain the session key between the user and the server.
Key compromise user impersonation attack
If the attacker compromises the long-term secret key of the server, then it is able to execute the following steps to impersonate the user to the server.
(8) On receiving the response message from the server, the attacker computes the verification value and checks whether it equals the value received from the server. If they are not equal, the session is terminated. Otherwise, the attacker calculates the session key, and the server believes that it has successfully established this session with the legitimate user; in reality, the attacker is playing the role of "the legitimate user". To sum up, the adversary successfully impersonates the legitimate user to the server. Therefore, Nikooghadam et al.'s protocol fails to withstand this attack.
The Improved Protocol
According to the above cryptanalysis of Nikooghadam et al.'s protocol: first, the information stored in the smart card together with the symmetric encryption key is used in the login request phase of their protocol, so the attacker can perform off-line guessing; second, their protocol does not employ public key cryptography, which is the key technique for preserving forward secrecy; third, their protocol is incapable of resisting the key-compromise impersonation attack because it lacks a secret value on the server side. The main aim of this part is therefore to remove the weaknesses of Nikooghadam et al.'s protocol by using ECC and some additional techniques, and we present an improved lightweight authentication protocol based on ECC. The improved protocol consists of four parts: the initialization part, the registration part, the login and authentication part, and the password updating part. The registration part is depicted in Fig. 1, and the login and authentication part is depicted in Fig. 2.
(Fig. 1: message exchange between the user and the server in the registration part.)
(Fig. 2: message exchange between the user and the server in the login and authentication part.)
Password updating part
After the user and the server have completed the authentication and the session key has been established, the user can renew his/her password at will. First, the user inputs his identity, old password and new password. The design of the improved protocol follows three principles: 1. we adopt a pattern in which the smart card does not check the correctness of the login; instead, the correctness of the login is verified by the server; 2. according to [53], in order to obtain perfect forward secrecy, the improved protocol uses elliptic curve cryptography (ECC); 3. in order to resist the key-compromise user impersonation attack, the server stores a secret element in its database which cannot be leaked to the adversary.
Preserve user anonymity & un-traceability
We suppose that the adversary has stolen the user's smart card and has obtained all data stored in it, including the symmetric encryption/decryption and hash functions. During the login process of the user, the adversary eavesdrops on all transmitted messages. Since these parameters are either protected by the hash function or computed using elliptic curve cryptography, the adversary is unable to derive the user's identity from them in polynomial time. Moreover, the transmitted messages vary with every communication session. Therefore, the presented protocol provides user anonymity and un-traceability.
Resist privileged insider attack
During the registration phase, the user sends his identity together with a masked password to the server. The password of the user is protected by the hash function and a secret element, so an inside adversary cannot obtain the plaintext password of the user. Accordingly, the proposed scheme is immune to this attack.
Resist replay attack
In our proposed scheme, all messages transmitted over the open channel are different for every communication session. If the adversary replays these messages, the server or the user can detect the problem. Therefore, it is impossible for the adversary to perform a replay attack on the improved protocol.
Resist stolen verifier attack
In our improved protocol, even if the adversary steals the verifier table stored in the server, it still cannot perform any attack. Consequently, the improved protocol is resistant against the stolen-verifier attack.
Resist off-line password guessing attack
Suppose that the adversary obtains all elements stored in the user's smart card. On the one hand, the adversary is not able to guess the correct password of the user, since there is no verification value among these parameters. On the other hand, suppose the adversary not only obtains the parameters in the smart card but also intercepts the login request message and then attempts to guess the password of the user, using the transmitted values as verification values.
The adversary can then choose a candidate identity and password from the dictionary space and compute the corresponding masked value. However, in order to calculate the corresponding verification values, the adversary must know the shared elliptic-curve value that is known only to the user and the server. Accordingly, the adversary cannot confirm a guessed password by computing the corresponding verification values. Therefore, our proposed protocol is resistant to the off-line dictionary attack.
Resist key-compromise user impersonation attack
Suppose that the long-term private element of the server has been leaked to the adversary; if the adversary can then impersonate a legal user to the server, the analyzed protocol is vulnerable to the key-compromise impersonation attack. In the proposed protocol, to impersonate the legal user, the adversary must be able to forge a valid login request message. Since the user's secret random number has not been leaked, the adversary cannot obtain the corresponding secret value and therefore has no way to forge the verification values of the login request. Thus, the proposed protocol is immune to the key-compromise user impersonation attack.
Resist server impersonation attack
If the adversary wants to masquerade as the server, it must be able to compute a valid response message for the user. In the proposed protocol, the adversary first captures the login request message and extracts the information stored in the smart card, and then selects two random numbers. To compute a valid response message, the adversary must know the secret values from which the response is derived. However, the adversary is unable to create these values without the long-term private key of the server. Thus, the adversary cannot forge the response message. From the above discussion, it follows that the improved protocol is protected against the server impersonation attack.
Provide mutual authentication
During the login & authentication part of the improved protocol, is authenticated by by using the equations ( * ) =? ( ) and 1 * =? 1 . Subsequently, by using the equation 2 * =? 2 . According to the previous analysis, our improved protocol is immune to impersonation attack. Therefore, and can carry out authentication smoothly. That is to say, the proposed protocol addresses the requirements of mutual authentication.
Provide perfect forward security
Suppose the adversary can intercept any message over the public channels and can extract the data in the smart card by a side-channel attack. In the proposed protocol, even if the adversary knows the password of the user and the long-term private key of the server, it still cannot calculate the session key, because the key is protected by the ephemeral secret values of the user and the server; recovering them from the transmitted elliptic-curve points would require solving the ECCDHP. Accordingly, the improved protocol preserves perfect forward secrecy.
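The following sketch illustrates, with classic Diffie-Hellman standing in for the elliptic-curve operations and purely illustrative parameters, why a session key derived from per-session ephemeral secrets remains safe even if the server's long-term key later leaks; it is a conceptual stand-in, not the paper's exact key-derivation formula.

```python
import hashlib
import secrets

# Stand-in group: classic Diffie-Hellman mod p plays the role of the
# elliptic-curve operations. Toy-sized, insecure parameters for illustration.
p = 2**127 - 1   # a Mersenne prime, far too small for real use
g = 3

def kdf(*parts):
    """Hash-based key derivation (illustrative)."""
    digest = hashlib.sha256()
    for part in parts:
        digest.update(str(part).encode())
    return digest.hexdigest()

# Long-term server key (could later leak to the adversary).
server_longterm = secrets.randbelow(p - 2) + 1

# Per-session ephemeral secrets, discarded after the session.
u = secrets.randbelow(p - 2) + 1          # user's ephemeral scalar
s = secrets.randbelow(p - 2) + 1          # server's ephemeral scalar
U, S = pow(g, u, p), pow(g, s, p)         # values observable on the channel

shared = pow(S, u, p)                     # equals pow(U, s, p)
session_key = kdf(shared, U, S)

# An adversary who records U and S and later learns server_longterm still
# lacks u and s, so recomputing `shared` amounts to solving the (EC)CDHP.
```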
Security simulation of proposed protocol using AVISPA software
AVISPA [38] is a push-button software tool for the automated validation of Internet security-sensitive protocols and applications, and it can be used for the formal security verification of the improved protocol. Here, we simulate the improved protocol using the AVISPA tool to estimate whether it is safe under the Dolev-Yao model [31]. Since AVISPA accepts the High Level Protocol Specification Language (HLPSL), we first provide the HLPSL code, given in Figs. 3-5, for the user, the server, the session, the goal and the environment, respectively. The analysis results for the proposed protocol are displayed in Figs. 6 and 7. From the simulation results of the OFMC and CL-AtSe back-ends, it is inferred that the proposed protocol is SAFE against active and passive attacks, including replay and man-in-the-middle attacks, under the Dolev-Yao model.
BAN-Logic Proof of Proposed Protocol
Here, we give the security proof of the improved protocol using BAN-Logic [39]. We prove that the user can establish a session initiation key with the server in the proposed protocol. First, some BAN-Logic notations are listed in Table 2. Second, some BAN-Logic postulates are listed in Table 3, and the idealized form, security goals and initial premises of the improved protocol are formally provided.
From Table 5, we observe that the protocols of Chang et al. [26], Kumari et al. [28], Chaudhry et al. [29], Nikooghadam et al. [30], Chou et al. [40] and Wen et al. [41] are unable to provide perfect forward secrecy because they use only hash functions and symmetric key cryptographic operations. Among the compared literature, only our protocol and the protocols of Chaudhry et al. [29] and Mishra et al. [43] can resist the key-compromise impersonation attack. To summarize, all the compared protocols are more or less vulnerable to certain security weaknesses, except ours and Mishra et al.'s. According to Table 4, the proposed protocol executes the login-authentication phase in only 6.7217 time units. This illustrates that the improved protocol has better performance than the compared protocols.
"year": 2019,
"sha1": "389d009947cb6ec1b04d8080efaf6d7463df9409",
"oa_license": null,
"oa_url": "http://itiis.org/digital-library/manuscript/file/22017/TIISVol13No2-27.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "24ac23570628abaedc27ba964934b8328e58299c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Water Resources in the Bratunac Municipality as an Opportunity for Irrigation in Agriculture
The aim of the investigation is to determine water resources available in the Bratunac municipality as an opportunity for irrigation in agriculture, one of the most important economic sectors in that municipality at the present time. The study area covers almost the whole Bratunac municipality and includes 20 of the total of 27 local communities. Research of the hydrogeological characteristics and the required quantities of water for irrigation in the studied local communities showed that in 10 local communities, irrigation can be provided using underground water withdrawn by means of excavated or drilled wells. Adequate water supply in many other local communities could be obtained from nearby surface water streams. In five local communities, the surface water from local rivers is not sufficient to ensure adequate water supply; therefore, an alternative solution consisting in the catchment of water from the Drina river has been proposed. The alternative solution for all local communities situated in the Glogovac river valley could consist in securing the required amounts of water from that water stream.
Introduction
Bosnia and Herzegovina, as well as many other countries in the region, is faced with the consequences of climate change and more frequent occurrence of droughts mainly affecting agricultural production, which in these circumstances becomes impossible without irrigation [1]. Bosnia and Herzegovina has experienced serious incidences of extreme weather events over the past two decades, causing severe economic losses [2]. It is estimated that in 2012 alone, the drought caused losses of over USD 1 billion in agricultural production and yield reductions of up to 70% [3]. A similar increase in water requirements for irrigation of crops exists in the neighboring countries. Thus, e.g. in the neighboring Republic of Croatia the research results allow the conclusion that climatic changes in the last investigation period (1994 to 2003) caused increased water requirements of the crops grown, compared to the period from 1961 to 2003, and thereby also a higher water deficit in soil and an increased demand for larger quantities of suitable irrigation water [4]. The current climate change and the frequent droughts that damage the yield of agricultural crops impose the need to increase water resources for irrigation in agriculture. In the course of the past 10 years, the Bratunac municipality has made significant economic progress, mainly based on agriculture, specifically on the production of raspberries. A favorable geographical position, as well as the pedologic predisposition and the climate conditions with no extreme droughts, make this region one of the most favourable areas for raspberry cultivation in the Republika Srpska. Valuing this potential, the Ministry of Agriculture, Forestry and Water Management of Republika Srpska commissioned this research for the World Bank, which co-finances the irrigation project in Bratunac municipality together with the municipality of Bratunac.
Solving the irrigation problem in the Bratunac municipality will enable more intensive agricultural production, with raspberry growing as the main activity. More intensive agricultural production will bring an increase in production and, accordingly, in the profits of this sector. The economic growth will enable the continuation of the irrigation development project. According to the calculation, the overall requirement for irrigation water in the Bratunac municipality equals 122.59 Ls-1, of which 43.22 Ls-1 is intended to be provided from surface water; the remaining water supply will come from wells, with or without artificial recharge. The type and extent of the research need to be adapted to the conditions on the ground as well as the investor's requirements.
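The split between surface-water and groundwater supply quoted here can be reproduced with a one-line calculation using the figures given in the text:

```python
total_demand = 122.59      # projected irrigation capacity, L/s
from_surface = 43.22       # planned abstraction from surface streams, L/s
from_wells = total_demand - from_surface
print(f"groundwater share: {from_wells:.2f} L/s")   # 79.37 L/s, matching the well estimate below
```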
Taking into consideration that a groundwater source is the most important element of the watering system, special attention has been paid to well dimensioning so as to facilitate maximum groundwater abstraction and provide sufficient water amounts. It is emphasised that there is a possibility for two well pumps (a main and a reserve pump) to be installed in the wells.
Description of investigated area
The municipality of Bratunac covers an area of 293 km 2 in the eastern part of Bosnia and Herzegovina, in the Republika Srpska. This municipality area is situated between 44°01'34" and 44°16'39" N and 19°05'54" and 19°37'25" E ( Figure 1).
Its neighboring municipalities are: Srebrenica to the south-west, Milići to the north-west, and Zvornik to the north. The north-eastern boundary of the studied area is the river of Drina, whose 68 km long left bank forms the natural border of Bratunac municipality with the neighboring Republic of Serbia (the municipalities of Ljubovija and Bajina Bašta). The land along the Drina is a plain belt covering 30% of the total area of the municipality. The plain belt along the left bank of the Drina is at an altitude of about 174 m above sea level, which continues to the interior of the municipality and the hilly belt (70% of the total area of the municipality), with altitudes of 300 to 772 m above sea level.
The territory of the municipality belongs to an area of moderate continental climate with long and warm summers and cold winters, with heavy snowfall. The average annual temperature is 16°C. January is the coldest month, with an average temperature of 1 to 2°C, and July is the hottest month with an average temperature of 24°C. The average rainfall is 1,000 mm and is most abundant in spring and autumn. The city core, as well as the ravine belt along the Drina are surrounded by hills, which causes fogs to persist in the morning, which in turn adversely affects the cultivation of certain crops.
Geomorphological characteristics of the investigated area
The geomorphological evolution has determined the fluvial relief type and this applies to all rivers in the area. The karst type of relief is dominant in the Glogova mountain area which represents a topographic divide between the sub-catchment areas of the Glogovska river and the Kravička river. The river valleys are small and are characterized by faults, resulting in regular streamflow courses with the exception of some small meanders situated in the actual valleys. The rivers have torrent-type flows that have considerably influenced the relief shapes.
Geomorphological characteristics are a consequence of the geologic structure of the terrain and of the geomorphological processes that took part in its formation. Taking into consideration the geologic diversity of the studied area, various geomorphological processes are represented. The area is characterized by the hilly-mountainous type of relief ( Figure 2), with distinctive moderate relief forms found in the parts characterized by the presence of clastic sediments.
The following geomorphological processes can be found in the designated territory: fluvial, delluvial, and colluvial. The fluvial process is a result of the occurrence of numerous watercourses, such as the Križevica, the Grabovička river, the Kravička river, the Glogovska river, the Jagodnja and the Slapašnička river. A delluvial geomorphological process results from sporadic diffuse flows. Delluvial deposits, as a cumulative form of this process, occur in the contour parts of all the streams, including the Drina. The thickness of these sediments is not large, measuring between 2 and 4 metres. The investigation is based on the analysis conducted as part of previous hydrologic research [5][6], using available geologic [7][8] and hydrogeologic documentation [9][10] and a detailed hydrogeologic assessment obtained through field survey [11]. Locations where hydrogeological potential for groundwater abstraction has been identified have been analyzed, and the construction of wells for water exploitation is planned at these sites.
On the basis of the investigation of water resources, a proposal for water abstraction for the purposes of irrigation in agriculture has been made including economic justification of the proposed solutions regarding the irrigation project and an overview of costs of the wells' construction. The study area comprises almost the entire municipality of Bratunac as it covers 20 of the total of 27 local communities.
Hydrogeological features of the studied area
For the purpose of hydrogeological sectioning of the studied area, the following factors have been taken into account: the lithological composition of the charted units, tectonic configuration of the terrain, geomorphological characteristics, types of aquifers and their distribution, as well as their well capacity, the aquifer recharge conditions and the groundwater drainage. Based on the aforementioned factors, the following types of aquifers were identified in the studied area ( Figure 3): a continual water-level aquifer, i.e. an intergranular aquifer; a karst aquifer; a fracture aquifer; a hydrogeologic complex of mainly fracture porosity and partly provisionally "waterless" (insignificant) sectors of the terrain. The intergranular type of aquifer, i.e. the continual water-level aquifer, is formed in the alluvial and terrace sediments characterized by good permeability. Taking into consideration a relatively large area diffusion of these sediments, as well as their considerable thickness and good seepage characteristics, this aquifer is of great importance for groundwater exploitation. The Drina alluvium is expected to have a great water abundance, although the related watercourses have limited alluvial deposits of 3 m maximum, which reduce the obtainable water capacity. On the other hand, the effect the groundwaters would have on the regulation unit in prospect is not large, for the relevant groundwaters are of an unconfined type, and it is not anticipated that the groundwater hydrostatic pressure would have an impact on the future construction of the relevant river regulations. Drainage is achieved artificially, through wells or other tapping objects, as well as through outflow of water into a river. Principally, in the conditions of standard average discharge, all rivers function as the base level of erosion and all water gravitates towards those flows, which represent the main recipient on the relevant locality. In the segments where the Drina is eroding the right bank, the alluvial deposits are very thin, under 1 meter in thickness, and in some locations they are entirely absent. This was taken into account when the locations of the exploitation-production wells were determined, together with the required water capacity for each individual system.
The karst aquifer is formed in the Triassic carbonate sediments. The karst aquifer has a good permeability and is formed in a number of varieties of limestone and dolomitic limestone. This type of aquifer is most commonly found in terrains with carbonate rocks broken and cracked by tectonic movements [12].
The water-bearing properties of this aquifer type depend on the degree of karsting (a result of karstification) of limestone sediments. Since these sediments are located in the upper part of the catchment basin, beyond the planned regulations, they will not be discussed further; however, it is emphasised that the rivers mentioned herein exist on account of this aquifer, as it discharges through the springs that feed the related rivers.
The karst aquifer is located above Kravica, on the slopes of the Glogova mountain. Sources with capacity of 5-10 Ls -1 have been ascertained in the area of this aquifer. Those are contact karst springs that are situated in the upper part of the Kravička river catchment. Some of these karst springs are exploited as water supply for the population.
In addition to the karst aquifer, there is also a fracture type aquifer present in the terrain. This aquifer is being recharged through the infiltration of atmospheric precipitates, while the drainage is achieved through springs. The fracture type of aquifer is formed in igneous rocks, which are characterized by poor permeability and are chiefly limited to the right bank of the lower course of the Slapašnička river and the southern part of the research area. This, primarily, refers to dacite. The fracture type of aquifer is also formed in the Carbon complex of terrigenous rocks (sandstone, quartz breccia, conglomerates). In fracture porosity rocks, the yieldingness reduces with depth because the fractures, too, lessen with depth. Significant quantities of water can be abstracted from unconfined aquifers with a saturated thickness of 5-10 m, but the selection of a suitable abstraction system is the key to success [13].
The hydrogeological complex of mainly fracture porosity and partly provisionally "waterless" (insignificant) sectors of the terrain is represented by the most widely distributed rocks of the Paleozoic age. Hence, these sediments act as hydrogeologic isolators, but it is also possible that some minor groundwater quantities could be present within them and could be used as local water supply. This is also attested by the presence of a number of minor springs (up to 1 Ls -1 ) throughout this area. Additionally, these sediments represent a barrier for waterflows in the karst aquifer, so that springs with significant groundwater capacity appear on the contact points of these sediments and the Triassic limestone. Also belonging to provisionally "waterless" sectors of the terrain, and of poor permeability, are the delluvial sediments, which can store minor groundwater quantities; consequently, water from this aquifer can be used for individual water supplies. A field prospection indicated a large number of wells in these sediments, but the wells are of small depth and capture a surface weathering crust of carbonate rocks. The yieldingness of those water wells ranges from 5 to 10 m 3 day -1 , which is not sufficient to include them in the irrigation system.
Existing water supply objects
During detailed field survey, special attention was paid to existing wells and surface water intakes as the possible water supply for irrigation purposes.
Existing surface water supply objects
The current means of water intake are based on individual solutions used for a limited area of 0.1-1 ha, and rarely for larger areas. The intake consists in a primitive water gate made of sacks or boulders. A pump bucket is lowered into the accumulation formed by those water gates and water is pumped into cisterns placed in the highest parts of the orchards. From these water reservoirs, water flows through pipes to land parcels where the plants (raspberry) grow.
Another method to supply water consists in the creation of small water-accumulations in the lateral streams, i.e. tributaries of the rivers. In that manner, the accumulations of a few cubic meters are formed and water is transported using gravity to plastic or metal tanks situated on the actual plantations. This method is viable only for small land parcels because their capacity is very small, and it can provide up to 10-15 m 3 day -1 .
A frequently used method for obtaining water is to construct wells at the actual river banks, by extending the well below the river bed, thus allowing water to directly discharge into the well. A water capacity of 100-150 m 3 day -1 can be obtained using this method. A disadvantage of this intake method is that rivers in Bratunac are of torrential character as the catchment area mainly consists of poor permeability rocks; consequently, surface outflow from the catchment is prevailing. The discharge oscillation is large, resulting in considerable lateral erosion of the river banks, which endangers these systems.
Another disadvantage of this scheme consists in significant water impurities and water shortages, as the current demand requires the construction of numerous systems, which exceed the stream's water capacity.
Existing excavated wells
The existing excavated wells are mostly 3 to 9 m deep, rarely more. Taking into consideration the geological configuration and the hydrogeologic function, such well depths would not provide significant water quantities.
The excavated wells in carbonate rocks are an optimal solution for water intake because the crust is an environment unable to accumulate larger water quantities; therefore, an aquifer formed in the weathering crust is very thin and an excavated well enables the intake of minor water quantities of around 5-10 m 3 day -1 . Some excavated wells have already been specifically constructed for the purpose of irrigation.
Except for the public water source in Bjelovac, all other tapping objects were established without any previous hydrogeological research, so they mainly consist of wells that are located in suboptimal hydrogeological predisposition zones and some supply little to no water.
The optimum groundwater intake in the carbonate complex is provided by excavated wells, provided that their location is determined so as to be set on the points of contact of two components where the groundwater circulation is the largest. Wells in these deposits can be excavated even in the parts of terrain which morphologically incline the waters to drain into one zone (amphitheatral forms).
Since the excavated wells are constructed as "imperfect" wells, their bottoms (hole backs) should be suspended above the aquifer zone bottoms; a bottom is an active part of the filter.
Well locations of these characteristics cannot provide sufficient water quantities for irrigation in the designated zones because the greatest demand for watering exist in the hilly parts of the research area, which are made of carbonate rocks. For that reason, the research is not directed towards the investigation of the hydrogeologic potentials of the carbonate complex as the probability that the area could be supplied with sufficient groundwater quantities is low and this would jeopardize the whole irrigation project.
Four such wells are located in sections without water and some are very deep. On the other hand, all the wells are equipped with pumps exceeding the capacity of the wells, which means they pump out all the accumulated water in 5-10 minutes, where the recharge lasts up to 6 hours.
Among all the investigated wells, the best one was situated in Podčauš; it has been excavated on private land but serves as a public water source for the purpose of irrigation. This well is situated on the bank of the river Križevica, about 10 m from the regulated river flow (on the right bank).
The well's yield was 0.4 Ls -1 , which is around 34.5 m 3 day -1 . Unfortunately, even this capacity is not sufficient for the irrigation in Podčauš.
Drilled wells
Field survey has also revealed the presence of drilled wells. The purpose of their construction was to reach potential deeper aquifers and thus provide the required amounts of water.
The hydrogeological investigation then focused on the micro-locations where drilled wells were evident. The wells are situated in the zones of micro-depressions or in fault zones (in the rock cracks), where groundwater circulation is the largest, but their yieldingness is also very small. A well of 60 m in depth has a capacity of around 0.2 Ls-1. This amount of water is, indeed, sufficient for a household, but it is insignificant in terms of the quantities projected for irrigation purposes. Moreover, a 32 m well in the local community of Ježeštica is already equipped with a pump and hydrophore, as well as solar panels, but its yieldingness is only 0.15 Ls-1. This well is also equipped with a much more powerful pump than the well capacity can support, which means that if exploitation using this pump should continue, it might lead to a decline in the well's capacity. Furthermore, another test drilling was performed in Ježeštica, but since the results were negative the borehole was not turned into a well.
A well of 125 m in depth was drilled in Triassic limestone in the local community of Kravice. Water was found 60 m below the surface and testing determined the well's capacity at 0.3 Ls-1. All springs of a constant type within the zones of inhabited places are captured for the purpose of supplying drinking water. Since drinking water is a priority in any water supply process, those springs are not taken into consideration as a potential water source for irrigation.
Proposals for the water supply in accordance with the requirements of the projected irrigation system in the Bratunac municipality
Securing sufficient water quantities for irrigation is the biggest problem of every watering system, and it determines the requirements for the technical solution of the irrigation system. The optimal solution is to utilize groundwater, but the available quantity is often insufficient. The basic goal of the study is to try to solve the water deficit problem through the use of groundwater. Groundwater is generally of better quality than surface water, which is an advantage. Through the analysis of the indicated hydrogeological characteristics, and by calculating the required irrigation water during field research at the potential irrigation areas, it was determined that in 10 local communities the water supply problem for irrigation can be solved by groundwater intake from wells. In 5 local communities, the surface water from the local water-streams is not sufficient to provide the water quantity required for irrigation, and the survey offered an alternative solution by abstracting water from the Drina river. For all local communities located in the Glogovska river valley, the alternative solution for supplying the required amount of irrigation water is abstraction from the Glogovska water stream.
Proposed water supply from surface water streams
All waters in the research area belong to the Drina river catchment area, with the exception of the zone of the local community of Glogova-Magašići, where the catchment area is divided into two sub-catchments, one belonging to the Kravička river and the other belonging to the Glogovska river. The area on the actual divide represents a waterless terrain with no options to provide the required water capacities in the local community's territory. Bearing in mind that these are insufficiently studied river basins that were not the subject of long-term hydrological observations for the purpose of defining water intakes, we relied on the "Preliminary analysis of the river network in the Bratunac municipality for the purpose of defining the possibility of constructing the flow of hydroelectric power plants" [6]. This analysis defines the river network, sub-basins and minimal river flows of the Kravička river, the Slapašnica, the Jagodnja-Žljebac, the Mlečanska river-Fakovići, the Loznička river, the Grabovička river and the Glogovska river (Table 1). Table 1. Water flow measuring results using the hydrometric wing, in a multi-year minimum [6].
No. | Name of the watercourse | Rate of flow (m3 s-1)
1 | Kravička river | 0.075
2 | Slapašnica | 0.038
3 | Jagodnja-Žlijebac | 0.050
4 | Mlečanska river-Fakovići | 0.068
5 | Loznička river | 0.031
6 | Grabovička river | 0.095
7 | Glogovska river | 0.038
The potential surface water abstraction sites, as well as the assessment of the available water quantities required to sustain the biological minimum in the rivers, have been defined on the basis of the data shown in Table 1.
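As a rough plausibility check, the sketch below converts the multi-year minimum flows from Table 1 into an upper bound on abstraction after reserving an assumed share for the biological minimum; the 50% share is purely an illustrative assumption, not a value given in the paper.

```python
# Multi-year minimum flows from Table 1, in m^3/s.
min_flow = {
    "Kravička river": 0.075,
    "Slapašnica": 0.038,
    "Jagodnja-Žlijebac": 0.050,
    "Mlečanska river-Fakovići": 0.068,
    "Loznička river": 0.031,
    "Grabovička river": 0.095,
    "Glogovska river": 0.038,
}

BIOLOGICAL_MIN_SHARE = 0.5   # assumed fraction reserved for the stream (illustrative)

for name, q in min_flow.items():
    available_ls = q * (1 - BIOLOGICAL_MIN_SHARE) * 1000   # convert m^3/s to L/s
    print(f"{name}: up to {available_ls:.1f} L/s available for abstraction")
```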
Taking into consideration that the local water-streams have a distinctly torrential character, water abstraction from these surface water-streams is proposed using a so-called "Tyrolean intake", which enables water intake from the bottom of the river bed during the hydrological minimum, when water demand is the highest. Table 2 shows a list of the local communities that will be supplied with water from surface water-streams. It is emphasized that two local communities, Glogova and Glogova-Magašići, are connected to one tapping object which abstracts water from the Glogovska river.
Proposed water supply from wells
As has already been stated, the aim of the hydrogeological field investigation was to provide maximum quantities of water through wells. Unfortunately, it was verified that not all of the local communities have valid hydrogeological potential for opening a groundwater source. This is partly a result of the fact that, so far, the supply of drinking water has not been adequately ensured in those areas; it follows that irrigation is only a secondary priority. Wells are planned to be built in the alluvial plain of the Drina river. For the local communities requiring greater water quantities, the well locations are defined so as to be able to ensure artificial recharge and thus the required amounts of water. Table 3 presents an estimation of the water supply that can be obtained from the projected wells, based on the research. According to the projected solutions, the total exploitation of groundwater in the Bratunac municipal area corresponds to an irrigation capacity of 79.37 Ls-1. The well situated in the local community of Glogova-Magašići can provide only 0.2 Ls-1, which is very low compared to the capacity required. For this reason, the intake of water into the irrigation system from the surface water of the Glogovska river was proposed. The proposed solution will not be entirely sufficient, but it provides a reasonable alternative at the moment.
Explanation of the planned solutions for irrigation water supply in the Bratunac municipality
Extreme climate events in Bosnia and Herzegovina have become increasingly frequent. Out of the last twelve years, six were very dry to extremely dry: 2003, 2007, 2008, 2011, 2012 and 2013 [14].
In that context, the actual water capacity for the projected irrigation system in the Bratunac municipality is estimated at around 122.59 Ls-1. Based on the study of the terrain, a part of the system will be supplied with water from the surface streams and the remaining part from the wells.
The design was not preceded by a prospection drilling so that all the wells with small hydrogeological potential or with a more significant water demand were envisaged to feature a hydraulic connection with the river Drina through drainage channels, or with the Kravička River in the case of Konjević Polje -Pobuđe area.
The project encompasses the regeneration of a well situated at the old water source in "Lamele" locality and putting that well into operation as part of four irrigation systems.
Concerning the locations of the wells, attention was paid to placing them on municipal land, at locations which satisfy the requirements on the hydrogeological potential.
Care was also taken to position the wells as close as possible to the central point of the irrigation zone system, in order to have pipelines as short as possible; additionally, a particular attention was paid to ensure their proximity to electric power sources to supply them with electricity.
The wells are designed in a way that allows maximum capture of groundwater. All wells are projected to facilitate two well pumps (one operational, one reserve).
On average, the Drina river high water mark lies 10 m above the well openings, so it is not feasible to raise the well openings above those elevation marks. Table 4 shows the overview of the costs of the wells' construction and restoration of the well in "Lamele" locality. The costs of the channels for artificial wells recharge are included.
Economic justification of the proposed solutions for the irrigation project
Considering the exceptional quality of raspberries from the Bratunac area, the wholesale purchase price of raspberries is guaranteed at or above 3.5 BAM kg-1. This price makes raspberries a very profitable commercial crop. An area of around 0.1 ha planted with raspberries gives a usual yield of 1,500 kg of raspberries, which, given the price of 3.5 BAM kg-1, gives a revenue of 52,500 BAM ha-1. This price must cover the costs of hoeing, nutrition, pruning and picking, which are estimated at 1.5 BAM kg-1 or 22,500 BAM ha-1. This calculation indicates a profit from raspberry production in the Bratunac municipality equal to 30,000 BAM ha-1. The initial investment into the development of the raspberry plantations is not included in this calculation. When raspberry groves are irrigated according to professional instructions and the plants are provided with the moisture they lack in dry periods, the yield increases up to 25,000 kg ha-1. According to the above calculation, the revenue per hectare used to grow raspberries then equals 87,500 BAM ha-1; with expenses deducted, this corresponds to 65,000 BAM ha-1.
In view of the facts stated above, the returns from raspberry production utilising the described irrigation system roughly double. Investments into drip irrigation systems on the land parcels containing the raspberry plantations were not included in the calculation.
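The economic argument can be reproduced with a short calculation using the quoted figures; note that the source keeps the cost figure of 22,500 BAM ha-1 for both scenarios.

```python
price = 3.5              # BAM per kg of raspberries
costs_per_ha = 22_500    # BAM/ha (hoeing, nutrition, pruning, picking, as quoted)

yield_no_irrigation = 15_000   # kg/ha (1,500 kg per 0.1 ha)
yield_irrigated = 25_000       # kg/ha with irrigation

for y in (yield_no_irrigation, yield_irrigated):
    revenue = y * price
    print(f"yield {y} kg/ha: revenue {revenue:,.0f} BAM/ha, "
          f"profit {revenue - costs_per_ha:,.0f} BAM/ha")
# -> 52,500 / 30,000 BAM/ha without irrigation
# -> 87,500 / 65,000 BAM/ha with irrigation, matching the figures in the text
```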
The income from raspberry production constitutes a significant economic potential for the Bratunac municipality and it should to be supported through the implementation of the project.
Conclusions
Agricultural production and primarily the production of raspberries forms the basis for the economic development of the Bratunac municipality.
The study of water resources in the Bratunac municipality and their availibility for irrigation in raspberry production is presented in this paper.
The results of the research, which included 20 of the total of 27 local communities in Bratunac, showed that in 10 local communities irrigation can be provided using underground water abstracted by means of excavated or drilled wells. In many other local communities, water supply could be ensured by using water from the surface water streams flowing through these local communities. In five of the local communities, surface water from local rivers is not sufficient to provide adequate water supply, so an alternative solution consisting in catching water from the Drina river has been proposed. The alternative solution for all local communities situated in the Glogovac river valley consists in supplying the required water quantities from that water stream.
"year": 2019,
"sha1": "7f9ec3de9c29256916e5e681f0ac9576fbb1da06",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/222/1/012024",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b8b3bb6c382afd1d1d235dc0d17d2d846631b1f8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
Model Checking of Boolean Process Models
In the field of Business Process Management formal models for the control flow of business processes have been designed since more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens to model explicitly states with a subsequent skipping of activations and arbitrary logical rules of type AND, XOR, OR etc. to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic, no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean system into a finite set of base processes of bounded length. Their behaviour translates to formulas from propositional logic. Our verification task consists in checking the satisfiability of these formulas. In addition we have implemented our model checking algorithm as a java program. The time needed to verify a given Boolean system depends critically on the number of initial tokens. Because the algorithm has to solve certain SAT-problems, polynomial complexity cannot be expected. The paper closes with the model checking of some Boolean process models which have been designed as Event-driven Process Chains.
Introduction
In the field of Business Process Management during the last two decades several languages have emerged which are recommended for the modelling and execution of business processes. Examples are the languages EPC, BPEL, BPMN, YAWL or several components of UML [KNS1992, AND2003, OMG2009, AH2005, BRJ2005]. While some of these languages like EPCs (Event-driven Process Chains) or UML (Unified Modeling Language) are often used in commercial projects, others stay mainly in the academic domain.
With the term Boolean process model we denote a model of the control flow of a process, which employs rules of propositional logic to describe the branching of the control flow. A simple example is an alternative specified by an XOR-rule or an OR-rule. All languages mentioned above have constructs to model the activities of a process. The languages support the necessary process primitives sequence, iteration, alternative and parallelism. Some languages can even more, they are able to model distributed process states. These languages are used therefore to design Boolean process models.
The languages enjoy different degrees of formalization. In general their syntax is well-defined but often the semantics is ambiguous or lacks completeness. Therefore several authors have undertaken the effort to translate these languages to a reference language with a well-defined semantics. In most cases Petri nets or transitions systems are used as reference language.
Petri nets have been invented at around 1960. They had a formal semantics right from the beginning. The language of Petri nets does not only support the design of a static process model. Due to their token concept Petri nets are capable to simulate also the temporal development of a process, its runs. The language has enough expressive power to serve as a reference language for EPCs, BPEL, BPMN and the process languages of UML. The language YAWL (Yet Another Workflow Language), based on Petri nets too, extends ordinary Petri nets by constructs to deal with process patterns involving multiple instances, advanced synchronisation patterns, and cancellation patterns, see sect. 4.3 in [AH2005].
Safeness of a Boolean system follows easily from the safeness of its skeleton. The skeleton is obtained by forgetting all colours of the Boolean system. A strongly connected skeleton is a T-system. The verification of T-systems is a well-established task. Much more difficult is the question of liveness of a Boolean system. High-performance algorithms to check liveness have to circumvent the state explosion problem: the number of reachable states increases exponentially with the size of the system. Strongly connected Boolean systems arise from T-systems by adding Boolean expressions as guard formulas to specify the different firing modes of the transitions. To check liveness of a safe Boolean system we proceed as follows: First we translate the behaviour of the Boolean system to formulas from propositional logic. Then we check the satisfiability of these formulas. What is closer related than these two procedures?
Our approach is an example of model checking. This method follows the principle to formalize "system enjoys property" as "system's semantics is model of formula" [Esp1994]. In general the formulas in question have to be taken from modal logic. However, to analyze liveness of Boolean systems it is sufficient to employ propositional logic only, which is an elementary theory. No use of any modal operator is necessary.
Model checking of a safe, strongly connected Boolean system starts with applying prefix theory to the skeleton. One obtains a finite complete prefix of its unfolding. By adding colours the prefix extends to a Boolean net. One has to consider a finite set of base markings on it and to check deadlock freeness and liveness for each of the resulting base processes.
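To illustrate the kind of propositional check the verification reduces to, the following minimal DPLL-style satisfiability test on CNF clause sets is a conceptual stand-in; it is not the java implementation described below, and the clause encoding of a Boolean system's base processes is assumed to be supplied by the caller.

```python
# Minimal DPLL-style satisfiability check.  A formula is a list of
# clauses; a clause is a set of integer literals (negative integers
# denote negated variables).

def dpll(clauses):
    if not clauses:                          # empty formula: satisfiable
        return True
    if any(len(c) == 0 for c in clauses):    # empty clause: unsatisfiable
        return False
    for clause in clauses:                   # unit propagation
        if len(clause) == 1:
            lit = next(iter(clause))
            return dpll(assign(clauses, lit))
    lit = next(iter(clauses[0]))             # branch on an arbitrary literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))

def assign(clauses, lit):
    """Simplify the formula under the assumption that `lit` is true."""
    result = []
    for clause in clauses:
        if lit in clause:
            continue                         # clause satisfied, drop it
        result.append(clause - {-lit})       # remove the falsified literal
    return result

# Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))      # True (e.g. x2 = x3 = True)
```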
We have implemented our model checking algorithm as a java program and tested its performance on a standard notebook with 2.53 GHz. Our implementation of the model checking algorithm shows a performance of some seconds per model with about 25 Boolean transitions and 30 places. Of course this result is not yet comparable to the time range of milliseconds reported in [FFJ2009]. At this stage the performance bottleneck is our simple SAT-solver written on the basis of the resolvent algorithm. Of course the SAT-problem is NP-complete; nevertheless, the first step to enhance the performance of our model checking implementation would be to link one of the SAT-solvers from the SAT research community.
The EPC of the process "Loan request" has been taken from Fig. 1 in [MA2008] and slightly adapted. In a similar form it has been considered before in Abb. 4.31 from [Rum1999]. The process is described in [MA2008] as follows: "The start event loan is requested signals the start of the process and the precondition to execute the record loan request function. After the post-condition request is recorded, the process continues with the function conduct risk assessment after the XOR-join connector. The subsequent XOR-split connector indicates a decision. In case of a negative risk assessment, the function check client assessment is performed. The following second XOR-split marks another decision: in case of a negative client assessment the process ends with a rejection of the loan request; in case of a positive client assessment, the conduct risk assessment function is executed a second time under consideration of the positive client assessment. If the risk assessment is not negative, there is another decision point to distinguish new clients and existing clients. In case of an existing client, the set up loan contract function is conducted. After that, the AND-split indicates that two activities have to be executed: first, the sign loan contract function; and second, the offer further products subsequent process [...]. If the client is new, the analyze requirements function has to be performed in addition to setting up the loan contract. The OR-join waits for both functions to be completed if necessary. If the analyze requirements function will not be executed in the process, it continues with the subprocess immediately [...]." While the process starts with a unique event "Loan is requested", it ends with one or more of the three events "loan request is rejected" (E1), "loan contract is completed" (E2) and "client got further offer" (E3). For example, the events E1 and E2 cannot both happen. The modeller intended either E1 or the combination of E2 and E3 as possible final events. The process comprises a loop which is executed whenever the client is assessed positively but his requested loan is considered too risky. Note the subtle logic of the connectors after the function "conduct risk assessment": either the event "negative risk assessment" happens or the event "positive risk assessment". In the latter case, the event "requester is new client" may occur in addition.
The rest of the paper is structured as follows. Section 2 recalls some fundamental concepts from the theory of ordinary Petri nets, in particular their prefix theory. Section 3 introduces the class of Boolean systems, a subclass of coloured Petri nets. We will use Boolean systems as a reference language for Boolean process models in general and EPCs in particular. Section 4 introduces the colouring of prefixes and the base processes of a safe Boolean system. We present a model checking algorithm as the main result of our paper. We apply the results in Section 5 to the verification of EPCs. Section 6 compares our method with the methods proposed above for the verification of EPCs. The paper ends with an outlook on future research.
We assume that the reader is familiar with the theory of ordinary Petri nets.
Ordinary Petri nets and their processes
For the convenience of the reader and to fix the notation we recall some fundamental concepts from the theory of ordinary Petri nets, see also [DE1995].
A finite ordinary Petri net is a pair N = (P ∪ T, F) comprising a finite set P of places, a disjoint finite set T of transitions and a set F ⊆ (P × T) ∪ (T × P) of arcs. A path is a finite sequence of nodes in which successive nodes are joined by arcs from F; the net is strongly connected iff for any two nodes x1 and x2 a path from x1 to x2 and a path from x2 to x1 exists.
For a net N the firing rule defines the firing of a transition: a transition t ∈ T is enabled at a marking µ of N iff each place from its pre-set pre(t) is marked at µ with at least one token.
Being enabled, t may occur or fire. Firing t yields a new marking µ', which results from µ by consuming one token from each pre-place of t and by producing one token on each post-place of t; this is denoted by µ →t µ'. A transition t is live iff every reachable marking has a reachable successor marking which enables t. A Petri net is bounded iff there exists a natural number which bounds from above the token content of every place at every reachable marking. If the bound can be chosen equal to 1, then the Petri net is named safe.
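As an illustration only (it is not part of the formal development above), the enabling condition and the firing rule can be sketched in Java, the language of our implementation; the class and method names below are our own and do not refer to that implementation.

```java
import java.util.*;

/** Minimal sketch of the enabling condition and firing rule of an ordinary Petri net.
 *  A marking maps each place (by name) to its current token count; pre and post
 *  give the pre-set and post-set of each transition (all arc weights are 1). */
final class OrdinaryPetriNet {
    final Map<String, Set<String>> pre = new HashMap<>();   // transition -> pre-places
    final Map<String, Set<String>> post = new HashMap<>();  // transition -> post-places

    /** t is enabled at the marking iff every pre-place of t carries at least one token. */
    boolean enabled(String t, Map<String, Integer> marking) {
        for (String p : pre.getOrDefault(t, Set.of()))
            if (marking.getOrDefault(p, 0) < 1) return false;
        return true;
    }

    /** Firing consumes one token from each pre-place and produces one on each post-place. */
    Map<String, Integer> fire(String t, Map<String, Integer> marking) {
        if (!enabled(t, marking)) throw new IllegalStateException(t + " is not enabled");
        Map<String, Integer> next = new HashMap<>(marking);
        for (String p : pre.getOrDefault(t, Set.of())) next.merge(p, -1, Integer::sum);
        for (String p : post.getOrDefault(t, Set.of())) next.merge(p, 1, Integer::sum);
        return next;
    }
}
```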
A simple class of ordinary Petri nets is formed by marked synchronization graphs, or T-systems. They are important for the present investigation because T-systems will be the skeletons of the strongly connected Boolean systems introduced in Chapter 3.
A useful means to control all reachable states of a Petri net is the concept of its unfolding and the corresponding prefix theory. For the convenience of the reader we recall now some relevant definitions and properties; see also [Esp1994, EH2008].
Let N = (P, T, F) be a net and let x1 and x2 be nodes of N. The nodes x1 and x2 are in conflict, denoted x1 # x2, iff there exist two distinct transitions t1 and t2 with a common pre-place such that (t1, x1) and (t2, x2) belong to the reflexive and transitive closure of F. An occurrence net is an acyclic net ON = (B, E, K) in which no element is in conflict with itself, each condition has at most one element in its pre-set, and for each element y ∈ B ∪ E the set of elements x ∈ B ∪ E such that (x, y) belongs to the transitive closure of K is finite. Elements of E are called events (German: Ereignis), elements of B conditions (German: Bedingung) and K is named the causal dependency relation (German: Kausalitätsbeziehung). If in addition each condition also has at most one element in its post-set, then the occurrence net is called a causal net.
Because ON is acyclic, the transitive closure of K is a partial order on B ∪ E, which we denote by <; its reflexive closure is denoted by ≤. Two elements of B ∪ E are concurrent iff they are neither ordered by ≤ nor in conflict, and a set B' ⊆ B of pairwise concurrent conditions is called a co-set. A maximal co-set B' with respect to set inclusion is called a cut of ON.
Causal and occurrence nets are the technical means to abstract from the concept of an occurrence sequence, with its well-determined order of firing the transitions, to the concept of a process, which no longer distinguishes between occurrence sequences differing only by the interleaving of their transitions. A further step is the introduction of branching processes, which represent in compact form a set of alternative processes. The final step is to prove the existence of a unique maximal branching process, which is named the unfolding of the original Petri net.
Definition (Processes and branching processes)
Consider a Petri net (N, µ0). i) A branching process of (N, µ0) is a pair pr = (ON, p) consisting of an occurrence net ON and a net morphism p from ON to N which maps the set of minimal conditions of ON onto the initial marking µ0. A transition t ∈ T occurs in the process pr iff some event of ON is mapped onto t. A process of the Petri net is a branching process for which ON is a causal net.
ii) On the set of all branching processes of (N, µ0) the prefix relation defines a partial order. Each safe Petri net has an unfolding, the maximal branching process with respect to this order, which is uniquely determined up to isomorphism [Eng1991]. In general the unfolding is an infinite net. But the unfolding of a safe Petri net always has finite complete prefixes [McM1995]. They serve as a substitute for the unfolding, because they represent each reachable marking and the firing of each transition which can occur in the original Petri net.
Definition (Complete prefix)
Consider the unfolding pr: Unf → N of a safe Petri net (N, µ0). A prefix of the unfolding is complete iff
• for every reachable marking µ of (N, µ0) the prefix contains a cut which is mapped onto µ, and
• for every transition t enabled at such a marking µ the prefix contains an event which is mapped onto t and whose pre-conditions belong to the corresponding cut.
It is the first condition in Definition 2.3 which will be relevant for the model checking algorithm in Chapter 4. The first condition assures that each reachable marking of the Petri net is already reachable by a subprocess of a complete prefix.
Boolean Systems
In the following we denote by BOOLE the set of all formulas from propositional logic over a fixed alphabet of variables. In particular, these formulas contain the logical connectors AND (∧), XOR (∨̇), OR (∨) and NOT (¬). We denote the two-element set of truth values by B = {true, false}. We will often use high or the digit 1 as a synonym for true, and low or the digit 0 as a synonym for false.
A Boolean net arises from an ordinary net with unbranched places by adding
• to each transition of the ordinary net a Boolean formula as guard formula, which specifies the different firing modes of the transition,
• and to each place of the ordinary net a second colour of low tokens.
Boolean systems are a simple class of coloured Petri nets. For the purpose of the present paper we do not need to enter into the general theory.
Definition (Structure of a Boolean System and skeleton)
A Boolean net BN comprises:
• an ordinary net N = (P, T, F) with unbranched places,
• a place annotation, which annotates each place p ∈ P with the colour set BOOLE,
• an arc annotation, which maps each arc a ∈ F to a Boolean variable,
• and a transition annotation, which maps each transition t ∈ T to a guard formula over the variables that annotate the incoming and outgoing arcs of t.
A transition together with its guard formula is named a Boolean transition; the guard formula determines the logical type of the transition.
A marking µ of a Boolean net assigns to each place a pair (n1, n2) with n1 ∈ ℕ, the number of high tokens, and n2 ∈ ℕ, the number of low tokens. The set of places with non-zero token content is called the support of the marking.
Forgetting all colours induces a canonical morphism of Petri nets skel: BS → skel BS onto the skeleton of the Boolean system.
In case a place has both an outgoing and an ingoing arc, we will always annotate both arcs with the same variable. In case a place has exactly one ingoing and exactly one outgoing arc, the arc annotation will be positioned in figures inside the place. A transition t ∈ BN with a unique pre-place and a unique post-place is named a unary transition. Besides its low binding, a unary transition has a unique high binding. Transitions with either two pre-places and a unique post-place or with a unique pre-place and two post-places are named closing or opening binary transitions, respectively. Without loss of generality we will often restrict ourselves to Boolean systems with only binary and unary transitions. For the purpose of verification we can even skip the unary transitions.
Definition 3.1 requires that the initial marking of a Boolean system BS comprises at least one high token. Otherwise no action would take place in the process represented by BS .
Note.
Readers interested in the formal definition of a Petri net morphism are referred to [Weh2006].
Example (Boolean system)
i) Figure 3 shows the scheme of a binary Boolean transition and explains its guard formula and the resulting binding elements.
Figure 3: Binary Boolean transitions with pre- and post-places and arc annotations
The column "Bindings" in Table 1 looks ahead to Definition 3.3.
Table 1: Guard formulas and bindings of binary Boolean transitions (columns: Logical type, Guard formula, Bindings)
Each guard formula is valid for both the opening and the closing Boolean transition of the given logical type.
ii) Figure 4 shows a Boolean system. The initial marking µ marks the place A1 with one high token and the place A2 with one low token. The Boolean system contains an XOR-loop, a loop which is entered by an opening XOR-transition and left by a closing XOR-transition. Boolean systems also allow loops with different logical transitions like OR or AND. Note that each elementary circuit in the Boolean system from Figure 4 is marked with a single token. Different from a transition in an ordinary Petri net, a Boolean transition may have different firing modes. Each firing mode is named a binding element.
Definition (Binding elements of a Boolean net)
Consider a Boolean net BN. A binding b of a Boolean transition t assigns to each variable annotating an arc of t a truth value such that the guard formula of t evaluates to true; the pair (t, b) is called a binding element of BN.
Definition (Firing rule of a Boolean system)
An enabled binding element (t, b) may occur. Its occurrence or firing yields a new marking µ1: it results from µ by consuming a high token from each pre-place whose arc variable satisfies xi = 1 and a low token from each pre-place with xi = 0, and by producing at each post-place a high token if its arc variable satisfies xi = 1 and a low token otherwise. This is denoted by µ →(t, b) µ1. Each Boolean transition has a well-defined low binding: it consumes a low token from each pre-place of t and creates a low token at each post-place of t. Firing the low binding is interpreted as skipping the action represented by the transition. Because the initial marking µ0 of a Boolean system contains at least one high token and because BN is faithful with respect to activation, each reachable marking µ of BS also contains at least one high token.
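The following Java fragment sketches, purely for illustration, how a binding element of a safe Boolean system might be represented and fired; it assumes that the guard formula has already been evaluated when the binding was constructed, and all names are our own.

```java
import java.util.*;

/** Sketch of firing a binding element (t, b) in a safe Boolean system: every place
 *  carries at most one token, whose colour is true (high) or false (low). The
 *  binding b assigns a truth value to the variable annotating each arc of t and
 *  is assumed to satisfy the guard formula of t already. */
final class BooleanFiring {

    /** (t, b) is enabled iff each pre-place carries a token whose colour equals the
     *  value that b assigns to the variable annotating the corresponding incoming arc. */
    static boolean enabled(Map<String, Boolean> binding,        // arc variable -> truth value
                           Map<String, String> preArcVariable,  // pre-place -> arc variable
                           Map<String, Boolean> marking) {      // place -> token colour
        for (Map.Entry<String, String> e : preArcVariable.entrySet()) {
            Boolean token = marking.get(e.getKey());
            if (token == null || !token.equals(binding.get(e.getValue()))) return false;
        }
        return true;
    }

    /** Firing consumes the tokens on the pre-places and produces on each post-place a
     *  token whose colour is the value bound to the variable of the outgoing arc. */
    static Map<String, Boolean> fire(Map<String, Boolean> binding,
                                     Map<String, String> preArcVariable,
                                     Map<String, String> postArcVariable,
                                     Map<String, Boolean> marking) {
        if (!enabled(binding, preArcVariable, marking))
            throw new IllegalStateException("binding element is not enabled");
        Map<String, Boolean> next = new HashMap<>(marking);
        preArcVariable.keySet().forEach(next::remove);
        postArcVariable.forEach((place, variable) -> next.put(place, binding.get(variable)));
        return next;
    }
}
```

In this representation the low binding would simply map every arc variable to false, so firing it moves only low tokens.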
The concept of occurrence sequences introduced for ordinary Petri nets in Section 2 generalizes to a Boolean system: an occurrence sequence σ is a finite sequence of binding elements such that each binding element is enabled at the marking created by its predecessors. We denote by µ0 →σ µk the fact that firing σ yields the marking µk. A transition t ∈ BS is high-live iff from each reachable marking a marking is reachable which enables a high binding element of t; BS is high-live iff each transition is high-live.
iii) A transition t ∈ BS is dead iff no reachable marking of BS enables any binding element of t besides the low binding. A reachable marking µ_dead is a synchronization deadlock of a transition t ∈ BS iff at least one pre-place of t is high-marked at µ_dead but no binding element of t is enabled at µ_dead. The Boolean system BS is free from synchronization deadlocks iff no reachable marking is a synchronization deadlock.
Liveness of a binding element (t, b) is a much stronger condition than high-liveness of the corresponding transition t ∈ BS. Theorem 3.11 will make precise the equivalence of high-liveness and the absence of synchronization deadlocks. However, verifying that all binding elements of a transition are live requires more refined methods from model checking, see Proposition 4.8, ii). Our correctness criterion for Boolean systems is well-behavedness in the sense of Definition 3.6.
3.6 Definition (Well-behavedness)
A Boolean system BS is well-behaved iff it is safe and live with respect to all its high bindings; otherwise BS is named ill-behaved.
We will see that verification of safeness is the easy part. Any discrete Petri net morphism maps occurrence sequences of its source to occurrence sequences of its target; in particular, the skeleton morphism maps reachable markings of BS onto reachable markings of skel BS.
3.7 Lemma (Deriving safeness of a Boolean system)
If the skeleton skel BS is safe, then BS is safe as well. Proof. If no reachable marking of skel BS marks a place with more than a single token, the same holds true for BS, q. e. d.
The lifting problem considers the converse situation.
Definition (Lifting property of a morphism)
A Petri net morphism has the lifting property iff each occurrence sequence of its target is the image of an occurrence sequence of its source. To decide whether a morphism has the lifting property is not an easy task in general. For the skeleton of a Boolean system, Lemma 3.9 solves the lifting problem.
Lemma (Lifting property of the skeleton)
Let σ_skel be an occurrence sequence of skel BS starting at the marking skel(µ), and assume that the initial place p0 is high-marked at µ. Then σ_skel has a lift σ to BS containing a corresponding sequence of high binding elements. Proof. We may assume that σ_skel is a single transition.
Corollary (Skeleton of a safe and high-live Boolean system)
A safe and high-live Boolean system BS is strongly connected, and its skeleton skel BS is live and safe. Proof. High-liveness implies that BS is free of synchronization deadlocks.

Non-deadness of a bounded and strongly connected free-choice system implies its liveness, see Theor. 4.31 in [DE1995]. We derive a similar property for Boolean systems as a consequence of the lifting property of the skeleton. Theorem 3.11 is a slight generalization of a result of Genrich and Thiagarajan, who proved the statement for Boolean systems with only AND- or XOR-transitions, see Theor. 2.12 and Lemma 3.10 in [GT1984].
Model checking of Boolean systems
In the present chapter we combine the theory of finite complete prefixes with the propositional logic of Boolean transitions to derive a model checking algorithm for Boolean systems BS, see Algorithm 4.9. Our investigation is based on the skeleton morphism skel: BS → skel BS, which has been introduced in Definition 3.1, iv). We first apply the prefix theory to skel BS. This task is greatly facilitated by the fact that all places of skel BS are unbranched. As a consequence, all branching processes of skel BS, in particular the unfolding and finite complete prefixes, are processes already.

Definition (Colouring of a process and reachable base markings)
iii) The set of reachable base markings of BON originating from µ0 is the smallest set ReachBase of base markings of BON with the following properties: it contains the base marking induced by µ0, and, on the level of the ordinary net underlying BON, it is closed under firing complete occurrence sequences of the base processes with base markings from ReachBase.

The concept of the successor graph of reachable base markings permits us to split arbitrary occurrence sequences of a safe Boolean system BS into occurrence sequences of fixed length. Their length depends only on the choice of a finite complete prefix of an unfolding of skel BS. Each of these fragmented occurrence sequences of bounded length can be studied with one of the base processes.
Remark (Permutation of occurrence sequences)
We recall that an enabled transition of a T-system loses its firing concession only by firing itself. This fact applies to the skeleton of a Boolean system BS and generalizes to binding elements of BS: consider a fixed Boolean transition t ∈ BS. An enabled binding element (t, b) loses its concession only by firing itself or by the firing of another enabled binding element of the same transition t. As a consequence, Lemma 3.24 in [DE1995] about the permutation of occurrence sequences in T-systems applies mutatis mutandis also to occurrence sequences of BS.
Lemma (Characterization of reachability)
The corresponding base processes provide occurrence sequences between the successive base markings, and the assumed reachability of µ in (BON, µ_b) provides an occurrence sequence to µ. The catenation of all occurrence sequences from above is an occurrence sequence of BS which leads to µ.

We now introduce a series of formulas from propositional logic which represent certain behavioural properties of the system, see Table 2 and Definition 4.6. The satisfiability of these formulas is equivalent to the presence or absence of these properties. The column "Property" in Table 2 looks ahead to Proposition 4.8.
Table 2 (columns: Name, Definition, Context, Property)
Definition (Marking formula and enabling formula)
For a marking µ which marks each place of BN with at most one token, we define its marking formula as the conjunction of the arc variables of all high-marked places and of the negated arc variables of all low-marked places. For a binding element (t, b) of a Boolean transition t of BN we define its enabling formula with respect to µ as the marking formula of the enabling marking of (t, b), considered as a binding element of BON.
Remark (Reachability and satisfiability)
With the notations of Definition 4.6: according to Lemma 4.5, the reachability of a marking in one of the base processes is equivalent to the satisfiability of the corresponding reachability formula.

Apparently, binary opening transitions do not have any synchronization deadlocks. As a consequence their deadlock formula is the constant false. Table 3 shows the deadlock formulas of binary closing Boolean transitions of different logical type. Their arc annotations refer to Figure 3. The column "Distinguished enabling formula(s)" will be referred to when explaining Algorithm 4.9.
Table 3 (columns: Logical type, Deadlock formula, Distinguished enabling formula(s))
i) The Boolean system BS is free from synchronization deadlocks iff for each closing Boolean transition t ∈ BS the corresponding deadlock formula is not satisfiable.
ii) If BS is free from synchronization deadlocks, then BS is live with respect to all its high bindings iff for each transition t ∈ BS and each high binding element (t, b) the corresponding enabling formula is satisfiable.
Proof. ad i) The deadlock formula is a disjunction of reachability formulas; it is satisfiable iff at least one of these reachability formulas is satisfiable. A reachability formula refers to the reachability of a marking in one of the base processes of BON. Therefore the statement follows from Remark 4.7.
ad ii) The argument for a high binding element (t, b) follows analogously when the latter is considered a binding element of BN, q. e. d.
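To make the satisfiability checks of Proposition 4.8 concrete, the following self-contained Java sketch implements a resolution ("resolvent") procedure of the kind used by our prototype SAT solver and applies it to a CNF rendering of the deadlock situation of a binary closing AND-transition, i.e. one pre-place high and the other low. The example formula and all identifiers are ours; the actual formulas of Table 3 also take reachability within the base processes into account, which this toy formula omits.

```java
import java.util.*;

/** Minimal resolution-based satisfiability check applied to a deadlock-style formula.
 *  Clauses are sets of non-zero integers: v stands for an arc variable, -v for its negation. */
public final class ResolutionSatDemo {

    /** Returns true iff the clause set is satisfiable (saturation without the empty clause). */
    static boolean isSatisfiable(Collection<Set<Integer>> cnf) {
        Set<Set<Integer>> clauses = new HashSet<>();
        for (Set<Integer> c : cnf) if (!isTautology(c)) clauses.add(new HashSet<>(c));
        boolean changed = true;
        while (changed) {
            changed = false;
            List<Set<Integer>> current = new ArrayList<>(clauses);
            for (int i = 0; i < current.size(); i++)
                for (int j = i + 1; j < current.size(); j++)
                    for (Set<Integer> r : resolvents(current.get(i), current.get(j))) {
                        if (r.isEmpty()) return false;              // empty clause: unsatisfiable
                        if (!isTautology(r) && clauses.add(r)) changed = true;
                    }
        }
        return true;
    }

    static boolean isTautology(Set<Integer> clause) {
        for (int lit : clause) if (clause.contains(-lit)) return true;
        return false;
    }

    /** All resolvents of two clauses on complementary literals. */
    static List<Set<Integer>> resolvents(Set<Integer> a, Set<Integer> b) {
        List<Set<Integer>> result = new ArrayList<>();
        for (int lit : a)
            if (b.contains(-lit)) {
                Set<Integer> r = new HashSet<>(a);
                r.remove(lit);
                for (int other : b) if (other != -lit) r.add(other);
                result.add(r);
            }
        return result;
    }

    public static void main(String[] args) {
        // Deadlock situation of a binary closing AND-join ("one pre-place high, the other low"):
        // x1 XOR x2 == (x1 OR x2) AND (NOT x1 OR NOT x2), with the variables encoded as 1 and 2.
        List<Set<Integer>> andJoinDeadlock = List.of(Set.of(1, 2), Set.of(-1, -2));
        System.out.println(isSatisfiable(andJoinDeadlock));          // prints true
    }
}
```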
The main result of the paper is the following Algorithm 4.9 for the verification of a Boolean system with a live and safe skeleton. Output:
• List of Boolean transitions of BS which suffer a synchronization deadlock.
• List of transitions of BS which are not live with respect to all their high bindings.

However, this equality does not hold for any minimal finite complete prefix. v) Only for closing Boolean transitions of logical type AND, XOR, AND_XOR and XOR_AND (see Table 3) does Algorithm 4.9 have to investigate possible synchronization deadlocks. The check is performed as a satisfiability check according to Proposition 4.8, i).
vi) After verifying that the Boolean system BS is free of synchronization deadlocks, we can apply the lifting Lemma 3.9. As a consequence, from each reachable marking of BS a marking is reachable which marks the pre-place of a given opening transition of BS with a high token. That marking therefore enables all high binding elements of the transition.
Similarly, each high binding element of a closing transition of logical type AND or XOR is live: a marking is always reachable which marks a given pre-place of the transition with a high token and the other pre-place with a second token, such that the transition is not in a synchronization deadlock.
Only for closing transitions of logical type AND_XOR, XOR_AND and OR is a separate investigation needed. Table 3 shows those enabling formulas Enabl(t, b) which Algorithm 4.9 has to check according to Proposition 4.8, ii).
Application to EPCs
One of the first questions that comes up when checking a given EPC for correctness is: • Which are the boundary events of the EPC?
An EPC must either be without any boundary nodes or it must be bounded by events, having at least one in-event and one out-event. This is a syntactic property which can easily be checked. In-events are initial or triggering events; in the case of loops, inner initial events may also exist. Similarly, out-events are the terminal or goal events of the process; in the case of loops, inner terminal events may also exist. The situation becomes more difficult when the EPC has more than one single in-event or more than one single out-event. In that case the second question is: • Which combinations of in-events and which combinations of out-events are intended by the modeller of the EPC?
This question can no longer be answered by a syntactical analysis. In general it cannot even be answered by a semantical analysis. Instead the answer must be known before any semantical analysis can start. Sometimes the boundary events of the EPC are annotated with process indicators referring to processes at the next higher level of a hierarchical process model. Then the possible combinations of the boundary events derive top-down from the possible event combinations within the process model one level higher. But often such a model is lacking. To clarify the intention of the modeller in this case, one can use an algorithm to generate a proposal for the possible event combinations. The algorithm applies mirror reflection to both the first logical connectors after the in-events and the last logical connectors before the out-events. However, if the modeller is not at hand and his intention cannot be read off from the names of the events, the model checker himself has to make an educated guess.
After these two questions have been answered, the verification of the EPC continues with adding a start/end-connection: we introduce a separate event "start/end" and connect this distinguished event by arcs and logical connectors to all intended combinations of start events. Similarly, we connect all intended combinations of end-events by logical connectors and arcs with the distinguished event. After this kind of short-circuiting, the resulting EPC should be strongly connected; otherwise the structure of the EPC is considered to be faulty. All following steps of the verification will presuppose a short-circuited, strongly connected EPC.
The easiest case is the verification of AND/XOR EPCs. These are EPCs with logical connectors of type AND or XOR only. To define the semantics of AND/XOR EPCs and for their verification no Boolean systems are necessary. Instead the EPC translates to a free-choice system: events translate to places, functions to transitions, logical connectors of type AND translate to transitions, and logical connectors of type XOR translate to places. Possibly some additional unbranched places or transitions have to be introduced for syntactical reasons. Each place which represents a start event gets marked with a token. The resulting ordinary Petri net is a free-choice system. It defines the free-choice semantics of the EPC [Aal1999]. Algorithms to verify liveness and safeness of free-choice systems resulting from AND/XOR EPCs are well-established, see Theor. 4.2 in [ES1990] or Theor. 5.8 in [DE1995].
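The node mapping just described can be sketched as follows; this is only an illustration with our own type and class names, and it omits the arcs as well as the auxiliary places and transitions mentioned above.

```java
import java.util.*;

/** Rough sketch of the node mapping of an AND/XOR EPC onto a free-choice Petri net:
 *  events and XOR-connectors become places, functions and AND-connectors become
 *  transitions, and every place representing a start event receives one token.
 *  Arcs and the auxiliary unbranched places/transitions are omitted here. */
final class EpcToFreeChoiceNet {
    enum NodeType { EVENT, FUNCTION, AND_CONNECTOR, XOR_CONNECTOR }

    static final class NetSketch {
        final Set<String> places = new HashSet<>();
        final Set<String> transitions = new HashSet<>();
        final Map<String, Integer> initialMarking = new HashMap<>();
    }

    static NetSketch translate(Map<String, NodeType> epcNodes, Set<String> startEvents) {
        NetSketch net = new NetSketch();
        for (Map.Entry<String, NodeType> node : epcNodes.entrySet()) {
            switch (node.getValue()) {
                case EVENT, XOR_CONNECTOR -> net.places.add(node.getKey());
                case FUNCTION, AND_CONNECTOR -> net.transitions.add(node.getKey());
            }
        }
        for (String startEvent : startEvents)
            net.initialMarking.merge(startEvent, 1, Integer::sum);
        return net;
    }
}
```

Applied to an AND/XOR EPC, the sketch yields the places, transitions and initial marking of the corresponding free-choice system; the arcs would then be copied from the EPC's control flow.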
Of course the free-choice semantics of an AND/XOR EPC can also be obtained from its Boolean semantics, which results from translating the EPC into a Boolean system: events translate to Boolean places, functions to unary Boolean transitions, and logical connectors of type AND and XOR translate to Boolean transitions of the corresponding logical type. Each in-event produces a high token on the corresponding place. In addition, low tokens have to be added such that the skeleton is live and safe. If the resulting Boolean system BS is restricted to the flow of high tokens, then the free-choice system BS_high, which defines the free-choice semantics, is obtained, see [SW2010].

We now address general EPCs with logical connectors of arbitrary type, i.e. the type may differ from AND or XOR.
5.1 Example (Closing OR-connector)
The example from Figure 8 shows an EPC with a closing OR-connector. The modeller has provided the EPC with a single in-event and a single out-event. Therefore it is straightforward for the model checker to short-circuit the EPC.
Due to the OR-join the EPC from Figure 8 does not translate to a free-choice system as long as the difference between XOR and OR is respected. However, after translating the OR-connector to a Boolean transition of logical type OR the Boolean semantics of this EPC is well-defined. In addition Algorithm 4.9 verifies that the resulting Boolean system is well-behaved.
We are now returning to our running example "Loan request" from Figure 1. We collect all steps for its verification that we have developed in this paper.
Example (Loan request)
The EPC "Loan request" from Figure 1 has a logical connector of type different from AND or XOR. The verification of the EPC proceeds along the following steps: • Translation of the EPC into a strongly-connected Boolean system Algorithm 4.9 outputs that the Boolean system from Figure 4 is ill-behaved: The Boolean system is deadlock free, but its closing Boolean transition 4 t of logical type OR is not live with respect to all its high-bindings. After changing its logical type to AND_XOR the Boolean system becomes well-behaved, see Table 5. Table 4 and Table 5 the meaning of the German words is "nicht = not", "und = and". The authors of [MA2008] have employed a subtle logical construct to provide the EPC with an opening AND_XOR-connector. They used a combination of two XOR-and one opening ANDconnector, Figure 9 (left hand side). To close the alternative the authors did not use the formally analogous construction with two XOR-and one closing AND-connector, Figure 9 (middle). The closing construction would have been erroneous, because it does not synchronize the decisions made at the indicated two XOR-splits in Figure 9 (middle). Instead the authors use one OR-join to close the AND_XOR alternative Figure 9 (left hand side). But one of the three firing modes of the closing OR will never be activated.
Different from the authors of [MA2008] we therefore consider the EPC from Figure 4 ill-behaved.
In accordance with the above model checking result we propose to model the EPC with a pair of AND_XOR-alternatives like in Figure 9 (right hand side).
As a final example we consider an EPC from [DVV2006] proposed by the authors as a visualization of their method of EPC verification.
Example (EPC describing the EPC verification process)
The EPC from Figure 10 illustrates the verification procedure from [DVV2006] and reproduces Fig. 2 from [DVV2006]. The EPC exemplifies the difficulty for the model checker to short-circuit a given EPC. Which are the intended initial events, and which are the intended final events of the EPC from Figure 10?
Figure 10: EPC describing the EPC verification process, see [DVV2006]
The EPC has three in-events "EPC ready to be verified" (S1), "Possible combinations of initial Events" (S2) and "Allowed final Markings" (S3) as well as five out-events "EPC is correct and executable" (E1), "EPC can be correct. Further investigation necessary" (E2), "EPC is incorrect, Problem has to be solved" (E3), "Initial Events are known" (E4) and "Possible final Markings known" (E5). From their annotation the reader cannot read off all intended combinations. But the translation of the EPC to a Petri net in [DVV2006], Fig. 3, produced by the modellers themselves, shows that, surprisingly, some of these events are not intended as boundary events at all. The pairs of events E4 and S2 as well as E5 and S3 seem to be annotations intended as hints for the human reader. The two components of each pair should therefore better be linked by two functions "t" and "u". Therefore we assume that the EPC intended by the modeller looks like Figure 11. It has a single in-event (S1) and an XOR-combination of the three out-events E1, E2 and E3.
Figure 11: Intended EPC describing the EPC verification process
The OR-connector named OR_1 in Figure 11 decomposes into two binary OR-connectors as shown in Figure 12.
Figure 12: Decomposition of the OR-connector OR_1 from Figure 11 into the binary OR-connectors OR_11 and OR_12
When the authors of [DVV2006] translate the EPC to a resembling Petri net, they transform the OR-join OR_11 in Figure 12 to a closing transition and the OR-join OR_12 to a closing place. And they transform the XOR-join XOR_1 from Figure 11 to a closing place too. The authors do not give any justification for these transformations. In particular, they do not explain why they skip the difference between the OR-join and the XOR-join. Figure 13 shows the binary Boolean system which results from short-circuiting and translating the EPC from Figure 11. Some unary transitions have been skipped in order to focus on the control flow.
The final events E1, E2 and E3 of Figure 11 translate to the places A_19, A_20 and A_21 of Figure 13. The initial event S1 translates to the marked place at the beginning of arc A_0. The two OR-joins OR_11 and OR_12 from Figure 12 translate respectively to the Boolean transitions t6 and t7 of logical type OR, while the XOR-join XOR_1 from Figure 11 translates to the Boolean transition t10 of logical type XOR.
Conclusion and Outlook
We start by comparing our proposal for the model checking of Boolean process models with the results of the related papers named in the introduction, Section 1, see Table 8. In our opinion the main differences between the methods No. 1 to 5 and method 6 are the following: • Method 6 considers a Boolean process model as a high-level construct and uses the high-level language of Boolean systems from the class of coloured Petri nets. The other methods, which also use Petri nets as a reference language, always employ ordinary Petri nets.
We think that the branching of the control flow into logical AND-, XOR-, OR-, AND_XOR- and other types of splits and joins cannot be adequately modelled by low-level constructs.
Of course each high-level Petri net can be flattened into an ordinary Petri net. But during this step much information gets lost which would better be kept together.
• Different from method 3, which is the only other model checking method in Table 8, method 6 uses model checking on high-level Petri nets. In our opinion high-level systems should be checked with high-level methods, as long as this is possible. For Boolean systems even the elementary means of propositional logic are sufficient.
• In our approach from [LSW1998] the semantics of an EPC is defined as the semantics of the corresponding Boolean system. Due to the concept of low tokens, the semantics is the usual Petri net semantics, which is well-defined for each type of logical connector.
To the best of our knowledge there is no other correct semantics of EPC constructs like the OR-join or the AND_XOR-join. We are well aware of different proposals in the literature, but often the proposed semantics is a global semantics and therefore seems at risk of the "vicious circle" [Kin2006].
• Tool support for method 6 is under construction. At present we are working on an interface between the tools CPN and Eclipse, in order to export Boolean systems from CPN to Eclipse, the run-time environment of our implementation of Algorithm 4.9. We also plan to link a fully developed SAT solver.
The theory of branching processes has been generalized from its original target of ordinary Petri nets to branching processes of high-level Petri nets [KK2003]. Different from this approach, which is based on low-level occurrence nets, our concept of a base process from Definition 4.1 studies a given high-level system by means of another high-level net, which is the colouring of a low-level occurrence net. | 2011-05-03T11:49:18.000Z | 2011-05-03T00:00:00.000 | {
"year": 2011,
"sha1": "5dc30e80c499c949d08d687a01d79a4b6b1bd7e7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5dc30e80c499c949d08d687a01d79a4b6b1bd7e7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
15448 | pes2o/s2orc | v3-fos-license | Angioimmunoblastic T-cell Lymphoma Associated with IgA Nephropathy.
Few cases of IgA nephropathy with angioimmunoblastic T-cell lymphoma (AITL) have been reported. We herein present the case of a 79-year-old Japanese man with AITL and IgA nephropathy. The patient presented with generalized edema, fatigue, and fever. Laboratory investigations revealed polyclonal gammopathy with a high level of IgA, microscopic hematuria, proteinuria, and some other immunological abnormalities. Computed tomography revealed generalized lymphadenopathy. A diagnosis of AITL and IgA nephropathy was made based on inguinal lymph node and renal biopsies. Following chemotherapy for AITL, the patient's edema, microscopic hematuria, and proteinuria were alleviated. These findings indicate that IgA nephropathy may occur in AITL patients.
Introduction
Angioimmunoblastic T-cell lymphoma (AITL) is a rare tumor that accounts for 1% of all lymphoma cases. It is characterized by the loss of the lymphoid architecture, the presence of pleomorphic cellular infiltrates, and the proliferation of microvasculature in the lymph nodes. Patients usually present with fever, generalized lymphadenopathy, skin rash, polyclonal hypergammaglobulinemia, Coombs-positive anemia, thrombocytopenia, and hypocomplementemia (1). Renal involvement is rare in patients with AITL. However, some cases involving proteinuria (2), nephrotic syndrome (3), acute renal failure (4,5), and membranous nephropathy (6) have been reported. In contrast, IgA nephropathy with AITL has never been reported. We herein report the case of a Japanese man presenting with AITL and IgA nephropathy.
Case Report
A 79-year-old Japanese man was admitted to our department with generalized edema, fatigue, fever, and weight gain of one week in duration. Two years prior to admission, the patient was diagnosed with IgM-κ monoclonal gammopathy. Follow-up examinations that were performed in another hospital once every six months showed no signs of progression. Aside from monoclonal gammopathy, the patient had a history of diabetes mellitus, hypertension, and benign prostatic hyperplasia. The patient had no family history of renal disease, leukemia, or lymphoma. Amlodipine and sitagliptin were prescribed and were taken on a regular basis. He did not report any recent changes in medications or their dosages, and he experienced no other systemic symptoms.
On physical examination, his blood pressure was 140/88 mmHg, his pulse rate was 100/min, his respiratory rate was 24/min with an O2 saturation of 95% on room air, and his body temperature was 37.1°C. Generalized lymphadenopathy and edema were detected. The results of a cardiovascular examination were normal, and auscultation revealed decreased bilateral breath sounds in the lower lung fields. An abdominal examination was unremarkable with no obvious hepatosplenomegaly; a neurological examination was also unremarkable.
Computed tomography revealed the swelling of the cervical, axillary, upper mediastinal, abdominal, and inguinal lymph nodes. In addition, hepatosplenomegaly and thoracoabdominal fluids were observed. A histopathological examination of inguinal lymph node and renal biopsy specimens and bone marrow aspiration was performed for further evaluation.
As shown in Fig. 1, the near-complete effacement of the normal lymph node architecture was observed; this was associated with marked vascular proliferation and aggregates of medium-sized atypical lymphoid cells in the inguinal lymph nodes. These atypical lymphoid cells showed clear to pale cytoplasm and had convoluted nuclei with dispersed chromatin. The immunophenotype of these cells was CD3+, CD20-, CD5+, CD4+, and CD8-. Although the cells were negative for CD10 expression, some of these cells also expressed C-X-C motif chemokine ligand 13. CD21 immunohistochemistry highlighted the expansion of follicular dendritic cells. Epstein-Barr virus-encoded small RNA (EBER) in situ hybridization revealed the marked infiltration of Epstein-Barr virus-positive B cells. The distribution of these EBER+ cells was consistent with the distribution of CD20+ cells and not consistent with the distribution of CD3+ cells. Although some EBER+ and CD20+ B-cells were medium to large in size, the majority were small. In addition, the abnormal morphology of the EBER+ and CD20+ B-cells and the architecture of these proliferated cells did not extend the range of reactive B-cell proliferation. Although no chromosomal abnormalities were detected, T-cell receptor rearrangement was found using Southern blotting. These findings were consistent with the diagnosis of AITL. A histological evaluation of the bone marrow aspirate revealed the lymphoma cell involvement.
Renal biopsy showed malignant lymphoma invasion and an IgA nephropathy pattern (Fig. 2). Light microscopy showed the focal infiltration of small to medium-sized lymphoid cells with convoluted nuclei in the periglomerular and peritubular regions. The majority of these lymphoid cells demonstrated immunoreactivity to CD3+, interspersed with some CD20+ cells. Some EBER+ cells were also detected in the interstitium. An immunofluorescence examination revealed mesangial deposits of IgA and complement component 3. Small, dense deposits in the mesangial matrix were visualized using electron microscopy. The diagnosis of AITL and IgA nephropathy was made based on the clinical and histopathological findings and the results of the laboratory examinations. Before the diagnosis, the patient was initially treated with prednisolone (60 mg) per day. Five days later, the patient's edema was alleviated and his urinary protein level decreased to 0.58 g/gCr, but the hematuria persisted. After the diagnosis of AITL was made, steroid therapy was administered, followed by chemotherapy with intravenous cyclophosphamide (750 mg/m²), pirarubicin (50 mg/m²) and vincristine (1.4 mg/m²); oral prednisolone (60 mg per day) was administered from days 1 to 5; the duration of one treatment cycle was 21 days. Following the completion of six chemotherapy courses, the patient's symptoms gradually improved, and the generalized lymphadenopathy and edema were alleviated. Following the completion of chemotherapy, the patient's immunoglobulin and serum creatinine levels, proteinuria, and hematuria normalized. Repeated computed tomography examinations showed that the lymphadenopathy and other abnormal findings were resolved. This patient was considered to have achieved complete remission; no subsequent recurrences of lymphadenopathy or edema have been observed.
Discussion
We described the case of a patient with AITL with concomitant IgA nephropathy. We propose that these two diseases could be pathologically associated from two points of view. First, the serum IgA in AITL patients can be pathogenic and may cause extranodal involvement. The phenomenon of excessive serum IgA production in AITL patients has also been reported and is believed to result from the excessive differentiation of IgA-plasmablasts, which is induced by transforming growth factor-β1 and interleukin-21, which are released by neoplastic T follicular helper cells (7). Furthermore, elevated serum IgA levels have been shown to be a novel prognostic factor in patients with AITL, although the underlying mechanism remains to be clarified (8,9). With respect to the direct pathological contribution of IgA in AITL patients, several IgA-related extranodal diseases have been reported to be associated with AITL. These include IgA-related leukocytoclastic vasculitis (10,11), atypical linear IgA dermatosis (12), and IgA pemphigus (13). These findings suggest that AITL patients may produce pathogenic IgA, which may result in extranodal diseases, including nephropathy.
Second, there was chronological coincidence in the duration of the urinary abnormalities and the systemic lymphadenopathy in the present case. This patient's microscopic hematuria, proteinuria, and lymphadenopathy were observed at the same time, and these abnormalities gradually diminished with the repeated chemotherapy. We therefore assumed that the patient's IgA nephropathy was associated with AITL.
It is possible that the patient had idiopathic IgA nephropathy because the high circulating levels of IgA alone could not have caused the disease in the majority of these patients. Elevated circulating levels of IgA have, however, been reported in some patients with IgA nephropathy (14). Furthermore, we did not have any direct evidence of the relationship between the IgA deposits in the mesangium and circulating IgA. However, since mesangial IgA is probably derived from a circulating pool of pathogenic IgA (15), and since the polyclonal immune activation observed in many multisystemic autoimmune diseases like AITL could cause IgA nephropathy (14), we could not deny the association between IgA nephropathy and AITL. The further understanding of this relationship requires an investigation to determine the precise pathological mechanisms underlying the tissue damage that is caused by serum IgA in AITL patients.
The direct invasion of the kidney in association with AITL also seemed to be a possible cause of the microscopic hematuria, proteinuria, and renal failure that were observed in the present case; however, the pathological findings of the kidney biopsy differed from those of a previously reported case of AITL with direct kidney invasion (16). Moreover, kidney enlargement, which is frequently seen in cases with the massive infiltration of lymphoid cells into the renal parenchyma (17,18), was not observed in our patient. We therefore believe that IgA nephropathy was the main disease in our patient.
The severity of IgA-related extranodal disease in patients with AITL may be a useful marker for evaluating the efficacy of chemotherapy. In fact, the grade of microscopic hematuria gradually decreased and finally disappeared after the completion of six courses of chemotherapy, which was paralleled by a reduction in tumor volume. Other cases of extranodal disease associated with IgA in AITL have also shown a gradual response to chemotherapy (10)(11)(12)(13). Furthermore, the reappearance of purpura was observed along with the recurrence of AITL in a case of IgA-associated leukocytoclastic vasculitis in AITL (10). Considering the simplicity of the evaluation methods for patients with reported extranodal disease, such as screening for microscopic hematuria or skin lesions, the recognition of IgA-related extranodal disease in patients with AITL seems to be important in the follow-up of AITL.
In conclusion, AITL can be associated with IgA nephropathy. As several IgA-related extranodal diseases seem to be useful markers of the severity of AITL, clinicians should assess IgA-related lesions in patients with AITL.
The authors state that they have no Conflict of Interest (COI). | 2018-04-03T01:54:55.019Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "9334c7002093a0b7eb51e363df04ee97f94be40e",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/56/1/56_56.7315/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9334c7002093a0b7eb51e363df04ee97f94be40e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218627810 | pes2o/s2orc | v3-fos-license | Diels-Alder cycloadditions in water for the straightforward preparation of peptide-oligonucleotide conjugates
The Diels-Alder reaction between diene-modified oligonucleotides and maleimide-derivatized peptides afforded peptide–oligonucleotide conjugates with high purity and yield. Synthesis of the reagents was easily accomplished by on-column derivatization of the corresponding peptides and oligonucleotides. The cycloaddition reaction was carried out in mild conditions, in aqueous solution at 37°C. The speed of the reaction was found to vary depending on the size of the reagents, but it can be completed in 8–10 h by reacting the diene-oligonucleotide with a small excess of maleimide-peptide.
INTRODUCTION
In the past three decades, chemists involved in the preparation of synthetic oligonucleotide analogs suitable for use in the control of gene expression have introduced modifications in virtually every part of oligonucleotide chains [see, for instance (1)(2)(3)].
Among such modifications, the covalent attachment of peptides to oligonucleotides has received considerable attention because of the potential applications of peptideoligonucleotide conjugates, such as the development of more effective oligonucleotide-based technologies. Peptides have been covalently linked to oligonucleotide chains for several purposes: with the aim of preparing nucleases (4)(5)(6), for the introduction of reporter groups (7), for the study of DNAprotein interactions (8), to investigate the molecular requirements for enzyme activity (9), to evaluate how metals or metal-based drugs behave when DNA and proteins are in close proximity (10,11), to increase the specificity of RNAinteracting oligonucleotides (12) and to facilitate the transport of antisense oligonucleotides through cell membranes (13)(14)(15)(16)(17). Linking peptides to oligonucleotides has also been described as rendering oligonucleotides more resistant to exonucleases (18)(19)(20), and, in the case of cationic peptides, as accelerating duplex formation (21). Positively charged and hydrophobic peptides stabilize short duplexes (22,23), but their influence on the thermal stability of duplexes with more than 15 bases is much weaker (12,24). Triplex stabilization by appending cationic peptides has also been described (25).
Since both peptides and oligonucleotides are biomolecules extremely rich in functional groups, finding reaction conditions that yield structurally defined conjugates rather than mixtures of products is far from being trivial. This problem has been tackled either by protecting non-participating functional groups, or by modifying the biomolecules with additional functional groups expected to drive the reaction as desired (26,27) [see also references (28)(29)(30) for examples of recent papers on the development of methodology for the synthesis of peptide-oligonucleotide conjugates].
The Diels-Alder reaction is a very attractive methodology for the conjugation of biomolecules, since it is fast and efficient in aqueous media (31)(32)(33) in addition to being chemoselective. It has been used, for instance, for the modification of peptides (34), to link covalently carbohydrates to proteins (35), for the immobilization of oligonucleotides on glass surfaces (36), and, most often, for the labelling of DNA and RNA fragments with biotin or fluoresceine derivatives (37)(38)(39)(40)(41)(42). The Diels-Alder approach, which involves a diene and a dienophile not present in any biomolecule, allows for a chemoselective reaction without the need for protecting groups. Here, we describe the use of the Diels-Alder cycloaddition in water for the preparation of peptide-oligonucleotide conjugates incorporating all the nucleobases and most trifunctional amino acids. These conjugates were obtained by the reaction between an acyclic diene linked to the oligonucleotide chain and a maleimide-derivatized peptide (Figure 1).
MATERIALS AND METHODS
A [Vydac C18]-filled glass column (22 × 2 cm) was used for medium pressure liquid chromatography (MPLC), using aqueous and ACN solutions containing 0.1% TFA in peptide purification, and 0.05 M ammonium acetate and 1:1 ACN/H2O mixtures in oligonucleotide purification. Elution was carried out by connecting a piston pump to the mixing chamber of a gradient-forming device and to the top of the glass column. The mixing chamber of the gradient-forming device was the flask containing solvent A, which was connected through a stopcock to the flask containing solvent B.
Sephadex G-25 was used for gel filtration (elution with 0.05 M aq. ammonium acetate).
Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometric analysis was carried out using a Voyager-DERP (Applied Biosystems) instrument and the following conditions: oligonucleotides and conjugates: trihydroxyacetophenone/ammonium citrate, negative mode, linear (unless otherwise indicated); peptides: 2,5dihydroxybenzoic acid, positive mode, reflector. Calculated monoisotopic mass values for neutral compounds are indicated in all cases. The exact mass spectrometric characterization data were obtained using an Agilent 1100 LC/MS-TOF. Electrospray mass spectrometric analysis was carried out using a Micromass ZQ instrument.
The amounts of isolated oligonucleotides and conjugates were determined spectrophotometrically, and peptides were quantified by amino acid analysis after acid hydrolysis (6 M HCl for 1.5 h at 160°C).
Solution synthesis of diene-TT
T-3′-O-Pac. To a solution of 2.0 g of 5′-O-(4,4′-dimethoxytrityl)-2′-deoxythymidine (3.7 mmol) and 2.8 g of phenoxyacetic anhydride (9.8 mmol) in 50 ml of anh. THF was added 1.5 ml of pyridine (18.6 mmol), and the mixture was stirred overnight (Pac = phenoxyacetyl, THF = tetrahydrofuran). Removal of the solvent at reduced pressure afforded a yellowish oil, which was dissolved in DCM (75 ml) and extracted twice with a 10% aq. NaHCO3 solution (100 ml, 2×) (DCM = dichloromethane). The aqueous phase was re-extracted with 50 ml of DCM, and the combined organic phases were dried over anh. Na2SO4, filtered and evaporated to dryness. The resulting oil was redissolved in the minimum amount of AcOEt and precipitated with cold hexanes. After centrifugation, 5′-O-DMT-T-3′-O-Pac was obtained as a white solid (1.82 g, 73% yield). The fully protected thymidine derivative (1.82 g, 2.7 mmol) was dissolved in an 8:2 DCM/MeOH solution, and cooled in an ice bath. A solution of 2.28 g of p-toluenesulfonic acid (12 mmol) in the same solvent was added, and the mixture was stirred for 1 h at 0°C. The reaction was quenched by addition of 100 ml of a 10% aq. NaHCO3 solution. The aqueous phase was extracted with DCM (60 ml), and the combined organic phases were dried over anh. Na2SO4, filtered, and evaporated to dryness. The 3′-protected thymidine derivative was purified by silica gel column chromatography, eluting with DCM/AcOEt mixtures of increasing polarity, and obtained as a white solid (415 mg, 42% yield). ACN with a cannula, and the mixture was stirred for 1.5 h. A total of 1.5 ml of a 6.0 M solution of tBuOOH in decane was added (9.0 mmol), and after 30 min stirring, the mixture was diluted with 50 ml of DCM and extracted with a 2.5% aq. NaHCO3 solution (50 ml, 2×) and brine (50 ml). The organic solution was dried over anh. Na2SO4, filtered, and evaporated to dryness, which afforded a yellowish oil. The DMT group was removed from the fully protected dimer as described above, and T-P (
Solid-phase synthesis of diene-oligonucleotides
Oligonucleotides were assembled on CPG at the 1 mmol-scale following standard procedures (phosphite triester approach). No changes in the synthesis cycle were made for the incorporation of the diene-phosphoramidite. Final deprotection was carried out by reaction with conc. aqueous ammonia, either overnight at 55°C (diene derivatives of oligodeoxynucleotides CATGGCT and GATCTAAAAGACTTT) or for 4 h at room temperature (T8 and T15 derivatives).
Synthesis of maleimide-peptide-NH2
All the amino acids of the peptide sequence were subsequently coupled on a Rink amide p-methylbenzhydrylamine resin (44) (100 mg, loading: 69 mmol NH 2 /mg) following the standard procedures of solid-phase peptide synthesis (10 min treatment with 20% piperidine in N,N-dimethylformamide and reaction with 3 equivalent of Fmoc-amino acid and DCC for 1-1.5 h were used for the deprotection and coupling steps, respectively), with the addition of 3 equivalent of HOBt for the incorporation of 3-maleimidepropanoic acid, and in all couplings in the octapeptide assembly
Diels-Alder conjugation reactions
Reaction between diene-TT and maleimide-dipeptides. Diene-TT and each of the maleimide-peptides were dissolved in water and mixed to give 1 mM solutions containing a 1:1 molar ratio of the reagents (35 nmol). The reaction mixtures were stirred at 37 C, and the progress of the reaction was monitored by HPLC. A total of 90-95% conjugate was formed in reaction times ranging between 4 and 14 h, as assessed from the relative areas of the diene-dinucleotide and the conjugate on the HPLC profile. Two main products, corresponding to the two diastereomeric forms of the target conjugate, were formed in all cases, and showed different retention times upon HPLC analysis (Figure 2A). For MALDI-TOF MS characterization (products isolated by HPLC), see Table 1.
Reaction between maleimide-GTSKLNYL-NH 2 and larger diene-oligonucleotides. The cycloaddition reaction was also carried out in water at 37 C, and monitored by HPLC and MALDI-TOF MS. The molar ratio and reaction time were adjusted in each case depending on the kinetics of the process. The two conjugates were purified by HPLC and characterized by mass spectrometry (Table 1).
The reaction between maleimide-KETAAAKFERQHMDSSTSAA-OH and diene-T8 was repeated two more times at a higher scale, using 1:1 molar ratios of peptide and oligonucleotide, and the mixture analyzed by HPLC only after 20 h. Since the conversion of diene-oligonucleotide to peptide-oligonucleotide conjugate after that time was not quantitative, an amount of maleimide-peptide equimolar to the amount of unreacted diene-oligonucleotide was added, and the mixture was left to react for a further 4 h. The reaction mixture was lyophilized, and the target conjugate was isolated, in all cases, after gel filtration through Sephadex G-25. The following results were obtained: Second assay: 156 nmol-scale, 0.08 mM, 94% conjugation yield after 20 h [HPLC analysis, Figure 2D, (b)]. Purification: gel filtration. Conjugation and purification yield: 59%.
Anal. HPLC: linear gradient from 10 to 40% of B in 30 min, tR 19.7 min. For mass spectrometric characterization, see Table 1.
Maleimide-KETAAAKFERQHMDSSTSAA-OH + diene-T15. Equimolar amounts (190 nmol) of the two reagents were mixed in water (0.18 mM) and stirred at 37°C. The conjugation yield after 8 h was 56% [HPLC analysis, Figure 2E, (a)]. An amount of maleimide-peptide equimolar to the amount of unreacted diene-oligonucleotide was added, and the mixture was left to react for a further 8 h [Figure 2E, (b)]. The target conjugate was obtained after purification by gel filtration. Some fractions required additional repurification to remove small amounts of accompanying diene-oligonucleotide (HPLC, PRP column, linear gradient from 15 to 35% of B in 30 min, A: 0.05 M NH4AcO, B: ACN/H2O 1:1, 2 ml/min). Conjugation and purification yield: 62%. Anal. HPLC: linear gradient from 10 to 40% of B in 30 min, tR 19.3 min. For mass spectrometric characterization, see Table 1.
RESULTS AND DISCUSSION
As stated above, the goal of this study was to prepare peptide-oligonucleotide conjugates using the Diels-Alder cycloaddition. The target conjugates were easily obtained by reaction between diene-oligonucleotides and maleimide-peptides. This alternative is much simpler than synthesizing maleimide-oligonucleotides and diene-peptides, since the maleimide moiety is not stable to the ammonia treatment used for the final deprotection of oligonucleotides (46), and the diene would not resist the acidic conditions in which peptide permanent protecting groups are removed. In all cases, derivatization was carried out after chain assembly following standard protocols, thus yielding 5′-modified oligonucleotides and peptides with the maleimide group linked to the N-terminal.
Several peptides and oligonucleotides with different composition and length were synthesized for this study (Table 1). This allowed a variety of conjugates to be obtained, from dipeptide-dinucleotide conjugates to conjugates incorporating 20mer peptides and 15mer oligonucleotides.
Peptide chains were assembled by solid-phase synthesis using Fmoc/tBu-protected amino acid derivatives (47,48). The commercially available 3-maleimidepropionic acid was coupled onto the N-terminal of the immobilized peptide chains. Peptide elongation on a Rink amide resin (44) afforded maleimide-peptides with a C-terminal carboxamide, and the 2-chlorotrityl chloride resin (45) was used to obtain the maleimide-peptide-OH. Deprotection of the immobilized maleimide-peptides was carried out by treatment with TFA ( Figure 3) in the presence of the appropriate scavengers. Maleimide-peptides were isolated by medium pressure liquid chromatography and characterized by MALDI-TOF MS and amino acid analysis.
The diene-derivatized TT dinucleotide was synthesized in solution at the milligram scale (Figure 4) (see Materials and Methods for details), to obtain the amount of material required for the preliminary conjugation assays with the different maleimide-dipeptides. The other diene-oligonucleotides were assembled on controlled pore glass beads using standard phosphite triester methodology (49). After the subsequent incorporation of the different nucleosides, the phosphoramidite derivative of 3,5-hexadien-1-ol, previously prepared following described procedures (43,40), was coupled onto the 5′ end (Figure 5). Deprotection of diene-oligonucleotides was accomplished by treatment with conc. aq. ammonia, at room temperature for thymidine-containing chains, and at 55°C for oligonucleotides incorporating the four nucleobases. HPLC and mass spectrometric analysis of the diene-oligonucleotide crudes showed that no measurable side reactions took place upon incorporation of the phosphoramidite-derivatized 3,5-hexadien-1-ol onto the oligonucleotide chains, which indicates that the diene group remains stable to oxidation with either tBuOOH or aq. iodine. Diene-oligonucleotides were purified by medium pressure liquid chromatography. Characterization was carried out by MALDI-TOF mass spectrometric analysis, and purity was confirmed by reversed-phase HPLC. Although it is generally observed that longer oligonucleotide chains have higher retention times, the retention time of diene-TT was higher than that of diene-CATGGCT, and diene-T8 and diene-T15 had virtually the same retention time (see Materials and Methods for details). This is probably related to the presence of the hydrophobic diene substituent, whose specific contribution to the chromatographic behavior of the molecule is higher in the shortest diene-modified oligonucleotides.
The Diels-Alder conjugation reaction was carried out simply by mixing aqueous solutions of the diene-modified oligonucleotide and maleimide-derivatized peptide, and stirring the resulting solution at 37°C. The progress of the reaction was monitored by reversed-phase HPLC (see Figure 2) and MALDI-TOF mass spectrometry (analysis in the positive mode may allow detection of maleimide-peptide in the reaction mixture, but the mass spectrometric analysis is usually conducted in the negative mode to allow detection of diene-oligonucleotides and conjugates). The target peptide-oligonucleotide conjugates were formed in all cases and were easily isolated after gel filtration through Sephadex G-25 or reversed-phase HPLC. The conjugate structure was confirmed by MALDI-TOF mass spectrometric analysis (Table 1).
The first assays were carried out with the maleimide-dipeptides and diene-TT (1:1 molar ratio). The reaction was clean and fast, proceeding over a few hours, as shown by HPLC analysis at different reaction times. The four crudes contained two main products (Figure 2A), which were isolated by HPLC and characterized by MALDI-TOF MS. In all cases, the two products had the same mass, that of a Diels-Alder adduct. Only the diastereomers of these small conjugates could be separated by HPLC, but not those of the larger ones (Figure 2B-E). We surmise that these products are the two diastereomeric conjugates resulting from the endo addition, which is usually preferred over the exo addition. Detailed structural analysis of the stereochemistry of these adducts is beyond the scope of this paper. Both diastereomeric products can be useful for biological applications.
It was also observed that the kinetics of the reaction depended on the nature of the peptide, the order of reactivity being KG>SG>AG>>DG (see HPLC traces after 4 h in Figure 2A). These data indicate that the reaction rate increases when favourable interactions between the two moieties, such as those between the negatively charged oligonucleotide and a positively charged peptide, can be established. The polar hydroxyl group of the serine side chain also had a positive effect, and the slowest reaction rate was found when two negatively charged chains were brought together into the same conjugate.
The second set of experiments involved an octapeptide (maleimide-GTSKLNYL-NH2) and two oligonucleotides containing the four nucleobases (diene-CATGGCT and diene-GATCTAAAAGACTTT). These conjugations were carried out using 2:1 peptide/oligonucleotide ratios. The HPLC monitoring of these experiments showed that, as expected, the rate of the reaction varied inversely with the size of the diene reagent. A 70% conjugation yield was achieved in 2.5 h when the diene was linked to the 7mer oligonucleotide, and in 6 h in the case of the 15mer oligonucleotide (Figure 2B and C). The reaction between maleimide-GTSKLNYL-NH2 and diene-GATCTAAAAGACTTT was nearly quantitative in 1.5 h when a 5:1 peptide/oligonucleotide ratio was used.
The last group of experiments was performed with a 20mer peptide (maleimide-KETAAAKFERQHMDSSTSAA-OH) and two oligothymidines, diene-T8 and diene-T15. To evaluate whether large conjugates could be obtained in good yields at the lowest cost in reagents, the diene and the dienophile were reacted in a 1:1 molar ratio. The HPLC analysis of these reactions showed that the conjugation process was not complete in 20-24 h, the reaction rate again showing an inverse dependence on the reagents' size (see details in Materials and Methods). One way to drive these reactions to completion was to increase the reaction time. However, it has been suggested that undesired reactions between the free amines of the amino acid side chains and the maleimide group of the peptide moiety may take place at prolonged reaction times (50). Moreover, although this side reaction is faster in basic conditions, some maleimide can be lost by hydrolysis (51). Therefore, extending the reaction time over a period of days did not seem the best choice, and we decided to add the required extra amount of fresh maleimide-peptide (an equimolar amount with respect to unreacted oligonucleotide, as assessed by HPLC analysis). A further 4 h was enough to complete the conjugation process for diene-T8, and 8 h in the case of diene-T15.
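The "top-up" step described above amounts to a simple stoichiometric calculation. The sketch below illustrates it in Python; the function name, the example quantities and the HPLC-derived fraction of unreacted diene are hypothetical and are not taken from the experiments reported here.

```python
# Hypothetical sketch: estimate the extra maleimide-peptide needed to finish a
# 1:1 Diels-Alder conjugation once HPLC shows unreacted diene-oligonucleotide.

def topup_peptide_nmol(oligo_nmol_initial: float, fraction_unreacted: float) -> float:
    """Return nmol of fresh maleimide-peptide to add, equimolar with the
    unreacted diene-oligonucleotide estimated from HPLC peak areas."""
    if not 0.0 <= fraction_unreacted <= 1.0:
        raise ValueError("fraction_unreacted must be between 0 and 1")
    return oligo_nmol_initial * fraction_unreacted

# Example: 50 nmol of diene-oligonucleotide, HPLC suggests ~10% unreacted after 20 h.
extra = topup_peptide_nmol(oligo_nmol_initial=50.0, fraction_unreacted=0.10)
print(f"Add ~{extra:.1f} nmol of fresh maleimide-peptide")  # -> 5.0 nmol
```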
The reaction between the 20mer peptide and diene-T8 was repeated three times. The HPLC analysis after 20 h showed that either the diene-oligonucleotide had been consumed or the crude contained some 5-10% of unreacted diene. As stated above, in all cases the reaction was completed within 4 h after the required amount of peptide was added. These results show the reproducibility of the method.
The cycloaddition reaction between two diene moieties was not detected in any case. This is as expected, since the energy of activation of this process is much higher, and hence this reaction is much slower than the cycloaddition between the diene and the maleimide groups.
In summary, the results presented here demonstrate the utility of the Diels-Alder reaction for the easy, side-reaction-free preparation of peptide-oligonucleotide conjugates under mild conditions. Diene-oligonucleotides and maleimide-peptides can be easily prepared using standard solid-phase methodologies. Cycloaddition reactions in water are clean, fast and chemoselective, and they allow the straightforward synthesis of large peptide-oligonucleotide conjugates, containing any nucleoside or trifunctional amino acid, which are otherwise difficult to obtain (26,27). Certain adjustments to the methodology described here will be required if cysteine is to be included in the peptide sequence. The reaction between free thiols and maleimide groups, which has been exploited for the preparation of peptide-oligonucleotide conjugates (26), may compete with the desired Diels-Alder cycloaddition, yielding side products in which peptide chains are linked to each other as a result of Michael additions. Furthermore, the thiols of two peptide molecules may react to give disulfide-linked peptide dimers. To avert the production of complex mixtures during conjugation, we intend to carry out the cycloaddition using maleimide-peptides with a protected cysteine residue, and to unmask the thiol group after the conjugation has taken place.
Work is in progress to extend the use of the Diels-Alder reaction for the preparation of different types of bioconjugates, including cysteine-containing peptide-oligonucleotide hybrids. | 2019-08-17T00:54:06.125Z | 2006-02-14T00:00:00.000 | {
"year": 2006,
"sha1": "e1f46192936ddbc0a47887b662b5e42d6a9255fa",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/34/3/e24/6375775/gnj020.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a8979067f4d9340fbf603550fd2e4741f3c68b9",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
44867197 | pes2o/s2orc | v3-fos-license | Effect of Batch Annealing Temperature on Microstructure and Resistance to Fish Scaling of Ultra-Low Carbon Enamel Steel
Zaiwang Liu 1,2, Yonglin Kang 1,*, Zhimin Zhang 2 and Xiaojing Shao 2 1 School of Materials Science and Engineering, University of Science and Technology Beijing, Beijing 100083, China; lzwbeijing2007@163.com 2 Shougang Research Institute of Technology, Beijing 100043, China; 2001zhimin@163.com (Z.Z.); shaoxiaojing@shougang.com.cn (X.S.) * Correspondence: kangylin@ustb.edu.cn; Tel.: +86-10-6233-2983
Introduction
Ultra-low carbon steels are used to produce enamel products, such as bathtubs, kitchen utensils, and decorative panels, because of their extraordinary deep drawability [1,2]. Enamel coatings have been widely applied for the protection of steel products due to their excellent engineering properties, such as corrosion protection, resistance to heat and abrasion, hygiene and ease of cleaning [3]. Fish scaling is one of the most dangerous defects in the production of enameled steel products. Studies have found that it is hydrogen which plays a key role in the formation of fish scaling. The resistance to fish scaling of enamel steel is usually evaluated by the hydrogen permeation test, and the hydrogen permeation value (TH value) is an important parameter characterizing the resistance to fish scaling. A high TH value means good resistance to fish scaling. The TH value should be larger than 6.7 min/mm² to ensure satisfactory resistance to fish scaling [4].
The resistance to fish scaling can be improved by increasing the number of hydrogen traps. Hydrogen traps are generally classified as reversible or irreversible traps depending on their binding energy with hydrogen atoms [5,6]. A reversible trap is one from which a hydrogen atom can easily escape owing to fluctuations in thermal energy [7]. It is known that grain boundaries, dislocations, vacancies, and microvoids have low binding energy with hydrogen atoms and are considered as reversible traps. Hydrogen atoms in these sites are diffusible, and these traps have an influence on the effective hydrogen diffusivity. Irreversible traps are sites with high binding energy, and thus the trapped hydrogen is considered as non-diffusible [8]. (Ti, Nb)(C, N), TiC, TiN, NbC, VC and non-metallic inclusions are considered as irreversible traps because of their high binding energy.
Ti is usually added to ultra-low carbon steel to improve the resistance to fish scaling. The main irreversible hydrogen traps in ultra-low carbon Ti-bearing steel are TiN, TiC, TiS and Ti4C2S2 particles, which obviously influence hydrogen diffusivity [2]. There are several factors which affect the precipitation behavior of titanium precipitates, i.e., chemical composition, finishing temperature, coiling temperature, annealing temperature and so on. It is reported that Ti and S content will affect the type and fraction of precipitates [9]. Mo can also influence the precipitation behavior of TiC particles [10]. It was found that a low finishing temperature was beneficial to the occurrence of strain-induced precipitation of TiC [11]. Kim et al. [12] and Xu et al. [13] found that interphase precipitation took place at high coiling temperature, while dispersed precipitation was formed at low coiling temperature. The annealing temperature will influence the size, distribution and number of precipitates [14,15]. The characteristics of precipitates will influence their binding energy and hydrogen storage capacity, so the annealing temperature will affect the resistance to fish scaling of Ti-bearing steel. Much is still unknown about the effect of batch annealing temperature on the microstructure and resistance to fish scaling of ultra-low carbon Ti-bearing enamel steel. It is essential to perform relevant research work to promote the application of ultra-low carbon Ti-bearing enamel steel.
Materials and Methods
The experimental steel used in this study was produced by Shougang Group, and the chemical composition is listed in Table 1. The slab was reheated to 1250 °C for 2 h, and then hot rolled to a sheet of 5 mm at a finishing temperature of 900 °C. The sheet was water cooled to a coiling temperature of 700 °C. After acid pickling, the sheet was cold rolled to 0.8 mm in thickness. The cold rolled sheets were batch annealed at 580, 630, 680, and 730 °C for 5 h, and the schematic of the batch annealing process is shown in Figure 1. These batch annealing temperatures were chosen to obtain different sized ferrite grains and precipitates, so the effects of ferrite grain boundaries and precipitates on resistance to fish scaling can be investigated by a hydrogen permeation test.
The metallographic specimens and tensile specimens were cut along the transverse direction. Microstructure observations were conducted on an optical microscope (Leica, Wetzlar, Hesse, Germany). The precipitates in the specimens were extracted on carbon replicas and examined by a Tecnai G2 F20 transmission electron microscope (TEM) (FEI, Houston, TX, USA) with an energy dispersive spectrometer (EDS). The dog-bone tensile specimens have a 12.5 mm width and 80 mm gauge length. A tensile test was performed on a computer material test (CMT) 5105 testing machine (MTS, Shenzhen, Guangdong, China) at room temperature, and the crosshead speed was 3 mm/min.
Hydrogen permeation specimens of 50 mm × 80 mm × 0.8 mm were prepared. Both sides of the specimens were ground with 1200 grit abrasive paper, and then the specimens were washed with acetone and rinsed with distilled water. The hydrogen permeation test was conducted at room temperature in a Devanathan-Stachurski type cell [16]. The hydrogen permeation setup was composed of two parts, separated by the specimen into the cathodic cell and the anodic cell. The anodic cell was filled with 0.2 N NaOH solution, and a constant anodic potential of 200 mV was applied. Once the background current was less than 0.1 µA/cm², the hydrogen permeation test was started [17], and the cathodic cell was filled with 0.5 N H2SO4 + 0.22 g/L H2NCSNH2 solution immediately. The charging current density was maintained at 1 mA/cm², which was low enough to avoid damage to the steel sheet. In the test, hydrogen is produced electrolytically in the charging cell by cathodic polarization [18]. H2NCSNH2 facilitated hydrogen pick-up by promoting the breakdown of molecular hydrogen. High-purity nitrogen gas was purged prior to cathodic polarization and during the entire test to remove the dissolved oxygen, which would otherwise contribute to the anodic current [19]. The measured anodic current was proportional to the hydrogen flow rate out of the specimen [20], and it was recorded using an automatic data-acquisition system. Once the measured anodic current reached a steady state, the hydrogen permeation test could be finished.
The hydrogen permeation time (t_b) was determined by plotting the cumulative anodic current through the specimen and extrapolating its asymptote to intercept the horizontal axis [21]. The hydrogen permeation value can be calculated by the following formula: TH = t_b/d², where TH is the hydrogen permeation value, t_b is the hydrogen permeation time in minutes, and d is the sheet thickness in mm.
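As a worked illustration of this calculation, the following sketch computes the TH value from a permeation time and sheet thickness and checks it against the 6.7 min/mm² criterion quoted in the Introduction. The numbers are hypothetical, and the optional effective-diffusivity estimate assumes the standard time-lag relation D = d²/(6·t_lag) rather than reproducing the authors' own treatment.

```python
# Illustrative sketch (not the authors' code): hydrogen permeation value from a
# Devanathan-Stachurski test, plus an optional effective-diffusivity estimate.

def th_value(t_b_min: float, thickness_mm: float) -> float:
    """TH = t_b / d**2, in min/mm**2 (t_b in minutes, d in mm)."""
    return t_b_min / thickness_mm ** 2

def effective_diffusivity(t_lag_min: float, thickness_mm: float) -> float:
    """Time-lag estimate D = d**2 / (6 * t_lag), in mm**2/min (assumed relation)."""
    return thickness_mm ** 2 / (6.0 * t_lag_min)

d = 0.8     # sheet thickness in mm, as used in this study
t_b = 19.2  # hypothetical permeation time in minutes
th = th_value(t_b, d)
print(f"TH = {th:.1f} min/mm^2 -> "
      f"{'meets' if th > 6.7 else 'fails'} the 6.7 min/mm^2 criterion")
print(f"D_eff ~ {effective_diffusivity(t_b, d):.4f} mm^2/min")
```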
Microstructure
The optical micrographs of the experimental steel batch annealed at different temperatures are presented in Figure 2. As can be seen in Figure 2a, shear bands (indicated by arrows) along the rolling direction and small recrystallized grains were observed at a batch annealing temperature of 580 °C, which reveals that partial recrystallization has occurred at this temperature. At an annealing temperature of 580 °C, there are a large number of dislocations in the shear bands due to the incomplete recrystallization. When the annealing temperature is higher than 630 °C, the microstructures are equiaxed ferrite grains, which means that full recrystallization has occurred in the annealing process.
The average size of the ferrite grain was measured by the line intercept method, and the effect of the batch annealing temperature on the average size of the ferrite grain is shown in Figure 3. The average size of the ferrite grain increased when the batch annealing temperature increased from 630 to 730 °C. There is not a large number of dislocations in the specimens annealed at 630, 680 and 730 °C due to the full recrystallization.

The precipitates of the experimental steel annealed at different temperatures exhibit the same features by TEM observations. As shown in Figure 4, there are TiC, TiN and Ti4C2S2 particles in all the specimens. Figure 4a shows typical TEM morphologies of TiN and TiC. A large cubic particle contains Ti, C and N (as shown in Figure 4c), which indicates that the particle is TiN due to its coarse and cubic shape; the peak characteristic of the C element can be ignored because of the carbon extraction replicas. Large TiN particles formed during the solidification process; they are rare in the specimens, and their sizes change little at different annealing temperatures. A small elliptical particle contains Ti and C (as shown in Figure 4d), which is considered to be TiC. In the experimental steel, TiC is finer and has a denser distribution than TiN and Ti4C2S2. The EDS spectrum of the particle in Figure 4b shows that the atomic ratio of Ti to S is close to 2 (as shown in Figure 4e), and the particle can be identified as Ti4C2S2 [9]. The distribution of Ti4C2S2 is very inhomogeneous, often in the shape of strings or clusters (as shown in Figure 4b).
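The ferrite grain size quoted above was obtained with the line intercept method. A minimal sketch of that estimate is given below: the total length of the test lines is divided by the number of grain-boundary intercepts. The numerical values are invented for illustration, and refinements such as magnification correction or ASTM E112-style statistics are omitted.

```python
# Minimal sketch of the line intercept (mean linear intercept) grain size estimate.

def mean_intercept_length_um(total_line_length_um: float, n_intercepts: int) -> float:
    """Average grain size ~ total test-line length / number of boundary intercepts."""
    if n_intercepts <= 0:
        raise ValueError("need at least one intercept")
    return total_line_length_um / n_intercepts

# Hypothetical example: five 200-um test lines crossing 62 grain boundaries in total.
d_avg = mean_intercept_length_um(total_line_length_um=5 * 200.0, n_intercepts=62)
print(f"Average ferrite grain size ~ {d_avg:.1f} um")
```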
The chemical composition of ultra-low carbon enamel steel is similar to that of ultra-low carbon Ti-bearing steel, but the former contains higher Ti and S contents to ensure that sufficient Ti4C2S2 particles can be formed [22]. TiS was not observed in any of the specimens. Figure 5 is a schematic illustration of the stability of various Ti compounds in Interstitial-Free (IF) steels as a function of the precipitation temperature [23]. The stability of the precipitates mainly depends on the temperature and the chemical composition. After the formation of TiN, TiS precipitation is likely to take place. However, the stability of TiS is quite low, and TiS decomposes during hot rolling and the coiling process [23]. TiS may also change to Ti4C2S2 during the annealing process [9,24]. Because TiN particles are large and rare, their influence on hydrogen diffusion behavior can be ignored, and the main irreversible hydrogen traps are fine TiC and coarse Ti4C2S2. TiC and Ti4C2S2 are both considered to precipitate in the batch annealing process [25]. The effect of batch annealing temperature on the average sizes of TiC and Ti4C2S2 particles is shown in Figure 6. It can be seen that the average sizes of TiC and Ti4C2S2 particles increased with increasing batch annealing temperature, which means that TiC and Ti4C2S2 particles coarsened in the annealing process.
Mechanical Properties and Resistance to Fish Scaling
The relationship between the mechanical properties and the batch annealing temperature of ultra-low carbon enamel steel is shown in Figure 7. At the annealing temperature of 580 °C, the yield strength and tensile strength are obviously higher than those of the experimental steel annealed at other temperatures; this is because of the incomplete recrystallization at 580 °C. At annealing temperatures of 630, 680, and 730 °C, the yield strength, tensile strength, elongation, n-value and r-value do not vary very much. That is to say, the mechanical properties of the experimental steel change little once recrystallization has finished. The hydrogen permeation curves (charge quantity vs. time curves) are shown in Figure 8. The TH values of the experimental steel annealed at 580, 630, 680 and 730 °C are calculated to be 30.0 ± 3.4, 18.6 ± 1.1, 13.5 ± 1.7 and 10.4 ± 1.6 min/mm², respectively. As shown in Figure 9, the TH value decreases with increasing batch annealing temperature.
The TH value is related to the hydrogen diffusion coefficient; a high TH value means low hydrogen diffusivity. Both reversible traps and irreversible traps can reduce the hydrogen diffusion coefficient. At annealing temperatures of 630, 680 and 730 °C, there are few dislocations in the specimen due to the recrystallization, so the main reversible hydrogen traps are ferrite grain boundaries, while the main irreversible hydrogen traps are TiC and Ti4C2S2 particles. With increasing annealing temperature, the average size of the ferrite grain and the mean sizes of the TiC and Ti4C2S2 particles increase. The ferrite grain boundary is a kind of reversible trap; it traps hydrogen atoms in the hydrogen charging process and contributes significantly to hydrogen trapping (TH value) [26,27]. The hydrogen storage capacity of the grain boundaries is generally proportional to the grain boundary area [28]. The growth of the ferrite grain leads to a reduction of the grain boundary area, which is one reason for the reduction of the resistance to fish scaling. Takahashi et al. [29] observed the hydrogen trapping sites of nano-sized TiC by using a three-dimensional atom probe (3DAP) for the first time. They revealed that the broad interface between the matrix and TiC was the main trapping site. Wei et al. [30] found that incoherent TiC particles have higher binding energy than coherent TiC particles, but they are not able to trap hydrogen during cathodic charging at room temperature due to their high energy barrier for trapping. Small TiC and Ti4C2S2 particles at low annealing temperature have more interfaces, which results in a decrease of hydrogen diffusivity. The coarsening of precipitates is the other reason for the reduction of resistance to fish scaling. Yuan [2] also indicated that small precipitates can lead to low hydrogen diffusivity. Because irreversible traps always lead to a great decrease in hydrogen diffusivity [31], a great number of small precipitates are favorable to the enhancement of the resistance to fish scaling of enamel steel.

At an annealing temperature of 580 °C, there are many more dislocations and grain boundaries in the specimen due to the incomplete recrystallization. Dislocation is another kind of reversible trap, and it can reduce the hydrogen diffusion coefficient of steel [32]. A decrease of the hydrogen diffusion coefficient means an increase of the TH value. More dislocations can lead to a higher TH value. The presence of a large number of dislocations is the reason why the TH value at 580 °C is higher than those of the experimental steel annealed at other temperatures.
Conclusions
(1) The main irreversible hydrogen traps of ultra-low carbon enamel steel are fine TiC and coarse Ti4C2S2 particles. The reversible hydrogen traps are grain boundaries and dislocations. The mean sizes of TiC and Ti4C2S2 particles increase with increasing batch annealing temperature. (2) Both reversible and irreversible traps influence the resistance to fish scaling. The resistance to fish scaling can be enhanced by increasing the number of reversible and irreversible traps.
The resistance to fish scaling decreases with increasing batch annealing temperature, which is caused by the growth of the ferrite grain and the coarsening of TiC and Ti4C2S2 particles.
Figure 1. Schematic of the batch annealing process of ultra-low carbon enamel steel.
Figure 3. Effect of the batch annealing temperature on the average size of ferrite grain of ultra-low carbon enamel steel.
Figure 5. Schematic illustration of the stability of various Ti compounds in Interstitial-Free (IF) steels as a function of the precipitation temperature.
Figure 6. Effect of the batch annealing temperature on the average sizes of TiC (a) and Ti4C2S2 (b).
Figure 7. Relationship between the mechanical properties and batch annealing temperature of ultra-low carbon enamel steel. (a) Strength and elongation; (b) n and r value.
Figure 9. Effect of the batch annealing temperature on the TH value of ultra-low carbon enamel steel.
Table 1. Chemical composition of experimental steel (in wt %). | 2017-05-07T09:34:06.371Z | 2017-02-09T00:00:00.000 | {
"year": 2017,
"sha1": "9566df0417aa98191dc1b9a6eef02c58b0bd899a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4701/7/2/51/pdf?version=1486955221",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9566df0417aa98191dc1b9a6eef02c58b0bd899a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
219476867 | pes2o/s2orc | v3-fos-license | Level and ecological risk assessment of heavy metals in old landfill in Bayelsa state, Nigeria
This study assessed the ecological fate of heavy metals within the vicinity of an area formerly used as a dump site in Igbogene, Bayelsa State. A soil auger was used to collect samples at 0-20 cm depth at 50, 100 and 150 m distances from the four cardinal points, viz: north (east and west) and south (east and west). The soil samples were sieved, ashed, digested and analyzed using atomic absorption spectrometry. The heavy metals results ranged from 646.73 to 715.33 mg/kg (iron), 59.30 to 73.05 mg/kg (manganese), 83.20 to 114.18 mg/kg (zinc), 10.67 to 15.95 mg/kg (copper), 7.70 to 9.64 mg/kg (chromium), 11.56 to 14.48 mg/kg (cadmium), 10.09 to 13.86 mg/kg (lead), 4.57 to 6.33 mg/kg (nickel) and 3.52 to 4.92 mg/kg (vanadium). Statistically there was no significant variation (p>0.05) across the various distances for each of the metals studied, but an apparent decline in values exists as the distance away from the landfill increases. In addition, each of the metals showed a positive significant correlation with each other at p<0.01. Cluster analysis revealed two main clusters, comprising samples from each of the latitude directions: the southern direction (east and west) and the northern direction (east and west). Pollution indices were higher in samples obtained from the southern direction (west and east) compared to the northern area (west and east), but generally they ranged from no pollution to moderate pollution. Positive quantification of contamination indicates that pollution due to anthropogenic activities occurred in a few instances. The ecological risk index showed low risk/fate of the heavy metals in the studied area.
INTRODUCTION
Environmental problems appear to be on the increase globally, which undoubtedly impacts on environmental sustainability. Components of the environment mostly affected are the soil or land, air, water and sediments. These components are largely influenced by anthropogenic activities and to a lesser extent by natural processes (Izah and Angaye, 2016). Several human activities contribute to environmental pollution including poor waste management and effluent discharge during industrial processes.
Wastes are typically generated in different processing units, including food production such as oil palm and cassava processing (Nnaji and Uzoekwe, 2018; Izah and Ohimain, 2015), markets (Ben-Eledo et al., 2017a, b), and manufacturing/processing units. Wastes typically exist in liquid form, that is effluents, and solid form. Of the entire waste stream, municipal solid wastes, which come from either domestic or industrial units, are a source of concern to environmentalists. Basically, domestic wastes come from households and include food remains, laundry, etc. On the other hand, wastes from food vendors/restaurants, auto-mechanic shops, medical facilities, schools, construction and industries are classified as commercial wastes. Wastes can also be classified based on their noxiousness to the environment and its associated biota, biodegradation potential, physical nature or characteristics, and source (Nnaji and Uzoekwe, 2018).
The soil receives most of the solid waste stream generated from human activities. By their nature, some are easily degradable wastes, such as food remains, through the activities of indigenous microbes, while several others may be recalcitrant to degradation (such as pesticides) or non-biodegradable (such as glass). As such, wastes have the tendency to alter the characteristics of the receiving soil. The alteration depends mainly on the exposure rate and toxicity, and on other factors such as climatic, physical and chemical conditions, soil porosity, pH, temperature, organic matter, moisture and the indigenous microbes that may cause transformation of the waste.
Municipal solid wastes are mainly managed through open dumping, landfill systems, incineration, composting and recycling. Among these methods of household solid waste management, landfilling is the commonest. Heavy metals have been detected in landfills (Amadi et al., 2012; Njoku, 2014; Anikwe and Nwobodo, 2001; Oluseyi et al., 2014; Buteh et al., 2013; Akinbile, 2012). Heavy metals such as mercury, arsenic, cadmium and lead do not have any known biological functions, while the essential ones such as chromium, copper, zinc, manganese and iron have biological functions in living organisms but can be detrimental when their concentrations exceed certain limits.
These heavy metals occur as different chemical species, and their transport behaviour differs depending on the substrate, such as soil, plant or water, and on environmental factors such as temperature and pH. Heavy metals have the tendency to persist in the environment for long periods. Ecological risk assessment and pollution indices are some of the tools used in assessing the fate of metals in the environment. Therefore, this study is aimed at assessing the level and ecological risk of heavy metals around a landfill in Igbogene community, Bayelsa State, Nigeria.
Study area
Igbogene is one of the adjoining communities that make up the Bayelsa State capital in Yenagoa Local Government Area of Bayelsa State. Authors have described the region as a sedimentary basin (Kigigha et al., 2018; Aghoghovwia et al., 2018). A tributary of the Nun River known as Epie creek passes through the community. Like many other parts of Bayelsa State, the creek is a major recipient of solid wastes from human activities, especially among households close to the surface water. Two major climatic conditions are common in the study area: the wet season (which normally starts in April and ends in October) and the dry season (which starts in November and ends in March of the following year). The relative humidity and atmospheric temperature of the area have been reported to be around 50-95% and 28 ± 6 °C, respectively, all year round.
Sampling techniques
The soil samples were collected from the east and west of the southern and northern areas at the various distances (50 m, 100 m, 150 m) from the old dumpsite. The samples were collected with a soil auger at 0-20 cm depth. The samples were packaged and labeled accordingly before being transported to the laboratory for analyses.
Sample preparation and analysis using atomic absorption spectrophotometer
The samples were air dried and sieved through a 2.0 mm mesh. About 2 g of the sample was placed in a clean crucible and placed in a muffle furnace pre-heated at 200 °C for 30 min, and then ashed for 4 h at 480 °C. The sample was removed from the furnace and cooled. The sample was further digested with concentrated nitric acid by adding 2 ml of 5 M HNO3 and evaporating to dryness on a sand bath. The sample was again placed in a cooled furnace and heated at 400 °C for 15 min. The cooled sample was moistened with four drops of distilled water. 2 ml of concentrated HCl was added and the sample evaporated to dryness and removed; thereafter, 5 ml of 2 M HCl was added and the container swirled. The solution was filtered using Whatman filter paper No. 42 and transferred to a 25 ml volumetric flask. The filtered solution was made up to the mark. The solution was aspirated into an atomic absorption spectrophotometer (Model: PyeUnicam 969) and the concentrations of the various metals were measured at varying wavelengths (Isaac and Kerber, 1971; Aigberua, 2015).
Quality assurance and quality control
The reagents used were of analytical grade. The reproducibility and reliability of the measurements were ensured by calibrating the instruments used and determining procedural blanks.
Environmental risk assessment
Pollution indices and ecological risk were used to ascertain the risk associated with metals in the study area. The background or reference value was the geometric mean of the data, an approach which has been widely used in ecological risk assessments (Izah et al., 2017a, b, c; Bhutiani et al., 2017; Aghoghovwia et al., 2018; Uzoekwe and Aigberua, 2019). The factors considered in this study include the contamination factor, degree of contamination, pollution load index, index of geoaccumulation, quantification of contamination and ecological risk index. The contamination factor (CF) and contamination degree were calculated based on the methods previously developed by Hakanson (1980) and applied by Bhutiani et al. (2017). The obtained values were classified based on the following criteria, viz: CF < 1 (low contamination); 1 ≤ CF < 3 (moderate contamination); 3 ≤ CF < 6 (considerable contamination); CF ≥ 6 (very high contamination) for the contamination factor, and CD < 8 (low risk); 8 ≤ CD < 16 (moderate risk); 16 ≤ CD < 32 (considerable risk); CD > 32 (very high risk) for the degree of contamination. The pollution load index was calculated based on the method previously described by Tomlinson et al. (1980), which has also been applied by Bhutiani et al. (2017). The resulting values were ranked as PLI < 1 (no pollution); 1 < PLI < 2 (moderate pollution); 2 < PLI < 3 (heavy pollution); 3 < PLI (extremely heavy pollution).
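These indices can be reproduced with a few lines of code. The sketch below follows the commonly used definitions (CF as the ratio of the measured concentration to the background value, CD as the sum of the CFs, and PLI as the geometric mean of the CFs), with the background taken as the geometric mean of the data as stated above; it is an illustrative reimplementation rather than the computation actually used in this study, and the example concentrations are hypothetical.

```python
# Illustrative computation of contamination factor (CF), degree of contamination (CD)
# and pollution load index (PLI). Example concentrations are hypothetical.
from math import prod

def geometric_mean(values):
    return prod(values) ** (1.0 / len(values))

def contamination_factor(conc, background):
    return conc / background

def classify_cf(cf):
    if cf < 1: return "low contamination"
    if cf < 3: return "moderate contamination"
    if cf < 6: return "considerable contamination"
    return "very high contamination"

def classify_pli(pli):
    if pli < 1: return "no pollution"
    if pli < 2: return "moderate pollution"
    if pli < 3: return "heavy pollution"
    return "extremely heavy pollution"

# Hypothetical site concentrations (mg/kg) for a few metals at one sampling point,
# and hypothetical background values (geometric means over all sampling points).
site = {"Zn": 110.0, "Cu": 15.0, "Cd": 14.0, "Pb": 13.0}
background = {"Zn": 95.0, "Cu": 12.5, "Cd": 12.8, "Pb": 11.5}

cfs = {m: contamination_factor(site[m], background[m]) for m in site}
cd = sum(cfs.values())                    # degree of contamination
pli = geometric_mean(list(cfs.values()))  # pollution load index

for m, cf in cfs.items():
    print(f"{m}: CF = {cf:.2f} ({classify_cf(cf)})")
print(f"CD = {cd:.2f}, PLI = {pli:.2f} ({classify_pli(pli)})")
```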
Statistical analysis
SPSS version 20 was used to carry out the statistical analysis. Analyses were carried out in replicate, and the result values were presented as mean ± standard error. One-way analysis of variance was carried out at p = 0.05, and Duncan statistics were used to show significant differences between the various locations. The Spearman rho correlation matrix was used to show the relationship between the metals. Hierarchical cluster analysis of the heavy metals concentration and sampling points was carried out at Euclidean distance.
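The workflow described above (one-way ANOVA across distances, Spearman rho correlations between metals, and hierarchical clustering with Euclidean distance) was run in SPSS. A rough open-source equivalent is sketched below with pandas and SciPy purely to illustrate the analysis steps; the input file and column names are placeholders, not the study's actual data.

```python
# Rough open-source equivalent of the SPSS workflow: one-way ANOVA across distances,
# Spearman correlations between metals, and hierarchical clustering (Euclidean).
import pandas as pd
from scipy import stats
from scipy.cluster.hierarchy import linkage, dendrogram

df = pd.read_csv("landfill_soil_metals.csv")   # hypothetical input file
metals = ["Fe", "Mn", "Zn", "Cu", "Cr", "Cd", "Pb", "Ni", "V"]

# One-way ANOVA for each metal across the 50, 100 and 150 m distances.
for m in metals:
    groups = [g[m].values for _, g in df.groupby("distance_m")]
    f, p = stats.f_oneway(*groups)
    print(f"{m}: F = {f:.2f}, p = {p:.3f}")

# Spearman rho correlation matrix between the metals.
rho = df[metals].corr(method="spearman")
print(rho.round(2))

# Hierarchical clustering of sampling points (Ward linkage on Euclidean distance).
Z = linkage(df.groupby("location")[metals].mean(), method="ward", metric="euclidean")
dendrogram(Z)  # plotting requires matplotlib
```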
RESULTS AND DISCUSSION
The levels of heavy metals in an area formerly used as a landfill in Igbogene community of Bayelsa State, Nigeria are presented in Table 1. The mean concentrations at the 50, 100 and 150 m distances were ... mg/kg, respectively (chromium); 6.33±2.11, 5.18±1.67 and 4.57±1.70 mg/kg, respectively (nickel); and 4.92±1.52, 4.03±1.17 and 3.52±1.31 mg/kg, respectively, for vanadium. Statistically, there were no significant deviations (p>0.05) across the three distances for each of the heavy metals. However, apparent differences exist for each of the metal concentrations at the various distances, which decrease as the distance away from the old landfill increases. Basically, heavy metals are metals and metalloids that have relatively high densities (about 5 times greater than that of water), atomic weights, or atomic numbers (Izah and Angaye, 2016). These heavy metals have been reported in soil from diverse anthropogenic activities. Possibly due to the ability of vegetation to accumulate heavy metals, various levels have similarly been reported in plants (Izah and Aigberua, 2017; Ogamba et al., 2017, 2015). Variation in the concentration of heavy metals has been reported around municipal solid waste dumpsites in different locations in Nigeria, including Imo State (Amadi et al., 2012), Ebonyi State (Njoku, 2014; Anikwe and Nwobodo, 2001), Lagos State (Oluseyi et al., 2014), Bauchi State (Buteh et al., 2013) and Ondo State (Akinbile, 2012). The variation between these authors' reports and the values recorded in this study is due to the type of waste in the dumpsite, the age and frequency of use of the dumpsite, as well as the geology of the area. This is because most of the heavy metals have the tendency to occur naturally in our environment. The absence of significant variation and the apparent decline in heavy metal concentration at distances away from the dumpsite suggest that metals have leached into the soil; metals are mobile in the soil, and thus leaching is a common occurrence and plays a role in determining the fate of metals in the environment. Table 2 shows the Spearman rho correlation matrix of heavy metals concentration in the old landfill in Igbogene, Bayelsa State, Nigeria. All the metals showed a positive significant relationship with each other at p<0.01. This is an indication that the metals in the soil studied may have come from a similar source. The positive relationship of the metals indicates common sources, mutual dependence and identical behavior during transport (Jiang et al., 2014; Izah et al., 2017a). Figure 1 shows the hierarchical cluster analysis of the heavy metals concentration from the landfill in Igbogene, Bayelsa State, Nigeria based on dependent variables. Two major clusters were formed, with cluster 1 comprising nickel, vanadium, copper, lead, cadmium, chromium, manganese and zinc at equal distances, and cluster 2 with only iron. Figure 2 shows the hierarchical cluster analysis of the heavy metals concentration from the landfill in Igbogene, Bayelsa State, Nigeria based on locations. Two main clusters were formed, with cluster 1 comprising the North East and North West samples, while cluster 2 consists of the South West and South East samples.
Again, within these clusters, sub-clusters were also formed. Basically, within a major cluster, close distances are an indication of a significant relationship (Guan et al., 2014; Izah et al., 2017). Based on the formation of clusters with respect to the cardinal points (south and north), there is an indication of a similar mobility pattern of heavy metals in the soil. Table 3 shows the contamination factor, degree of contamination and pollution load index of the heavy metals from the old landfill area in Igbogene, Bayelsa State, Nigeria. The contamination factor ranged from low contamination (CF < 1) to moderate contamination (1 ≤ CF < 3). The contamination factors for samples from the north east and west showed low contamination, except for copper at the 50 m distance of the north west, and nickel and vanadium at 50, 100 and 150 m of the north west, which showed moderate contamination. The samples from the south east and west showed moderate contamination except for zinc, copper, nickel and vanadium at 150 m of the south east. The contamination degrees were in the range of low risk (CD < 8) to considerable risk (16 ≤ CD < 32). The contamination degree for samples in the north across the various distances depicts low risk, except for the 50 m north west distance, which showed moderate risk (8 ≤ CD < 16). All samples from the south depict moderate risk except for the 50 m south west distance, which showed considerable risk. The pollution load index showed no pollution (PLI < 1) to moderate pollution (1 < PLI < 2). However, the north east and west at the varying distances showed no pollution, while the south east and west at the varying distances showed moderate pollution. From the indices, the northern region had lower concentrations of heavy metals compared to the southern area for each of the heavy metals, including iron (Figures 3-11). The trend suggests the mobility pattern of heavy metals in the area. The pollution indices (pollution load index, degree of contamination and contamination factor) observed in this study are in consonance with the work of other authors (Aghoghovwia et al., 2018; Bhutiani et al., 2017). Table 4 shows the index of geoaccumulation of heavy metals from the old landfill area in Igbogene, Bayelsa State, Nigeria. The index of geoaccumulation ranged from no contamination (Igeo ≤ 0) to moderate contamination (0 < Igeo ≤ 1). For all the heavy metals studied at the varying distances in the northern area, the index of geoaccumulation showed no contamination. However, the index of geoaccumulation was moderate at the 50 m distance in the south east direction for copper, chromium and cadmium; at the 100 m distance in the south east direction for chromium and cadmium; at the 50 m distance in the south west direction for zinc, copper, chromium, cadmium, lead, nickel and vanadium; at 100 m in the south west direction for copper, chromium, cadmium and lead; and at 150 m in the south west direction for chromium, cadmium and lead. This is an indication of the mobility pattern of heavy metals in the study area. The trend of the index of geoaccumulation in this study has some similarity with the work of other authors (Bhutiani et al., 2017; Izah et al., 2017c).
Igeo ≤ 0 (uncontaminated), 0 < Igeo ≤ 1 (uncontaminated to moderately contaminated), 1 < Igeo ≤ 2 (moderately contaminated), 2 < Igeo < 3 (moderately to heavily contaminated), 3 < Igeo < 4 (heavily contaminated), 4 < Igeo < 5 (heavily to extremely contaminated), Igeo ≥ 5 (extremely contaminated).

For the southern area, the quantification of contamination was positive except for a few metals (zinc, copper, lead, nickel and vanadium at 150 m of the south east direction) that showed negative quantification of contamination. Authors have reported that positive quantification of contamination suggests pollution due to anthropogenic sources (Bhutiani et al., 2017; Izah et al., 2017c; Aghoghovwia et al., 2018; Uzoekwe and Aigberua, 2019). Table 6 shows the ecological risk assessment of heavy metals from the old landfill area in Igbogene, Bayelsa State, Nigeria. The ecological risk was in the range of low risk (Er < 40) to moderate risk (40 ≤ Er < 80). Basically, samples from both the southern and northern directions showed low ecological risk for the heavy metals (manganese, zinc, copper, chromium, cadmium, lead and nickel), except for cadmium at 50, 100 and 150 m in the south west and at 50 and 100 m in the south east direction. Furthermore, the ecological risk index showed low risk (ERI < 150). The trend of cadmium observed in this study is in accordance with previous works by other authors (Aghoghovwia et al., 2018; Zhu et al., 2012; Uzoekwe and Aigberua, 2019; Todorova et al., 2016). The concentration of cadmium in the studied area is high, which is an indication of the effect of anthropogenic activities.
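For completeness, the index of geoaccumulation and the ecological risk index discussed above can be computed in the same way. The sketch below assumes the widely cited definitions Igeo = log2(C/(1.5·B)) and Er = Tr·(C/B), with ERI taken as the sum of Er and commonly used toxic-response factors; these standard forms are assumed here rather than taken from the text, and the input concentrations are hypothetical.

```python
# Illustrative computation of the index of geoaccumulation (Igeo), single-metal
# ecological risk (Er) and the ecological risk index (ERI). Standard definitions
# are assumed: Igeo = log2(C / (1.5 * B)), Er = Tr * (C / B), ERI = sum of Er.
from math import log2

# Commonly used toxic-response factors (Hakanson); Ni is sometimes given as 6.
TR = {"Zn": 1, "Cu": 5, "Cr": 2, "Cd": 30, "Pb": 5, "Ni": 5, "Mn": 1}

def igeo(conc, background):
    return log2(conc / (1.5 * background))

def ecological_risk(metal, conc, background):
    return TR[metal] * (conc / background)

# Hypothetical concentrations (mg/kg) and background (geometric-mean) values.
site = {"Zn": 110.0, "Cu": 15.0, "Cd": 14.0, "Pb": 13.0}
background = {"Zn": 95.0, "Cu": 12.5, "Cd": 12.8, "Pb": 11.5}

eri = 0.0
for m in site:
    ig = igeo(site[m], background[m])
    er = ecological_risk(m, site[m], background[m])
    eri += er
    print(f"{m}: Igeo = {ig:+.2f}, Er = {er:.1f}")
print(f"ERI = {eri:.1f} ({'low risk' if eri < 150 else 'higher risk'})")
```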
Conclusion
Management of municipal waste is problematic in many developing nations. The commonest means of managing wastes is through dumping in a landfill, which is set ablaze during the dry season. In some areas, the landfill is moved to another location, probably due to developmental works/activities. This study assessed the ecological risk of heavy metals in an area formerly used as a landfill in Igbogene, Bayelsa State, Nigeria. The study found that the concentration of the individual metals apparently decreased as the distance away from the dumpsite increased. In addition, it was found that the various metal levels and pollution indices were higher in samples obtained from the southern direction (west and east) compared to the northern area (west and east), which gives insight into the metal mobility pattern in the area. Overall, the positive quantification of contamination suggests pollution due to anthropogenic activities in the area, while the ecological index suggests low risk/fate.
CONFLICT OF INTERESTS
The author has not declared any conflict of interests.
ACKNOWLEDGEMENT
The author is grateful to Macgil Environmental and Engineering Co. Ltd for the AAS used during this analysis and also appreciates Mr. Egeonu Ama of the Chemistry Department, Federal University, Otuoke for his assistance in collecting the samples. | 2020-05-21T09:15:53.543Z | 2020-05-31T00:00:00.000 | {
"year": 2020,
"sha1": "fd35d3912ce966de55acb759e22e0359aab68165",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JECE/article-full-text-pdf/27A314A63690.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "53c46ef9b6186cd3936e809efb23cd6f0c20f228",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
196553160 | pes2o/s2orc | v3-fos-license | Assessment of drug and poison information centers in Saudi Arabia
Abstract Background: Drug Information Centers (DICs) and Drug and Poison Information Centers (DPICs) in Saudi Arabia are pharmacy-based departments that provide drug information services for prescribers and/or the public. We sought to evaluate their current role in handling poisoning cases. Methods: We conducted a cross-sectional survey of all DICs and DPICs in Riyadh and included 17 potential respondent centers. We developed a brief questionnaire with nine questions about DIC and DPIC resources. Results: The response rate was 82%. Most responding centers provide service only during daytime hours. Three provide services on weekends, and five have staff on-call after business hours. Handling of poisoning cases is not available in five centers and was found to be minimal among all other centers. Conclusion: DICs and DPICs provide limited poison information services in Saudi Arabia. In accord with the current Vision 2030 reform effort, establishing comprehensive poison control center services is a necessity for the health care system in the Kingdom of Saudi Arabia.
Introduction
Drug Information Centers (DICs) and Drug and Poison Information Centers (DPICs) in Saudi Arabia are pharmacy-based departments that function within their own institutions, mainly hospitals, and provide drug information services for prescribers and/or the public. In addition, DICs and DPICs may have a role in, but are not limited to, drug evaluation (formulary and non-formulary), off-label use of medications, education and training, research, quality improvement of drug usage to improve safety and cost-effectiveness, and adverse drug reaction monitoring.
The basis of DICs and DPICs is the provision of independent drug information [1]. The World Health Organization (WHO) refers to DICs as tools to disseminate unbiased drug information and thereby promote the rational use of drugs [1,2]. While it seems that DICs and DPICs in Saudi Arabia are well established within their institutions to serve the main purpose of their existence, their role in handling poisoning cases and toxicological aspects of drugs and toxins is not clear. We believe that medical and clinical toxicology as a field of practice is underserved within the Kingdom's huge health care system, which lacks well-structured and functioning poison control centers.
The WHO directory of poison centers, as of 30 September 2017 [3], lists 5 centers in the Kingdom (4 in the capital city of Riyadh and 1 in Dammam), 2 of which are actually DPICs. The role of the WHO-listed poison centers and the standards of their practice are not within the scope of this study, but the extreme lack of medical toxicology support and services in Saudi Arabia is noticeable and, in fact, calls into question their role in providing such an important health care service. We sought to evaluate the current role of DICs and DPICs in handling poisoning cases in the Kingdom of Saudi Arabia, with the aim of establishing an understanding of the available resources that can potentially be utilized in the future to promote medical and clinical toxicology services.
Study design and sampling frame
We conducted a cross-sectional survey-based study of all DICs and DPICs in Riyadh, the capital city of the Kingdom of Saudi Arabia during the period between February and June 2018. We included and anticipated 17 potential respondent centers.
Questionnaire design
We developed a multi-item questionnaire with an open-ended question format. The questions (Table 1) were relevant to the study objectives and expected to be answered reliably in a survey-based study design.
We introduced our study and tested survey questions for clearness, feasibility, and reliability by telephone with potential respondent centers.
Survey administration and data collection
We sent the survey questionnaire via e-mail to all 17 centers, with up to three follow-up contacts. Responses received were manually entered into a predesigned data collection sheet which captured the following data: name of the center, year of establishment, contact phone number, working hours, defined roles, number of employed staff and their qualifications, availability of services to health care providers and to the public, resources and databases utilized, and an estimate of the average number of poisoning cases served per month, if any.
Enrollment in the survey was voluntary, and the data collected did not contain any patient-related information. In addition, no questions in the survey inquired about any confidential institution-related information.
The Institutional Review Board of King Fahd Medical City, Riyadh, approved the study.
Results
The response rate to the survey was 82%. We received responses from 14 out of 17 potential respondent centers. All questions in the survey were answered by the responding centers. The names used officially by the respondent centers include drug information center (7 out of 14), drug and poison information center (6 out of 14), and drug information and medication utilization evaluation center (1 out of 14). The oldest center was established in 1978 and the most recent in 2017. The working hours vary but range from 7:30 am or 8 am as a starting time to 3:30 pm, 4 pm, or 5 pm as a closing time. After-hours, weekend, and on-call coverage were variable, and very few centers (3 out of 14) provide round-the-clock (24/7) services through on-call staff. The services provided are available only to health care providers in 3 centers; in all other centers they are available to both health care providers and the public. Handling requests or questions related to poisoning cases is not available in 5 centers; one of those centers defers all poisoning cases to a poison control center, a completely independent department within that specific health care institution. Other centers may occasionally receive and respond to poisoning cases. The estimated average number of cases per month is low, ranging from 1 case per month in some centers to 12-18 cases per month in another. Table 2 presents the descriptive statistics for the available services.
In terms of the number of staff employed and their qualifications, Table 3 presents a detailed description. No medical toxicologist is available to handle questions related to poisoning cases. However, pharmacists with toxicology subspecialty qualifications are part of the team in 4 centers.
Discussion
The nomenclature used for DICs and DPICs overlaps among the surveyed centers and does not consistently reflect their involvement in handling poisoning cases. Some DICs, which have no "poison" in their assigned names, still respond to questions about poisoning cases. On the other hand, some DPICs carry the word "poison" but are not involved in any poisoning cases. It would be very important for any future initiative promoting toxicology services to start with recommendations on nomenclature that describes the actual role of any drug and/or poison center. The toxicology field in Saudi Arabia is still not well developed or structured; therefore, reliable labeling of centers that may potentially contribute to advancing toxicology services can eliminate confusion in the future. Our study is, to our knowledge, the first to evaluate the readiness and availability of DICs and DPICs in handling poisoning cases, and it can therefore provide the groundwork for upcoming research and initiatives interested in promoting medical and clinical toxicology services in Saudi Arabia. Although we do not expect DICs to perform in the capacity of poison control centers, we suggest that DICs and DPICs may advance toxicology resources by training physicians, pharmacists, and nurses in toxicology and poison control services. Master's degree programs in Pharmacology and Toxicology are available in Saudi Arabia for pharmacists who are interested in advancing their toxicology qualifications. To our knowledge, there are no currently available local training programs in toxicology for physicians; medical toxicologists go abroad for board certification or clinical fellowship training. They can serve in the medical director role, bring clinical knowledge and experience, and provide clinical training for medical trainees.
Poisoned patients present to Emergency Departments at all times, night and day. Optimal and timely care requires 24/7 access to expert toxicology information services. The current staffing for Saudi DICs and DPICs leaves patients and physicians without ready access to poison information during evenings, nights, and weekends. Networking and collaboration among those centers may mitigate part of the problem (e.g. staff shortage) in providing services during the underserved periods where an on-call staff can answer requests and questions from collaborating institutions' health care providers.
The limitations of our study include the observational nature of the study design and the small sample size drawn from the DICs and DPICs in a single metropolitan area. We chose this sample in part because the services in the capital city likely represent the best level of services in the Kingdom. Another limitation in our sampling is that we did not include the other 3 poison control centers listed in the WHO directory of poison centers as of 30 September 2017 [3]. Our rationale was to include a homogeneous sample that could be studied efficiently with an observational study method in which the questions and the expected standards are almost comparable. The 2 DPICs listed in the WHO directory [3] as poison control centers met the inclusion criteria of our sample by being labeled as DPICs rather than poison control centers in their official nomenclature. However, the results from all included centers show that none of them meets the WHO or the American Association of Poison Control Centers (AAPCC) requirements and standards for poison control centers. For example, one of the founding requirements is the presence of a medical toxicologist as medical director of the poison control center, and our results showed that none of the surveyed centers has met this requirement.
Our study was not designed to assess the presumed 5 poison control centers listed in the WHO directory [3] from the perspective of compliance with, and operation under, the WHO and AAPCC requirements for poison control centers. Future research should consider evaluating those centers and identifying potential measures to improve their mission and operations so that they serve the purpose of their existence. Our overall impression is that they do not serve at the level they are supposed to, which is likely due to multiple factors and the complexity of the governing structure of the health care system in Saudi Arabia.
We believe that providing timely and efficient toxicology care, support, and information cannot be achieved without investing time, personnel, and financial resources to establish a nationwide poison control center or regional poison control centers. Poison information should be accessible round-the-clock (24/7) and available to both health care providers and the public. Until we get there, DICs and DPICs can contribute greatly to advancing and promoting toxicology care by improving their employees' qualifications, recruiting poison information specialists, connecting with medical toxicologists, increasing the time coverage of their services, and networking and collaborating with other centers to complement their services, which can positively impact patient care. Such an impact and contribution are timely and aligned with the current massive national reform (Vision 2030), including the advancement and privatization of the health care sector. Vision 2030 is an ambitious national reform program that extends to all vital sectors and promotes prosperity, progress, and stability of the economy and society.
Conclusion
DICs and DPICs in Saudi Arabia have a limited role in handling poisoning cases largely because they have limited work hours. In accord with the current Vision 2030 reform effort, establishing comprehensive poison control center services is a necessity for the health care system in the Kingdom of Saudi Arabia.
Disclosure statement
No potential conflict of interest was reported by the authors. | 2019-07-15T22:29:41.431Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "fcb86879668a01efc72fd2d6cfe414b61997519e",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/24734306.2019.1624410?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "64c462a29e510ad391ab9a5e6f493b8138b65126",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253735251 | pes2o/s2orc | v3-fos-license | QCD Equation of State of Dense Nuclear Matter from a Bayesian Analysis of Heavy-Ion Collision Data
Bayesian methods are used to constrain the density dependence of the QCD Equation of State (EoS) for dense nuclear matter using the data of mean transverse kinetic energy and elliptic flow of protons from heavy ion collisions (HIC), in the beam energy range $\sqrt{s_{\mathrm{NN}}}=2-10$ GeV. The analysis yields tight constraints on the density dependent EoS up to 4 times the nuclear saturation density. The extracted EoS yields good agreement with other observables measured in HIC experiments and with constraints from astrophysical observations, both of which were not used in the inference. The sensitivity of the inference to the choice of observables is also discussed.
The properties of dense and hot nuclear matter, governed by the strong interaction under quantum chromodynamics (QCD), are an unresolved, widely studied topic in high energy nuclear physics. First-principles lattice QCD studies, at vanishing and small baryon chemical potential, predict a smooth crossover transition from a hot gas of hadronic resonances to a chirally restored phase of strongly interacting quarks and gluons [1,2]. However, at high net baryon density, i.e. large chemical potential, direct lattice QCD simulations are at present not available due to the fermionic sign problem [3]. Therefore, QCD-motivated effective models as well as direct experimental evidence are employed to search for structures in the QCD phase diagram, such as a conjectured first- or second-order phase transition and a corresponding critical endpoint [4][5][6]. Diverse signals have been suggested over the last decades [7][8][9][10][11], but a conclusive picture has not yet emerged due to the lack of systematic studies relating all possible signals to an underlying dynamical description of the system, both consistently and quantitatively.
Recently, both machine learning and Bayesian inference methods have been employed to resolve this lack of unbiased quantitative studies. A Bayesian analysis has shown that the hadronic flow data in ultrarelativistic heavy-ion collisions at the LHC and RHIC favor an EoS similar to that calculated from lattice QCD at vanishing baryon density [12]. In the high density range where lattice QCD calculations are not available, deep learning models are able to distinguish scenarios with and without a phase transition using the final-state hadron spectra [13][14][15][16][17].
This work presents a Bayesian method to constrain quantitatively the high net baryon density EoS from data of intermediate beam energy heavy-ion collisions. A recent study has attempted such an analysis using a rough, piecewise-constant speed-of-sound parameterization of the high density EoS [18]. In this study, a more flexible parameterization of the density dependence of the EoS is used in a model which can incorporate this density dependent EoS in a consistent way and then make direct predictions for different observables.
In this work, the dynamic evolution of heavy-ion collisions is entirely described by the microscopic Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model [19,20], which is augmented by a density dependent EoS. This approach describes the whole system evolution consistently within one model. No parameters besides the EoS itself are varied here.
UrQMD is based on the propagation, binary scattering and decay of hadrons and their resonances. The density dependent EoS used in this model is realized through an effective density dependent potential entering the nonrelativistic Quantum Molecular Dynamics (QMD) [7,21,22] equations of motion, ṙ_i = ∂H/∂p_i, ṗ_i = −∂H/∂r_i.
Here H = Σ_i H_i is the total Hamiltonian of the system, including the kinetic energy and the total potential energy V = Σ_i V_i ≡ Σ_i V(n_B(r_i)). The equations of motion are solved given the potential energy V, which is related to the pressure in a straightforward manner [23].
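To make the propagation step concrete, the following Python sketch integrates Hamilton's equations for point particles with a generic total potential energy using a leapfrog scheme; the toy potential and all parameter values are placeholders and are not the UrQMD interaction or implementation.

```python
import numpy as np

def grad(V_total, r, eps=1e-5):
    """Numerical gradient of the total potential energy with respect to
    every particle coordinate (r has shape [N, 3])."""
    g = np.zeros_like(r)
    for i in range(r.shape[0]):
        for k in range(r.shape[1]):
            rp, rm = r.copy(), r.copy()
            rp[i, k] += eps
            rm[i, k] -= eps
            g[i, k] = (V_total(rp) - V_total(rm)) / (2 * eps)
    return g

def leapfrog(r, p, m, V_total, dt, steps):
    """Integrate dr/dt = p/m, dp/dt = -dH/dr for a Hamiltonian
    H = p^2/(2m) + V_total(r), i.e. QMD-type equations of motion."""
    p = p - 0.5 * dt * grad(V_total, r)      # initial half kick
    for _ in range(steps):
        r = r + dt * p / m                   # drift
        p = p - dt * grad(V_total, r)        # full kick
    p = p + 0.5 * dt * grad(V_total, r)      # undo half of the last kick
    return r, p

# Toy example: two nucleons in a quadratic confining potential (placeholder)
V_total = lambda r: 0.5 * np.sum(r**2)
r0 = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
p0 = np.zeros_like(r0)
r1, p1 = leapfrog(r0, p0, m=0.938, V_total=V_total, dt=0.01, steps=100)
```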
Here, P_id(n_B) is the pressure of an ideal Fermi gas of baryons, and the remaining quantity entering the relation is the single-particle potential. Evidently, the potential energy is directly related to the EoS, and therefore the terms potential energy and EoS are used interchangeably in this letter.
This model assumes that only baryons are directly affected by the potential interaction [24]. A much more detailed description of the implementation of the density dependent potential can be found in [23,25]. Note that, for bulk matter properties, this method yields strikingly similar results to relativistic hydrodynamics simulations when the same EoS is used [25].
To constrain the EoS from data, a robust and flexible parameterization for the density dependence of the potential energy that is capable of constructing physical equations of state (EoSs) is necessary. For densities below twice the nuclear saturation density (n0), the EoS is reasonably constrained by QCD chiral effective field theory (EFT) calculations [26,27], data on nuclear incompressibility [28], flow measurements at moderate beam energies [7,[29][30][31]] and Bayesian analysis of both neutron star observations and low energy heavy-ion collisions [32]. This work focuses on the high density EoS, particularly on the range 2n0-6n0, which is not well understood yet. Therefore, the potential energy V(n_B) is fixed for densities up to 2n0 by using the Chiral Mean Field (CMF) model, fit to nuclear matter properties and flow data in the low beam energy region [23]. For densities above 2n0, the potential energy per baryon V is parameterized by a seventh-degree polynomial, in which the constant offset h = -22.07 MeV is set to ensure that the potential energy is a continuous function at 2n0. This work constrains the parameters θ_i, and thus the EoS, via Bayesian inference using the elliptic flow v2 and the mean transverse kinetic energy ⟨mT⟩ − m0 of mid-rapidity protons in Au-Au collisions at beam energies √sNN ≈ 2−10 GeV [40][41][42]. Important, sensitive observables such as the directed flow [9,43] are then used to cross-check the extracted EoS. The choice of proton observables (as a proxy for baryons) is motivated by the fact that interesting features of the EoS at high baryon density and moderate temperatures are dominated by the interactions between baryons, and protons are the most abundant hadron species actually measured in experiments at the beam energies considered in the present work. Further details on the choice of data and the calculation of flow observables are given in appendix A, which includes Ref. [44].
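The displayed polynomial is not reproduced in the extracted text. The Python sketch below assumes one plausible form, V(n_B) = Σ_{i=1..7} θ_i (n_B/n_0 − 2)^i + h for n_B > 2n_0, so that V(2n_0) = h enforces continuity with the low-density potential at twice saturation density; the functional form, the saturation density value, and the example coefficients are illustrative assumptions, not the published parameterization.

```python
import numpy as np

N0 = 0.16      # nuclear saturation density in fm^-3 (standard value)
H = -22.07     # MeV, offset quoted in the text for continuity at 2 n0

def V_high_density(n_B, theta):
    """Potential energy per baryon above 2 n0, assuming the polynomial is
    written in the shifted variable x = n_B/n0 - 2 (an assumption)."""
    x = n_B / N0 - 2.0
    poly = sum(theta_i * x**(i + 1) for i, theta_i in enumerate(theta))
    return poly + H

# Example: evaluate a random coefficient set on a density grid from 2 n0 to 6 n0
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 5.0, size=7)     # 7 polynomial coefficients (illustrative)
n_grid = np.linspace(2 * N0, 6 * N0, 50)
print(V_high_density(n_grid, theta)[:3])
```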
The experimental data D = {v2^exp, ⟨mT⟩^exp − m0} are used to constrain the parameters of the model θ = {θ1, θ2, ..., θ7} by using Bayes' theorem, P(θ|D) ∝ P(D|θ) P(θ). Here P(θ) is the prior distribution, encoding our prior knowledge of the parameters, while P(D|θ) is the likelihood for a given set of parameters, which dictates how well the parameters describe the observed data. Finally, P(θ|D) is the desired posterior, which codifies the updated knowledge of the parameters θ after encountering the experimental evidence D.
The objective is to construct the joint posterior distribution for the 7 polynomial coefficients (θ) based on experimental observations, for which Markov Chain Monte Carlo (MCMC) sampling methods are used. For an arbitrary parameter set, the relative posterior probability, up to an unknown normalisation factor, is simply given by the prior probability weighted by its likelihood. To evaluate the likelihood for a parameter set, the v2 and ⟨mT⟩ − m0 observables need to be calculated by UrQMD. The MCMC method then constructs the posterior distribution by exploring the high dimensional parameter space based on numerous such likelihood evaluations. This requires numerous computationally intensive UrQMD simulations, which would need unfeasible computational resources. Hence, Gaussian Process (GP) models are trained as fast surrogate emulators for the UrQMD model, to interpolate simulation results in the parameter space [12,[45][46][47]]. Cuts in rapidity and centrality that align with those of the experiments are applied to the UrQMD data to create training data for the GP models. The constraints applied to generate the physical EoSs used to train the models, the performance of the GP models, and other technical details can be found in appendix B.
The prior on the parameter sets is chosen as Gaussian distributions with means and variances evaluated under physical constraints. More details on the choice of the priors are given in appendix C. The log-likelihood is evaluated using uncertainties from both the experiment and the GP model. The prior, together with the trained GP emulator, the experimental observations and the likelihood function, is used for the MCMC sampling, employing the DeMetropolisZ [48,49] algorithm from PyMC v4.0 [50].
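The workflow just described, training a fast surrogate on a batch of expensive simulations and then running MCMC against the surrogate, can be sketched compactly. The code below uses scikit-learn's Gaussian process regressor and a simple random-walk Metropolis sampler in place of PyMC's DeMetropolisZ; the "simulation", data, uncertainties, and priors are synthetic placeholders, so this only illustrates the structure of the inference, not the published analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# 1. Train a surrogate on "expensive simulation" outputs (synthetic stand-in)
def expensive_simulation(theta):
    """Stand-in for a full transport simulation returning one observable."""
    return float(np.sin(theta @ np.arange(1, 8) / 7.0))

theta_train = rng.normal(0.0, 1.0, size=(200, 7))
y_train = np.array([expensive_simulation(t) for t in theta_train])
gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=np.ones(7)),
                              normalize_y=True).fit(theta_train, y_train)

# 2. Likelihood combining "experimental" and emulator uncertainties
y_exp, sigma_exp = 0.3, 0.05

def log_posterior(theta):
    log_prior = -0.5 * np.sum(theta**2)                   # Gaussian prior
    mu, sigma_gp = gp.predict(theta[None, :], return_std=True)
    var = sigma_exp**2 + sigma_gp[0]**2
    log_like = -0.5 * ((y_exp - mu[0])**2 / var + np.log(2 * np.pi * var))
    return log_prior + log_like

# 3. Random-walk Metropolis sampling of the posterior
chain, theta = [], np.zeros(7)
lp = log_posterior(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.normal(size=7)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
posterior_samples = np.array(chain)[1000:]                # drop burn-in
```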
Closure tests.In order to verify the performance of the Bayesian inference method described above, two closure tests are performed.The first test involves constructing the posterior using v 2 and ⟨m T ⟩ − m 0 , simulated with the experimental uncertainties from UrQMD for a specific but randomly chosen EoS.The inference results are then compared to the known 'ground-truth'.Figure 1 shows the posterior constructed in one such test for a random input potential.The black curve in the plot is the 'ground-truth' input potential while the color contours represent the reconstructed probability density for a given value of the potential V (n b ).Two specific estimates of the 'ground-truth' potential are highlighted in the figure besides the posterior distribution of the potential.These are the Maximum A Posteriori (MAP) estimate, which represents the mode of the posterior distribution as evaluated via MCMC and the 'MEAN' estimate as calculated by averaging the values of the sampled potentials at different densities.The comparison of the MAP and the MEAN curves with the 'ground-truth' shows that the reconstruction results from the Bayesian Inference are centered around the 'ground-truth' EoS and the sampling converges indeed to the true posterior.
From the spread of the posterior it can be seen that the EoS in the closure test is well constrained up to densities of 4n0 for the observables used in the present study. For densities from 4n0 up to 6n0 the generated EoSs have larger uncertainties. However, the mean potentials closely follow the true potential.
The second closure test is done in order to determine the sensitivity of the inference to the choice of the observational data.Hence, the procedure is similar to the previous test, except that the ⟨m T ⟩ − m 0 values for √ s NN = 3.83 and 4.29 GeV are not used in this test to estimate the posterior.When these two data points are excluded, the agreement of the 'ground-truth' EoS with the MAP and MEAN estimates decreases considerably for densities greater than 4n 0 .This indicates that these data points are crucial indeed for constraining the EoS at higher densities.Further details about these closure tests, and the sensitivity on excluding different data points, can be found in appendices D, E and F. There, also a comparison of the prior and posterior probability distributions is shown to highlight the actual information gain obtained through the Bayesian inference.
Results based on experimental data: The results of sampling the posteriors by using experimental data, for the two cases, with and without the ⟨m T ⟩ − m 0 values at √ s NN = 3.83 and 4.29 GeV, are shown in figure 2. The upper panel corresponds to using 15 experimental data points while the lower panel shows the results without the two ⟨m T ⟩ − m 0 values.The data as used in this paper do well constrain the EoS, for densities from 2n 0 to 4n 0 .However, beyond 4n 0 , the sampled potentials have a large uncertainty and the variance is significantly larger for the posterior extracted from 13 data points.Beyond densities of about 3n 0 , the posterior extracted using 13 data points differs significantly from the posterior extracted using all 15 points.This is quite different from our closure tests, where the extracted MAP and MEAN curves did not depend strongly on the choice of the data points used.This indicates a possible tension within the data in the context of the model used.
To understand this significant deviation which appears when only two data points are removed, the MAP and MEAN EoS resulting from the two scenarios are implemented into the UrQMD model to calculate the v 2 and the ⟨m T ⟩ − m 0 values which are then compared with the experimental data which were used to constrain them.Figure 3 shows the MAP and MEAN curves together with 1-sigma confidence intervals from the posterior.Both results, with different inputs, fit the v 2 data very well ex- cept for the small deviation at the high energies.The fit is slightly better when the ⟨m T ⟩ − m 0 values at the lowest energies are removed.At the same time, using all data points results in larger ⟨m T ⟩ − m 0 values for both the MAP and MEAN curves.The bands for ⟨m T ⟩ − m 0 are much broader than the bands for v 2 .Yet, the uncertainty bands clearly support the differences in the fit portrayed by the MEAN and MAP curves.The model encounters a tension between the ⟨m T ⟩ − m 0 and the v 2 data.This tension may either be due to a true tension within the experimental data, or due to a shortcoming of the theoretical model used to simulate both the ⟨m T ⟩ − m 0 and the v 2 data at high beam energies for a given equation of state.It should also be noted that at higher beam energies the contributions from the mesonic degrees of freedom to the equation of state becomes more dominant which may make an explicitly temperature dependent equation of state necessary.
Finally, the extracted EoS can be tested using various observables like differential flow measurements (see appendix G, which include Refs.[51][52][53][54][55]) or different flow coefficients.The slope of the directed flow dv 1 /dy at mid rapidity are calculated using the reconstructed MEAN and MAP EoSs.The results together with available experimental data are shown in figure 4. The dv 1 /dy prediction closely match the experimental data, especially at the higher energies, for the MEAN EoS extracted from all 15 data points.The 1-sigma confidence intervals are indicated as colored bars.It is shown only for one beam energy due to the high computational cost.It can be seen that at high energies, in the 13-points case, the prediction clearly undershoots the data while in the 15-points case, the experimental data lies at the border of the 1-sigma band.The reconstructed EoSs for all other energies are consistent with the dv 1 /dy data though it was not used to constrain the EoSs.
To relate the extracted high density EoS to constraints from astrophysical observations, the squared speed of sound (c_s^2) at T = 0 is presented for the MEAN EoSs as a function of the energy density in Figure 5, together with a contour which represents the constraints from recent Binary Neutron Star Merger (BNSM) observations [60,61]. The speed of sound, as the derivative of the pressure, is very sensitive to even small variations of the potential energy. The c_s^2 values estimated from all data points show overall agreement with the c_s^2 constraints from astrophysical observations and predict a rather stiff equation of state at least up to 4n0. In particular, both the astrophysical constraints (see also [62]) and the EoS inference in the present work give a broad peak structure for c_s^2. This is compatible with recent functional renormalization group (FRG) [63] and conformality [64] analyses. However, if only the 13 data points are used, the extracted speed of sound shows a drastic drop, consistent with a strong first-order phase transition at high densities [8,9]. This is consistent with the softening phenomenon observed for the ⟨mT⟩ − m0 data shown in Figure 3. In order to give an estimate of the uncertainty on the speed of sound, we have calculated the speeds of sound for 100000 potentials which lie within the 68% credibility interval of the coefficients, however excluding those which lead to acausal equations of state for densities below 4.5n0.
Conclusion.
Bayesian inference can constrain the high density QCD EoS using experimental data on v2 and ⟨mT⟩ − m0 of protons. Such an analysis, based on HIC data, can verify the dense QCD matter properties extracted from neutron star observations and complements astrophysical studies to extract the finite temperature EoS from BNSM merger signals as well as constrain its dependence on the symmetry energy.
A parametrized density dependent potential is introduced in the UrQMD model used to train Gaussian Process models as fast emulators to perform the MCMC sampling.In this framework, the input potential can be well reconstructed from experimental HIC observables available already now from experimental measurements.The experimental data constrain the posterior constructed in our method for the EoS, for densities up to 4n 0 .However, beyond 3n 0 , the shape of the posterior depends on the choice of observables used.As a result, the speed of sound extracted for these posteriors exhibit obvious differences.The EoS extracted using all available data points is in good agreement with the constraints from BNSMs with a stiff EoS for densities up to 4n 0 and with-out a phase transition.A cross check is performed with the extracted potentials by calculating the slope of the directed flow.Here, a MEAN potential extracted from all 15 data-points gives the best, consistent description of all available data.The inferences encounter a tension in the measurements of ⟨m T ⟩ − m 0 and v 2 at a collision energy of ≈4 GeV.This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS.Note, that the data are from different experiments that have been conducted during different time periods.The differences in the acceptances, resolutions, statistics and even analysis methods of experimental data makes it difficult for us to pin down the exact sources of these effects.
Tighter constraints and fully conclusive statements on the EoS beyond density 3n 0 require accurate, high statistics data in the whole beam energy range of 2-10 GeV which will hopefully be provided by the beam energy scan program of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR and future experiments at HIAF and NICA.It is noted that, when approaching higher beam energies, which would be important in extending the constraints to higher temperatures and/or densities, the currently used transport model needs to incorporate further finite-temperature and possible partonic matter effects together with relativistic corrections, which we leave for future studies.Further effort should be put into the development and improvement of the theoretical models to consistently incorporate different density dependent EoSs for the study of systematic uncertainties [65].In future, the presented method can also be extended to include more parameters of the model as free parameters for the Bayesian inference, which would also require more and precise input data.In addition, other observables such as the higher order flow coefficients and v 1 can be incorporated into the Bayesian analysis, if permitted by computational constraints, for a more comprehensional constraint of the EoS in the future.The GP emulators are trained on a set of 200 different parameter sets, each with a different high density EoS and the performance of these models is then validated on another 50 input parameter sets.15 different GP models are trained, each one predicting one of the observables (v 2 for 10 collision energies + ⟨m T ⟩ − m 0 for 5 collision energies).The trained GP models can be evaluated by comparing the GP predictions with the "true" results of UrQMD simulations.The performance of the GP models in predicting the v 2 and ⟨m T ⟩ − m 0 observables for 50 different EoSs in the validation dataset are shown in figures 8 and 9 respectively.As evident in these plots, the GP models can accurately predict the simulated observ- ables, given the polynomial coefficients.Hence, the GP models can be used as fast emulators of UrQMD during the MCMC sampling.All the posterior distributions presented in this work are constructed by 4 different MCMC chains.Each chain generates 25000 samples after 10000 tuning steps.
Appendix C: The prior In the following we will explain the choice of the prior distributions which is used as starting point of the Bayesian inference.Technically speaking, the prior distribution of parameters θ i are chosen as Gaussian distributions whose means and variances are estimated from the randomly sampled EoSs, under physical constraints, used in the training of the Gaussian Process Emulators.These constraints were introduced to ensure numerically stable results in training the GP models.To create such a robust training dataset, different physics constraints were applied as discussed in appendix B. These constraints eliminate some of the wildly fluctuating and superluminal EoSs from the training data.
To ensure that the prior in the analysis is broad enough to reflect an a priori high degree of uncertainty (i.e., without introducing a bias) the mean and width of the distributions in the constraint GP training where used also in the prior.However, the polynomial coefficients θ i resulting from these constraints, used to construct the prior distributions for the Bayesian inference, are then sampled independently and are thus not correlated as they would be in the GP model training.Thus, the priors for the Bayesian inference are much broader than the distributions used for the GP model training.The means and standard deviations of the Gaussian priors for the polynomial coefficients are shown in the table I.
Regarding the prior for the Bayesian inference, it is important to note that a prior based only on the GP training constraints could also be a good starting point for the parameter estimation but not a necessary one.The physics constraints can disfavor the acausal range for the parameters.However, we employ this range only as a soft constraint in the prior as we use the mean and width of each coefficient independently, thereby the prior is not limited by the correlations between the coefficients from the GP-training set.This results in inferred potentials which can also be outside the training range for the the various equations of state required to train the Gaussian Process emulator.Once the EoS is constrained, of course, many observables for many beam energies and system sizes can be predicted and compared.We are also planning to make the model available in the future so that all these possibilities can be explored.
In addition to the directed flow, which was shown in the letter, a comparison with recently published HADES data on the differential elliptic flow in Au-Au collisions at E lab = 1.23AGeV [55] is presented here.This comparison of the two different MEAN EoS to HADES data is shown in figure 15.As one can see, the extracted EoSs reproduce the p T dependence nicely up to a proton momentum of 1 GeV.Above this range, the model slightly overestimates the elliptic flow compared to HADES data.The reason for this is likely a small momentum dependence of the potential interaction which is not considered in the present approach.It is however important to note that the integrated elliptic flow is only sensitive to the flow around the maximum of the proton p T distribution which corresponds roughly to p T between 300 and 400 MeV.
Figure 1. (Color online) Visualisation of the sampled posterior in the closure test. The color represents the probability for the potential at a given density. The 'ground-truth' EoS used for generating the observations is plotted as a black solid line. The red dashed and orange dot-dashed curves are the MAP and MEAN EoS for the posterior.
Figure 2. (Color online) Posterior distribution for the EoS inferred using experimental observations of v2 and ⟨mT⟩ − m0. The top figure is the posterior when all 15 data points were used, while the bottom figure is obtained without using the ⟨mT⟩ − m0 values for √sNN = 3.83 and 4.29 GeV. The MAP and MEAN EoSs in both cases are plotted as red dashed and orange dot-dashed curves, respectively. The vertical grey line depicts the highest average central compression reached in collisions at √sNN = 9 GeV. The CMF EoS is plotted in violet for densities below 2n0.
Figure 3. (Color online) v2 and ⟨mT⟩ − m0 values from UrQMD using the MEAN and MAP EoS as extracted from measured data. The observables for both the MAP and MEAN EoSs extracted by using all 15 data points are shown as solid and dashed red lines, respectively, while those generated using only the 13 data points are shown as solid and dashed black lines, respectively. The experimental data are shown as blue squares. The uncertainty bands correspond to a 68% credibility constraint constructed from the posterior samples.
Figure 4. (Color online) Slope of the directed flow, dv1/dy, of protons at mid rapidity. The experimental data [37-39, 55-59] are shown as blue squares. The colored bars correspond to a 68% credibility constraint constructed from the posterior samples.
Figure 7. (Color online) Visualisation of the v2 and ⟨mT⟩ − m0 for 50 random EoSs from the training data. The upper plot is the v2 and the lower plot is the ⟨mT⟩ − m0 as a function of √sNN. The experimental measurements are plotted as blue squares while the gray lines are from the training EoSs.
Figure 9. (Color online) Performance of the Gaussian Process models in predicting the ⟨mT⟩ − m0 for 5 different collision energies. The predictions are shown in blue while the black dashed line depicts the true = predicted curve.
Table I. Means (µ) and standard deviations (σ) of the Gaussian priors for the seven polynomial coefficients (θi). | 2022-11-22T06:41:24.870Z | 2022-11-21T00:00:00.000 | {
"year": 2022,
"sha1": "3ae948a9159e426406c5e101cbb3a8e51e7028e3",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.131.202303",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "3ae948a9159e426406c5e101cbb3a8e51e7028e3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
119280626 | pes2o/s2orc | v3-fos-license | Optimal stochastic transport in inhomogeneous thermal environments
We consider optimization of the average entropy production in inhomogeneous temperature environments within the framework of stochastic thermodynamics. For systems modeled by Langevin equations (e.g. a colloidal particle in a heat bath) it has been recently shown that a space dependent temperature breaks the time reversal symmetry of the fast velocity degrees of freedom resulting in an anomalous contribution to the entropy production of the overdamped dynamics. We show that optimization of entropy production is determined by an auxiliary deterministic problem describing motion on a curved manifold in a potential. The"anomalous contribution"to entropy plays the role of the potential and the inverse of the diffusion tensor is the metric. We also find that entropy production is not minimized by adiabatically slow, quasi-static protocols but there is a finite optimal duration for the transport process. As an example we discuss the case of a linearly space dependent diffusion coefficient.
Introduction. -The last decades have witnessed a tremendous development in our abilities to fabricate artificial devices on the micro-and nanometer scale, and to manipulate and monitor biological and soft matter systems. Out of the numerous evidences for this progress we mention just two remarkable examples, the realization of a micrometer-sized Stirling-engine [1] and the verification of Landauer's principle [2] using a colloidal particle in a double-well potential to represent the information memory. Both these examples link small non-equilibrium systems, in which diffusive processes due to thermal fluctuations play a dominant role, to concepts well-known from macroscopic classical thermodynamics. The theoretical basis for this connection is provided by stochastic thermodynamics, a framework which systematically extends thermodynamic quantities such as exchanged heat, applied work [3] or entropy production to individual fluctuating trajectories [4]. For the distribution functions of such quantities, exact general results can be obtained, the Jarzynski relation being probably the most prominent example [5].
For both of the above-mentioned experimental examples [1,2], it is well-known that optimal bounds exist in the limit of adiabatically slow modulation of the system: the Carnot efficiency for the Stirling engine [1], and the Landauer bound for information erasure [2]. However, is it possible to find an optimal time-dependent "control" (realized by external forcings) so that a specific quantity of interest becomes optimal during a process which takes only finite time? Within the framework of stochastic thermodynamics, this question has first been posed by Seifert [6] in order to minimize the mean work applied to a colloidal particle in a laser trap and to calculate the efficiency of finite-time working cycles [7]. Afterwards it has been further extended to more general optimization problems in a number of publications, see e.g. [8][9][10][11][12][13][14].
All these studies have been performed for systems in contact with a single heat bath at constant temperature. In many cases of interest, however, especially when considering Brownian and molecular motors (see for example [3,15,16]), transport is induced by systematically changing the temperature in time and/or by generating temperature gradients. A recent work [17] considered the case of a time varying (though spatially homogeneous) temperature which is used as an additional control parameter. In the present Letter we study optimal finite-time processes in the presence of temperature gradients by optimizing the total entropy production [12] of a system described by Langevin equations. In doing so, we take into account that, if temperature is not homogeneous in space, the correct expression for the entropy production in the strong friction limit is not simply given by the overdamped approximation of the entropy production functional, but has an additional "anomalous" contribution stemming from a symmetry breaking of the fast velocity degrees of freedom induced by the temperature gradient [18].
In an earlier work, it has been shown that for a constant diffusion matrix, the control which optimizes heat or work is essentially given by the solution of an auxiliary problem described by deterministic transport according to Burgers equation [10]. However, this is not the case any more if temperature is space dependent. We will show here that optimization of entropy production in inhomogeneous temperature environments can still be mapped into a deterministic transport problem. Furthermore, we find that for constant temperatures but space dependent friction coefficient the auxiliary problem is equivalent to finding the geodesics on a curved manifold, where the metric tensor is the inverse of the diffusion matrix.
Entropy production in inhomogeneous media. - We consider driven diffusive motion in an inhomogeneous temperature environment modeled by the Langevin equation (1), ẋ = f(x, t)/γ(x) + T(x)∇γ^{-1}(x) + √(2T(x)/γ(x)) η(t), where we have allowed temperature T(x) and friction coefficient γ(x) to be space-dependent with stationary profiles.
The first term f(x, t)/γ(x) on the right-hand side represents the external deterministic driving forces acting on the particle, while the last term √(2T(x)/γ(x)) η(t) models the impact of thermal fluctuations by unbiased Gaussian white noise with correlations ⟨η_i(t)η_j(s)⟩ = δ_ij δ(t − s) (we set Boltzmann's constant to unity). This multiplicative noise term is interpreted in the non-anticipative Itô convention. The unusual contribution T∇γ^{-1} is a consequence of the space-dependence of friction [19] and results from the small-inertia limit of the underlying Langevin-Kramers dynamics [20]. The inhomogeneous heat bath is assumed to locally fulfill Einstein's relation D(x) = I T(x)/γ(x) for the diffusion matrix D(x), which is proportional to the identity matrix I. In the following, we briefly recapitulate known properties of the entropy production associated with diffusive motion according to (1). It can be shown that the entropy production in the environment is given by the sum of two terms: a regular one and an anomalous one [18]. The regular one is defined as the log-ratio of the probability P of a specific forward path, which is a solution x(t) of (1), to the probability P̃ for the occurrence of the backward path in the time-reversed overdamped dynamics [21]. The anomalous contribution accounts for the breaking of time-reversal symmetry in velocities, induced by the temperature gradient. It appears in the limit of vanishingly small inertia of the full Langevin-Kramers dynamics, but would be overlooked in the naive overdamped approximation when setting the mass to zero [18]. The entropy production in the environment is thus given by a functional of the path x(t), in which the integral is taken along the path and the product labeled by the open circle has to be evaluated according to the midpoint rule (Stratonovich convention). Note that, in addition to the entropy production f/T as an effect of the external forces, there is also a regular contribution −∇T/T from the spatial change of temperature along the path. The entropy of the system itself is defined as [22] S_sys = −ln ρ(x, t), where ρ(x, t) is the solution of the Fokker-Planck equation associated with (1). The total entropy production (in the system and the environment) along the path x(t) is therefore given by the sum of these contributions, eq. (4). Averaging (4) over many realizations of the path x(t) with given distribution ρ_0(x_0) = ρ(x_0, t = 0) of the initial points x_0 = x(t = 0), we find the quantity of main interest, the total average entropy production S_tot, eq. (5), where n is the dimensionality of x and τ denotes the time at which the path x(t) ends. The first term represents the regular entropy production, the second term is the anomalous contribution [18]. To obtain the specific form of the regular part from the expression in (4), we have made use of the Fokker-Planck equation for ρ(x, t) associated with (1), which can be written in the form of the transport equation (6) with the current velocity v of eq. (7), and of the partial integration "trick" ⟨∇h⟩ = −⟨h ∇ln ρ⟩, which is valid for arbitrary functions h = h(x, t) (bounded at infinity). Note that if temperature is constant (the regular case with ∇T = 0), the average total entropy production (5) is simply a quadratic form of the current velocity [21].
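As an illustration of the dynamics just described, the following Python sketch integrates the overdamped Langevin equation with space-dependent temperature and friction using an Euler-Maruyama scheme in the Itô convention, including the spurious-drift term T∇γ^{-1}; the force, temperature and friction profiles are arbitrary placeholders chosen only to make the example runnable.

```python
import numpy as np

def simulate(x0, f, T, gamma, dt=1e-4, steps=10_000, seed=0):
    """Euler-Maruyama integration (Ito) of
    dx = [ f(x,t)/gamma(x) + T(x) d/dx (1/gamma(x)) ] dt + sqrt(2 T(x)/gamma(x)) dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    eps = 1e-6
    for k in range(steps):
        t = k * dt
        xi = x[k]
        # spurious drift T * d(1/gamma)/dx, evaluated by central differences
        dinv_gamma = (1.0 / gamma(xi + eps) - 1.0 / gamma(xi - eps)) / (2 * eps)
        drift = f(xi, t) / gamma(xi) + T(xi) * dinv_gamma
        noise = np.sqrt(2.0 * T(xi) / gamma(xi) * dt) * rng.normal()
        x[k + 1] = xi + drift * dt + noise
    return x

# Placeholder profiles: harmonic trap, linear temperature, constant friction.
# The linear profile is only physically meaningful while T(x) > 0.
f = lambda x, t: -2.0 * (x - 0.5 * t)        # slowly moving trap
T = lambda x: 1.0 + 0.5 * x                  # linear temperature profile
gamma = lambda x: 1.0                        # constant friction
traj = simulate(x0=0.0, f=f, T=T, gamma=gamma)
```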
From expression (5) it is clear that the average entropy production is largely determined by the evolution of the distribution of paths x(t). Extremal entropy production therefore requires a specific "optimal" evolution of paths. Such "optimal" evolution can be imposed on the system by applying a suitable time-dependent protocol [6] to control (parts of) the external potentials and forces. In our model (1), the external control is incorporated into the term f(x, t) by its explicit time-dependence.
In the remainder of the paper, we study the conditions under which the total average entropy production (5) becomes extremal. This optimization problem is typically subject to constraints; in particular, the distribution ρ_0(x_0) of initial points is usually prescribed. Other additional constraints may be present as well, like a specific final distribution ρ(x, t = τ) or a specific value of the control at final time τ.
Optimization of entropy production. - We now study the problem of optimizing the average entropy production S_tot, which is of the general form (8), where L, given in (9), contains the regular part of the entropy production and the anomalous one. The "potential" term U is given by the anomalous entropy production (10). The first step in our analysis consists in rewriting the average in (8) over many realizations of the path, along which the integral is performed, into an equivalent average (11) over the distribution ρ(x, t) obeying (6). In the second step we have used the formal solution (12) of the Fokker-Planck equation (6), where ξ(t; x_0) solves the auxiliary deterministic dynamics (13) and ρ_0(x_0) is the distribution of the path starting points x_0. As already mentioned, ρ_0 is typically specified by the problem at hand. The optimal average entropy production is thus obtained by extremizing the time-integral in the second line of (11) for any given initial point x_0. This corresponds to a standard variational problem for the auxiliary trajectories ξ(t; x_0), with the "trajectory-wise" entropy production (14) being identical to the cost function [12]. The integrand L(ξ, ξ̇, t), as specified in (9), can be interpreted as the "Lagrange function" of the problem. It contains a kinetic-like term (D^{-1})_{ij} ξ̇_i ξ̇_j in curved space and a potential-like term U. The metric of the space is equivalent to the inverse of the diffusion coefficient, g_{ij} = (D^{-1})_{ij} (15). The optimal solutions ξ(t; x_0) of this variational problem solve the Euler-Lagrange equations (16), with the Christoffel symbols Γ^i_{jk} defined as Γ^i_{jk} = (1/2) g^{im} (∂g_{mk}/∂x_j + ∂g_{mj}/∂x_k − ∂g_{jk}/∂x_m). The relation (16) and its implications for optimal stochastic transport are the main results of this paper. We remark that if the temperature profile is homogeneous, the anomalous contribution vanishes (U = 0) and minimization of the entropy production is equivalent to finding the geodesics of free deterministic motion in a space with metric tensor (15).
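A numerical sketch of this auxiliary problem: for a one-dimensional case, the cost ∫ dt [ξ̇²/D(ξ) + U(ξ)] can be extremized directly over discretized trajectories with fixed endpoints. The gradient-descent approach, the discretization, and the profiles D and U below are illustrative choices, not part of the original derivation.

```python
import numpy as np

def optimal_path(x0, xtau, D, U, tau=1.0, n=50, iters=2000, lr=1e-3):
    """Discretize xi(t) on n interior points and minimize the action
    sum_k dt * [ (dxi/dt)^2 / D(xi) + U(xi) ] by plain gradient descent."""
    dt = tau / (n + 1)
    xi = np.linspace(x0, xtau, n + 2)          # initial guess: straight line

    def action(path):
        v = np.diff(path) / dt
        mid = 0.5 * (path[:-1] + path[1:])
        return np.sum((v**2 / D(mid) + U(mid)) * dt)

    eps = 1e-6
    for _ in range(iters):
        grad = np.zeros(n)
        for j in range(1, n + 1):              # finite-difference gradient, interior points only
            p_plus, p_minus = xi.copy(), xi.copy()
            p_plus[j] += eps
            p_minus[j] -= eps
            grad[j - 1] = (action(p_plus) - action(p_minus)) / (2 * eps)
        xi[1:-1] -= lr * grad
    return xi

# Illustrative profiles: linear diffusion coefficient and a decaying "anomalous" potential
D = lambda x: 1.0 + 0.5 * x
U = lambda x: 0.2 / (1.0 + 0.5 * x)
path = optimal_path(x0=0.0, xtau=2.0, D=D, U=U)
```

With U set identically to zero, the same minimization returns the geodesic of the metric g = 1/D, as stated in the text for the non-anomalous case.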
According to (12) and (13) the optimal density ρ(x, t) is transported along these auxiliary trajectories with a local velocity which corresponds to the current velocity (7) at this point. It is remarkable that the auxiliary deterministic dynamics (16) on the curved manifold can be fully described in terms of the current velocity. Once the auxiliary problem is solved, the external protocol acting on f(x, t) has to be adapted to generate the optimal transport according to the solution of (16).
Since the Euler-Lagrange equation (16) is a second-order differential equation, we need, in addition to the initial point x_0, a second condition to obtain a unique solution; e.g., we need to fix the initial velocity or an intermediate point along the trajectory. This freedom can be used to meet the additional constraints of the optimization problem. For instance, to reproduce a given final density ρ(x, τ) we may choose the final points ξ(τ; x_0), so that the relation ρ(x, τ) = ∫ dx_0 δ(x − ξ(τ; x_0)) ρ_0(x_0) is fulfilled. In that case, for a constant temperature (no anomalous potential term), the optimization problem is equivalent to an optimal assignment problem mapping the initial density ρ_0(x_0) to the final one ρ(x, τ) with the quadratic cost function ∫ dx_0 ρ_0(x_0) ∫ dt (D^{-1})_{ij} ξ̇_i ξ̇_j [12]. If, however, no additional constraints are specified, we can exploit the freedom of choosing a second condition for the solution of (16) to perform a further optimization step over the final densities ρ(x, τ).
Example: one-dimensional motion for a linear diffusion coefficient. -In order to highlight the influence of an inhomogeneous temperature it is instructive to study the one-dimensional transport of a Brownian particle between given initial and final states (as introduced by Seifert and Schmiedl in [7]). We will here compare two cases: the anomalous one of a linear temperature profile and a regular one where temperature is constant but friction is space dependent. We know that the latter situation is solved by geodesics. The difference between the two cases gets more marked as the transport time increases and the process is closer to quasi-static operation.
In one dimension, the auxiliary equation of motion (16) reads as eq. (17), where we remark that in the regular case of constant temperature the potential term vanishes (U = 0). Since (17) (like its general counterpart (16)) is obtained from extremizing a "Lagrange function" (9), we actually face a Hamiltonian dynamics with preserved "energy" (D^{-1})_{ij} ξ̇_i ξ̇_j + U. Therefore, (17) can easily be solved for the quadratic velocity, with the result (18), where C is an integration constant corresponding to the conserved "energy". For the thermally inhomogeneous study case we choose a linear temperature profile with constant temperature gradient ϑ and a constant friction coefficient, setup (19). In order to avoid non-physical zero or negative temperatures we restrict to positions larger than x_min = −T_0/ϑ. For the case of space-dependent friction we define the setup (20). Therefore, in both cases we have the same space-dependent diffusion coefficient, but only for the space-dependent temperature profile does the second term in (18) contribute, representing the anomalous entropy production. With these definitions, the right-hand side of (18) is linear in ξ and can be solved explicitly, with a solution (22) that is quadratic in time, with the parameter X specified in (23) and the prescribed initial position ξ(0; x_0) = x_0 and final position ξ(τ; x_0) = x_τ, where for the sake of simplicity we have considered x_τ > x_0 (i.e. motion towards the hotter region)¹. In (23), we have introduced the characteristic parameter χ to distinguish between the anomalous case with inhomogeneous temperature (setup (19), χ = 1) and the regular one with constant temperature but space-dependent friction (setup (20), χ = 0). Actually, the term involving χ in the square root represents −τ²DU coming from (18).
Beside its dependence on the parameters that define the specific system (like T_0, ϑ, and γ_0) and on the initial and final positions x_0, x_τ, this optimal solution also depends on the process duration τ. In the absence of the anomalous contribution to the entropy production (χ = 0), this dependence corresponds to a trivial rescaling of time. The optimal trajectory then is a parabola with positive curvature x_τ + X|_{χ=0}, independent of the process duration. In the presence of the anomaly (χ = 1), however, the evolution of ξ(t; x_0) depends on τ also via X and can change qualitatively. For processes that take exactly τ = √(2(x_τ − x_0)γ_0/ϑ), the solution ξ(t; x_0) is a straight line, while it is a parabola with positive (negative) concavity for shorter (longer) process durations. Interestingly, for sufficiently slow processes the optimal trajectory even overshoots the target position at x_τ and eventually changes direction to finally reach it. Such counter-intuitive behavior can be traced back to the influence of the anomalous contribution to the "cost" of the optimal trajectory. In the present case, the entropic cost of the optimal evolution follows from eqs. (9), (10), (14) and (18); in the second step of the resulting expression we have used the explicit form (19) of the linear temperature profile. The first term of the integrand of the cost is constant, whereas the second one depends on the position ξ. In fact, the instantaneous cost is lower at high temperatures which, in this case, are reached for large values of ξ. When the transport time is fixed and long, it can therefore be profitable to spend part of it where the anomalous term is less costly, even though this is away from the target. It is now interesting to consider the dependence of the total cost on the duration of the transport operation τ (Fig. 2). Naively, we would expect a slow, quasi-static process (long τ) to be less dissipative and therefore associated with a lower entropy production. For the non-anomalous setting of constant temperature this is indeed the case (see [12]). When a temperature gradient is present the situation changes drastically, as the anomalous contribution increases with the process duration and the minimum cost is achieved at finite time. For the discussed linear temperature profile the optimal cost is given by eq. (24), where we recall that X|_{χ=1} also depends on τ as specified in (23). This expression is not a monotonically decreasing function of τ and therefore there is a finite τ minimizing it. It is interesting to note that this optimal duration of the protocol depends inversely on the intensity of the gradient and corresponds to the case in which the solution of (22) is a straight line, i.e. τ* = √(2(x_τ − x_0)γ_0/ϑ). Furthermore, considering (24), we can see that the overshooting of the final target takes place for transport times that are longer than the optimal ones, τ_o.s. = 2(x_τ + T_0/ϑ)/(x_τ − x_0) τ* > 2τ*. Before moving to the conclusion we wish to recall that, although the solutions (22) are sufficient to assess the optimal transport duration and to highlight several peculiarities of the optimal protocol in the presence of temperature gradients, they still depend on the explicit evolution of the probability density (via the current velocity). In order to have a complete solution for the protocols one has to consider the specific initial distribution and solve the corresponding assignment problem.
¹ By (23) the motion starts with a positive velocity. There is a second solution with a plus sign replacing the minus sign in front of the square root, so that the motion starts with negative velocities. It is easy to verify that these solutions maximize the average entropy production. However, this maximizing trajectory, if unconstrained, visits unphysical regions of positions corresponding to negative temperatures.
Conclusion. - We have shown that the optimization of entropy production for driven diffusion processes in an inhomogeneous temperature environment can be mapped into an auxiliary deterministic transport problem describing motion on a curved manifold. The metric tensor of the manifold is given by the inverse of the diffusion matrix. Contributions to the entropy production due to the "entropic anomaly" [18] play the role of a potential energy for the auxiliary deterministic dynamics on the curved manifold. In non-anomalous cases the optimization reduces to the solution of geodesics. Recently, geodesics were found as optimal solutions in control parameter space for excess power in [14] within a linear response analysis, and for slowly varying protocols driving a particle in a harmonic potential in [17]. Using the simple example of one-dimensional diffusion in a linear temperature gradient, we demonstrated that optimization of entropy production including the anomaly requires finite processing times which are inversely proportional to the gradient. In contrast, for regular settings (homogeneous temperature), the optimal average entropy production is reached in the quasi-static limit of adiabatically slow operation. We have also shown that for slow transports the anomalous optimal trajectory is markedly different from the regular one and may display non-trivial features such as overshooting of the final target. We have here presented the details in the case of a space-dependent temperature and friction coefficient. However, temperature and viscosity (friction) variations with time can be treated along similar lines, the main difference to our central result (16) being an additional term from the time-derivative of the metric tensor. * * * This work was supported by the Academy of Finland as part of its Finland Distinguished Professor program, project 129024/Aurell, and through the Center of Excellence COIN. We furthermore acknowledge financial support from the VR grant 621-2012-2982. | 2013-03-13T16:16:15.000Z | 2013-03-13T00:00:00.000 | {
"year": 2013,
"sha1": "24aa38c8377014630aed4cde6377a7fef34fb62d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1303.3206",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "24aa38c8377014630aed4cde6377a7fef34fb62d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16160267 | pes2o/s2orc | v3-fos-license | Task Allocation Model for Rescue Disabled Persons in Disaster Area with Help of Volunteers
In this paper, we present a task allocation model for the search and rescue of persons with disabilities in case of disaster. A multi agent-based simulation model is used to simulate the rescue process. Volunteers and disabled persons are modeled as agents, each with their own attributes and behaviors. The task of volunteers is to help disabled persons in emergency situations. This task allocation problem is solved by using a combinatorial auction mechanism to decide which volunteers should help which disabled persons. The disaster space, road network, and rescue process are also described in detail. The RoboCup Rescue simulation platform is used to test the proposed model with different scenarios.
I. INTRODUCTION
Persons with disabilities face a much higher risk in disasters than persons without disabilities. Data from recent disasters, i.e. tsunamis, Katrina, and earthquakes, show that the mortality of disabled people during these disasters was very high (Ashok Hans, 2009). The reason is that many handicapped people face physical barriers or communication difficulties that prevent them from responding effectively to crisis situations. They were not able to evacuate by themselves. Obviously, disabled people need assistance to evacuate.
While in the past persons with disabilities were not taken into consideration in the planning and mitigation of disaster management, in more recent years this group of the population has been recognized as a priority target for help in emergency situations. It is important to learn the needs of persons with disabilities and the various forms of disabilities in order to help them effectively and minimize mortality. The rescue process for persons with disabilities is a dynamic process under uncertainty and emergency, so it is not easy to predict what will happen during the rescue. In that case, computer simulation can be used to simulate the rescue process under various scenarios in the disaster area.
Most computer-based evacuation simulation models are based on flow models, cellular automata models, or multi agent-based models. Flow-based models lack interaction between evacuees and human behavior in crisis. In cellular automata models, evacuees are arranged on a rigid grid and interact with one another by certain rules [1]. A multi agent-based model is composed of individual units, situated in an explicit space, and provided with their own attributes and rules [2]. This model is particularly suitable for modeling human behaviors, as human characteristics can be represented as agent behaviors. Therefore, the multi agent-based model is widely used for evacuation simulation [1][2][3][4].
Recently, Geographic Information Systems (GIS) have also been integrated with multi agent-based models for emergency simulation. GIS can be used to solve complex planning and decision making problems [5][6][7]. In this study, GIS is used to represent the road network with attributes that indicate the road conditions.
We develop a task allocation model for the search and rescue of persons with disabilities and simulate the rescue process to capture the phenomena and complexities during evacuations. The task allocation problem is presented as the decision of volunteers choosing which victims should be helped in order to give first aid and transportation to the shelter with the least delay. The decision making is based on several criteria, such as the health condition of the victims, the location of the victims, and the location of the volunteers.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 describes the proposed rescue model and the task allocation model. Section 4 provides the experimental results of different evacuation scenarios. Finally, Section 5 summarizes the work of this paper.
II. RELATED WORKS
There is considerable research on emergency simulation using GIS multi agent-based models. Ren et al. (2009) present an agent-based modeling and simulation using Repast software to construct crowd evacuation for emergency response in an area under fire. Characteristics of the people are modeled and tested by iterative simulation, and the simulation results demonstrate the effect of various agent parameters. Zaharia et al. (2011) propose an agent-based model for emergency route simulation that takes into account the problem of uncharacteristic actions of people under the panic conditions caused by a disaster. Drogoul and Quang (2008) discuss the intersection between two research fields, multi-agent systems and computer simulation, and also present some current agent-based platforms such as NetLogo, Mason, Repast, and Gama. Bo and Satish (2009) present an agent-based model for hurricane evacuation that takes into account the interaction among evacuees; for path finding, the agents can choose the shortest path or the least congested route, respectively. Cole (2005) studied GIS agent-based technology for emergency simulation; this research discusses the simulation of crowding, panic, and disaster management. Quang et al. (2009) propose a multi agent-based simulation approach based on participatory design and interactive learning with experts' preferences for rescue simulation. Against this background, this study focuses mainly on task allocation for volunteers to help disabled persons. An effective task allocation method can improve the rescue process. By considering the number of volunteers, the number of disabled persons, and the traffic conditions as changing parameters, we also draw the correlations between these parameters and rescue time.
A. Rescue Simulation Model
The ability to receive critical information about an emergency, how to respond to an emergency, and where to go to receive assistance are crucial components of an evacuation plan. In the practical evacuation process, we assume that after the warning is issued, all disabled persons send information to the emergency center via a special device. This device measures the condition of the disabled person, such as heart rate and body temperature; the device can also be used to trace the location of the disabled person by GPS. The emergency center collects that information and then broadcasts it to volunteers' smartphones through the internet. After checking the condition of the victims, volunteers make their own decision to help victims and inform the emergency center.
The centralized rescue model has three types of agent: volunteers, disabled people, and the route network. The route network is also considered an agent because the traffic condition on a certain route can change when a disaster occurs. The general rescue model is shown in Figure 1. After the initial process, all the connected agents receive the decisive information, such as the locations of agents and health levels, via command K3; after that the rescue agents make a decision on their action and submit it to the center using one of the commands from A3 to A7. At every cycle of the simulation, each rescue agent receives a command K3 with its own decisive information from the center, and then submits back an action command. The status of the disaster space is sent to the viewer for visualization of the simulation. The repeated steps of the simulation are shown in Figure 3.
A. Disaster Area Model
The disaster area is modeled as a collection of objects of Nodes, Buildings, Roads, and Humans. Each object has properties such as its position and shape and is identified by a unique ID. Tables 1 to 4 present the properties of the Nodes, Buildings, Roads, and Humans objects, respectively; these properties are derived from the RoboCup rescue platform with some modifications. The topographical relations of the objects are illustrated in Figures 4 to 7. A representative point is assigned to every object, and the distance between two objects is calculated from their representative points.
C. Task Allocation Model
The decision making of volunteers to help disabled persons can be treated as a task allocation problem [10][11][12][13][14]. The task allocation for the rescue scenario is carried out by the central agents. The task of volunteers is to help disabled persons; it has to be decided which volunteers should help which disabled persons in order to maximize the number of survivors.
We utilize the combinatorial auction mechanism to solve this task allocation problem. In this model, the volunteers are the bidders, the disabled persons are the items, and the emergency center is the auctioneer. The distance and the health level of a disabled person are used as the cost of a bid. When the rescue process starts, the emergency center creates a list of victims, sets the initial distance for victims, and broadcasts the information to all the volunteer agents. Only the volunteer agents whose distance to a victim is less than the initial distance will help that victim. This means that each volunteer agent only helps the victims within the initial distance instead of helping all the victims. The initial distance helps volunteers to reduce the number of tasks so that the decision making is faster.
The aim of this task allocation model is to minimize the evacuation time. This is equivalent to minimizing the total cost to accomplish all tasks. In this case, the cost is the sum of the distances from volunteers to victims and the health levels of the victims. The optimization problem is formed as follows.
Given the set of n volunteers as bidders, V = {v_1, v_2, ..., v_n}, and the set of m disabled persons considered as m tasks, D = {d_1, d_2, ..., d_m}. The distances from volunteers to disabled persons, the distances among disabled persons, and the health levels of disabled persons are formulated as follows.
Let I be a collection of subsets of D. Let x_j = 1 if the j-th set in I is a winning bid, and let c_j be the cost of that bid. Also, let a_ij = 1 if the j-th set in I contains d_i. The problem can then be stated as follows [15]: minimize the total cost sum_j c_j x_j subject to sum_j a_ij x_j <= 1 for every d_i in D, with x_j in {0, 1}. The constraint makes sure that each victim is helped by at most one volunteer. To illustrate with an example of bid generation, let us assume that a volunteer A has information on 5 victims (d_1, d_2, d_3, d_4, d_5). The initial distance is set to 200 meters. The volunteer estimates the distance from himself to each victim and selects only victims who are not more than 200 meters from his location. Assume that victim d_1 and victim d_2 are selected to be helped, with a cost of 180.1. The bid submitted to the center agent is Bid A = ({d_1, d_2}, 180.1).
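The bid-generation step just described can be sketched in a few lines of Python. The sketch below is illustrative only: the coordinates, the use of the Euclidean distance, and the exact way distance and health level are combined into one cost figure are our assumptions, since the text does not spell out the cost formula.

```python
import math

def make_bid(volunteer_pos, victims, initial_distance=200.0):
    """Build one bid: the set of victims within the initial distance,
    with cost = sum of (distance to victim + victim health level).
    The cost formula is an assumed, literal reading of the description above."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Keep only victims the volunteer is allowed to bid on.
    reachable = {vid: info for vid, info in victims.items()
                 if dist(volunteer_pos, info["pos"]) <= initial_distance}
    if not reachable:
        return None
    cost = sum(dist(volunteer_pos, info["pos"]) + info["health"]
               for info in reachable.values())
    return (frozenset(reachable), round(cost, 1))

# Hypothetical victim data: positions in metres, health levels in the 100-500 range.
victims = {"d1": {"pos": (50, 10), "health": 120},
           "d2": {"pos": (120, 40), "health": 150},
           "d3": {"pos": (400, 0), "health": 450}}
print(make_bid((0, 0), victims))   # d3 is farther than 200 m and is excluded
```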
This optimization problem can be solved by the heuristic Branch-on-Items search method (Sandholm, 2002). This method is based on the question: "Which volunteer should this victim be assigned to?". The nodes of the search tree are the bids. Each path in the search tree consists of a sequence of disjoint bids. At each node of the search tree, the method branches on the item with the smallest index among the items that are still available, and does not include items that have already been used on the path. The solution is the path with minimum cost in the search tree. To illustrate with an example of a task allocation of volunteers to help disabled persons, let us assume that there are four volunteers and 3 disabled persons; the initial distance is set to 200 meters; at the time of simulation, the distances from volunteers to disabled persons, the distances among disabled persons, and the health levels of disabled persons are assumed as follows.
For example, a volunteer can make three bids for victim subsets based on the initial distance, with the cost of each bid computed from the distances and health levels as above. The possible bids are listed below, and the search tree is then formed as in Figure 8. The winner path has the minimum cost of 1060. The task allocation solution is: volunteer v_4 will help disabled persons d_1 and d_2; volunteer v_1 will help disabled person d_3.
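To make the Branch-on-Items search concrete, here is a small depth-first sketch in Python. It is not the authors' code: the individual bid costs are invented (chosen only so that the winning combination reproduces the structure of the example, v_4 taking {d_1, d_2} and v_1 taking {d_3} at a total cost of 1060), and the sketch assumes every victim can be covered by some bid, whereas the formulation above only requires at most one helper per victim.

```python
def branch_on_items(bids, items):
    """Branch-on-items search (after Sandholm, 2002): branch on the
    smallest-index uncovered item; each child node is a bid that covers it
    and is disjoint from the bids already on the path.  Returns the cheapest
    collection of disjoint bids covering all items.
    `bids` is a list of (bidder, frozenset_of_items, cost) tuples."""
    best = {"cost": float("inf"), "path": None}

    def search(remaining, path, cost):
        if cost >= best["cost"]:
            return                      # prune: already worse than the best found
        if not remaining:
            best["cost"], best["path"] = cost, list(path)
            return
        item = min(remaining)           # smallest-index item still uncovered
        for bidder, task_set, bid_cost in bids:
            if item in task_set and task_set <= remaining:
                search(remaining - task_set,
                       path + [(bidder, task_set)], cost + bid_cost)

    search(frozenset(items), [], 0.0)
    return best["path"], best["cost"]

# Hypothetical bids for 3 victims (the individual costs are invented).
bids = [("v1", frozenset({"d3"}), 300.0),
        ("v2", frozenset({"d1"}), 500.0),
        ("v3", frozenset({"d2"}), 450.0),
        ("v4", frozenset({"d1", "d2"}), 760.0)]
print(branch_on_items(bids, {"d1", "d2", "d3"}))
# -> winner: v4 takes {d1, d2}, v1 takes {d3}, total cost 1060.0
```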
IV. EXPERIMENTAL RESULTS
In this section, we present experimental studies on different scenarios. The goal is to examine the proposed task allocation model for selecting disabled people to rescue. The evacuation time is evaluated from the time at which the first volunteer starts moving until the time at which all surviving victims arrive at the shelters. The simulation model is tested using the RoboCup platform with the Morimoto Traffic Simulator [17].
A. Experimental Settings
We consider the number of volunteers, the number of disabled persons, and the traffic density as parameters to examine the correlation between these parameters and the rescue time. The sample GIS map consists of 5 layers: road, building, volunteer, disabled person, and shelter. The red points and green points indicate the locations of disabled persons and the locations of volunteers, respectively. These locations are generated randomly along the roads. Blue buildings are shelters. The initial health levels of disabled persons are generated randomly between 100 and 500. At every time step of the simulation, these health levels decrease by 0.5. If the health level reaches zero, the corresponding agent is considered dead. The movements of the volunteer agents are controlled by the Morimoto Traffic Simulator.
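A minimal sketch of the health-level bookkeeping described above is given below; the numeric values come from these settings, while everything else (number of victims, data structure) is simplified and assumed.

```python
import random

# Initial health levels drawn uniformly between 100 and 500, as in the settings.
victims_health = {f"d{i}": random.uniform(100, 500) for i in range(1, 6)}

def advance_time_step(health):
    """One simulation step: every surviving victim loses 0.5 health;
    a victim whose level reaches zero is considered dead."""
    for vid, level in health.items():
        if level > 0:
            health[vid] = max(0.0, level - 0.5)
    dead = [vid for vid, level in health.items() if level == 0]
    return health, dead
```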
B. Experimental results
With a fixed number of disabled persons and an increasing number of volunteers, the correlation between the number of volunteers and the rescue time is shown in Figure 10. With a fixed number of volunteers and an increasing number of disabled persons, the correlation between the number of disabled persons and the rescue time is shown in Figure 11. Finally, the number of volunteers and the number of disabled persons are fixed, whereas the number of vehicles increases; we test with a total road length of 500 meters, and the increasing number of vehicles makes the traffic density higher. The correlation between the number of vehicles and the rescue time is shown in Figure 12.
V. CONCLUSION
In this paper, the decision making of volunteers to help persons with disability is presented as a task allocation problem. The disabled persons are considered as the tasks, and these tasks are allocated to volunteers by utilizing a combinatorial auction mechanism. At each time step of the simulation, the task allocation problem is solved in order to assign appropriate tasks to volunteers. Although there are some previous works [13,14] on applying combinatorial auctions to task allocation, our method has some differences in forming and solving the problem: the volunteers only bid on disabled persons located within a certain distance, and the health condition of disabled persons and the distance from volunteers to disabled persons are used as the cost of bids. A simple example of the task allocation problem is presented to clarify the procedures of our method. The RoboCup rescue simulation platform is used to simulate the rescue process. The correlations between rescue time and other parameters such as the number of volunteers, the number of disabled persons, and the number of vehicles are also presented.
In future work, we plan to compare multi-criteria decision making methods with the task allocation method for solving the decision making problem of volunteers helping disabled persons.
Silvia et al. (2005), Ranjit et al. (2001) and Santos et al. (2010) apply the auction mechanism to solve the task allocation problem in rescue decision making.
Figure 1. Centralized Rescue Model
In the simulation environment, we try to set it up as close as possible to the above assumptions. Before starting the simulation, every agent has to be connected to the emergency center in order to send and receive information. The types of data exchanged between agents and the emergency center are listed below. Messages from an agent: A1: to request a connection to the emergency center; A2: to acknowledge the connection; A3: to inform of a movement to another position; A4: to inform of a rescue action for a victim.
Figure 8. Branch-on-Items based search tree
Figure 10. Correlation between Number of Volunteers and Rescue Time
Figure 11. Correlation between Number of Disabled Persons and Rescue Time
Figure 12. Correlation between Number of Vehicles and Rescue Time
TASKS ALLOCATION AND COST: The bids b_2 and b_7 have the same task {d_1}; b_5 and b_8 have the same task {d_2}; b_1, b_3 and b_6 have the same task {d_3}. The more expensive bids will be removed. | 2014-10-01T00:00:00.000Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "3fe43600358ba68029e4337834d7c5d62be52970",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume3No7/Paper_13-Task_Allocation_Model_for_Rescue_Disabled_Persons_in_Disaster_Area_with_Help_of_Volunteers.pdf",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "3fe43600358ba68029e4337834d7c5d62be52970",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238850710 | pes2o/s2orc | v3-fos-license | The seepage flow analysis of main dam stability in Way Sekampung dam development project
ARTICLE INFO: Received 20 June 2021; Accepted 27 June 2021; Published 29 June 2021. An analysis using a depression line method was conducted under two conditions: at normal water level (±124 m), giving 1.11 × 10⁻³ m³/s, and at flood water level (±126.5 m), giving 1.33 × 10⁻³ m³/s. The seepage capacity is less than 1% of the average discharge entering the reservoir, making the dam safe from the danger of distress. The safety calculations for piping showed a value greater than the filtration flow velocity, with an average value of 4.638 (> 4), which means that the dam will not develop piping symptoms. The analysis of the dam slope using the slice method without including a seismic coefficient gave safe results in all loading conditions, while the analysis with an added seismic coefficient gave safe results except in two conditions: at elevation ±126.5 m the SF is 1.05, and at elevation ±124 m the SF is 1.05.
INTRODUCTION
Along with the growth of the population, the human need for water is also increasing; moreover, water is the main source of life for humans. One of the sources of water is dams.
Dams have many benefits, including as a source of irrigation, family recreation, flood control, power generation, and waste storage. Dams must be designed and maintained with safe seepage control. If not, the dam will run into problems due to excessive seepage. Excessive seepage may affect the safety of the dam itself if appropriate remedial action is not taken. The basic problem is to determine the extent to which seepage affects a dam and what is the most appropriate remedial action that must be taken to ensure that the seepage does not endanger the safety of the dam. One of the main factors in dam failure is insufficient attention to the geological and geotechnical aspects of planning that are directly related to the dam foundation. The dam should be built on a foundation of rock with good bearing capacity. If the surrounding area has rock conditions with low bearing capacity, then geotechnical engineering can be carried out to overcome this problem. A foundation repair technique that is often used to overcome seepage and increase soil/rock bearing capacity in dams is grouting. Grouting is the process of injecting a cement solution into soil that has been drilled beforehand; it serves to bind the soil/rock so that its bearing capacity increases. The Way Sekampung dam was built so that the inflow from the Way Sekampung watershed downstream of the Batutegi dam and upstream of the planned dam can be optimally utilized for various purposes to improve people's lives, rather than being wasted into the sea. The Way Sekampung dam in Lampung will have a height of 55 meters from the bottom of the excavation, with a dam length of 362 meters. The top elevation of the dam is El. +130 m. This type of dam uses homogeneous soil fill, rock, and an upright core. The topography of the regulating dam location is a hilly area on the left and somewhat flat on the right. In general, the geological formations in this area are layered tuff with sand inserts. Climatologically, this area generally has a rainy season from October to April of the following year, with an average rainfall of 1990 mm. In this research, we discuss the effect of seepage in the body and at the bottom of the dam on the stability of the Way Sekampung dam, and how to overcome it with predetermined methods and theories, graphics, and reporting facilities.
LITERATURE REVIEW
A dam is a water structure specially built to block or temporarily hold the flow and amount of water, using a homogeneous earth fill structure (earthfill dam), rock piles with a waterproof layer (rockfill dam), concrete construction (concrete dam), or various other types of construction (Soedibyo, 2003). The topography of the Way Sekampung dam site shows the shape of a valley, with the left slope relatively steeper than the right slope. The left slope angle is about 20 to 25 degrees, with hill heights between 92 m and 270 m above sea level.
Material for embankment dams is rock or soil excavated from the area around the location of the prospective dam, and the type of dam usually depends on the type, quality, and quantity of fill material available in the area (Sosrodarsono, 1981). Waterproof material is necessary for the construction of an embankment dam, and the type and stability of the dam depend strongly on the characteristics, quality, and quantity of the material that can be excavated for filling the watertight zone (Sosrodarsono, 1981).
In planning a dam, it is necessary to pay attention to its stability against the dangers of landslides, slope erosion, and loss of water due to seepage through the dam body. Both the dam body and the foundation are required to withstand the forces caused by filtration water flowing through the gaps between the grains of soil forming the dam body and the foundation (Hardiyatmo, 2007). A slope is an exposed ground surface that stands at a certain angle to the horizontal.
Slopes in soil and rock are formed either naturally or by humans. Natural forces such as wind, water, and ground movement can form natural slopes. Road constructions and dams are made by humans by forming slopes on the ground, because slope construction is more economical than building retaining walls.
Considering that slope behavior is governed by a number of variables and uncertainty factors, including soil parameters such as shear strength and pore pressure conditions, simplifications through various assumptions are always made in the analysis. In analyzing the stability of the slopes of the Way Sekampung dam, earthquakes must also be taken into account, since earthquakes can cause both natural and man-made slopes to move and collapse. Therefore, this needs to be considered in the calculation of the safety factor of slope stability. To account for the effect of earthquake acceleration, slope stability analyses often use a numerical constant usually called the earthquake (seismic) coefficient (k). This coefficient is given as a fraction of gravity; for example, a coefficient of 10% (0.1 g) is often used in calculations.
Calculation of the seepage capacity at elevation ±124 m:
The above calculation is repeated for the normal water level of ±124 m, that is, with the water level 49 m above the bottom of the dam; the seepage capacity obtained is 1.11 × 10⁻³ m³/s.
Calculation of the seepage capacity at elevation ±126.5 m:
The above calculation is repeated for the flood water level of +126.5 m, that is, with the water level 51.5 m above the bottom of the dam; the seepage capacity obtained is 1.33 × 10⁻³ m³/s.
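For readers who want to experiment with a depression-line (phreatic line) estimate of this kind, a generic sketch is given below. It uses Casagrande's basic-parabola relation q = k·y0 with y0 = sqrt(d² + h²) − d; the permeability and the geometric distance d are assumed values, so the numbers it prints are illustrative only and do not reproduce the figures reported above.

```python
import math

def casagrande_seepage(k, h, d):
    """Seepage per unit length of dam from the basic-parabola (depression
    line) construction: y0 = sqrt(d^2 + h^2) - d and q = k * y0.
    k : permeability of the embankment material [m/s]
    h : reservoir water depth above the base [m]
    d : horizontal distance from the parabola's entry point to its focus [m]"""
    y0 = math.sqrt(d**2 + h**2) - d
    return k * y0

# Illustrative numbers only - permeability and geometry are assumed; the dam
# length of 362 m is used to convert to a total discharge.
k, d, length = 1e-7, 150.0, 362.0
for h in (49.0, 51.5):   # water depths at NWL (+124 m) and FWL (+126.5 m)
    q = casagrande_seepage(k, h, d)
    print(f"h = {h} m : q = {q:.2e} m^3/s per metre, total = {q*length:.2e} m^3/s")
```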
Calculating the filtration flow rate
The filtration flow capacity is the seepage capacity of water flowing downstream through the body and foundation of the dam.
Calculation of dam slope stability
The stability of the dam slope is determined by several internal factors, including the soil design parameters used for the embankment material; the soil material used in the main dam of the Way Sekampung Dam was tested to obtain these parameters.
Determining the seismic coefficient
The planned area of the Way Sekampung regulating dam falls in earthquake zone E, with a zone coefficient (Z) of 1.20. The foundation type supporting the structure is rock, with a corresponding parameter v = 0.90, so that the design earthquake acceleration for the 100-year return period can be determined from the corresponding formula.
Figure 9. Graph of SF value without earthquake load, upstream
In the analysis by the manual (slice) method, it can be concluded from Figure 9 that the safety factor of the Way Sekampung Dam slope fluctuates. The chart shows that a larger hydraulic load due to the reservoir level does not necessarily mean a lower SF value, because the smallest (critical) SF value obtained in the analysis is 1.74 at the NWL water level (+124 m), and the largest SF value is 2.08 at water level (+0 m), i.e., when construction is completed. At the LWL water level (+112 m) the SF value is 1.78, and under FWL conditions (+126.5 m) the SF value is 1.8, indicating that the slope is still safe from landslide hazards.
Figure 10. Graph of SF value without earthquake load in downstream
In the analysis by the manual (slice) method, it can be concluded from Figure 10 that the safety factor of the Way Sekampung Dam slope fluctuates. The figure shows that a larger hydraulic load due to the reservoir discharge does not imply a smaller SF value, because the largest SF value obtained in the analysis, 2.89, occurs at the LWL water level (+112 m) and also at NWL (+124 m), while the smallest SF value, 2.87, occurs at the FWL water level (+126.5 m). At water level (+0 m) the SF value is 2.88, which indicates a condition safe from landslide symptoms and hazards.
Figure 11. Graph of SF value against earthquake load, upstream
In the analysis by the manual (slice) method, it can be concluded from Figure 11 that the safety factor of the Way Sekampung Dam slope decreases significantly. The graph shows that the combined influence of the seismic load and the hydraulic load resulting from the high reservoir level puts the dam slope in unsafe conditions, namely at the NWL water level (+124 m) with an SF value of 1.05 and at FWL (+126.5 m) with an SF value of 1.05, which should be greater than the value required by SNI: 8064: 2016, namely 1.10. At water level (+0 m) the SF value is 1.97, which indicates a condition safe from landslide hazards. In the analysis by the manual (slice) method, it can be concluded from Figure 12 that the safety factor of the Way Sekampung Dam slope again fluctuates. The chart shows that the combined seismic and hydraulic load does not necessarily reduce the SF value, because the smallest SF value obtained in the analysis occurs at the FWL water level (+126.5 m), while the largest SF values occur at the NWL water level (+124 m), namely 1.93, at LWL (+112 m), namely 1.9, and at water level (+0 m), namely 1.92. Overall, the slope is in a safe condition.
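As an illustration of the kind of slice-method calculation discussed here, the sketch below implements the ordinary (Fellenius) method of slices with a pseudo-static horizontal seismic coefficient. The slice data, strength parameters and seismic coefficient are invented for illustration; this is a generic textbook formulation, not the design calculation actually used for the Way Sekampung dam.

```python
import math

def fellenius_safety_factor(slices, c, phi_deg, kh=0.0):
    """Ordinary (Fellenius) method of slices with a pseudo-static horizontal
    seismic coefficient kh.  Each slice is (weight W [kN], base angle alpha
    [deg], base length l [m]); c is cohesion [kPa], phi_deg the friction angle."""
    phi = math.radians(phi_deg)
    resisting, driving = 0.0, 0.0
    for W, alpha_deg, l in slices:
        a = math.radians(alpha_deg)
        # Normal force on the slice base, reduced by the horizontal inertial force kh*W.
        N = W * math.cos(a) - kh * W * math.sin(a)
        resisting += c * l + N * math.tan(phi)
        # Driving term: tangential weight component plus tangential inertial component.
        driving += W * math.sin(a) + kh * W * math.cos(a)
    return resisting / driving

# Hypothetical slice data and strength parameters (not the paper's values).
slices = [(800, 10, 6), (1500, 20, 6), (1700, 32, 6), (1200, 45, 7)]
print(round(fellenius_safety_factor(slices, c=20, phi_deg=30), 2))            # static case
print(round(fellenius_safety_factor(slices, c=20, phi_deg=30, kh=0.12), 2))   # with seismic coefficient
```

As expected, including a non-zero seismic coefficient lowers the computed safety factor, which mirrors the trend reported in the analysis above.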
CONCLUSION
Safety calculations for piping showed a value greater than the filtration flow velocity, indicating an average value of 4.638 (> 4), which means that the dam will not experience piping symptoms. The discharge analysis using the SEEP/W method at low water level (±112 m), normal water level (±124 m), and flood water level (±126.5 m) overall shows safe figures, with seepage below 1% of the average discharge that goes into the dam. The analysis using the depression line method was conducted for two conditions, namely at normal water level (±124 m) with a result of 1.11 × 10⁻³ m³/s and at flood water level ±126.5 m with a result of 1.33 × 10⁻³ m³/s. The capacity is below 1% of the average discharge entering the reservoir, making the dam safe against the danger of distress.
The analysis of the dam slope using the slice method without including the seismic coefficient gave safe results in all loading conditions, while the analysis with an added seismic coefficient gave safe results except in two conditions, namely at elevation 126.5 with SF 1.05 and at elevation 124 with SF 1.05. This can be caused by several things, such as a decrease in the soil pore values at the time the slope begins to be submerged. The analysis can be carried out in more detail by adding interviews with dam experts, supported by direct observations in the field, to facilitate the research. | 2021-09-01T15:05:51.625Z | 2021-06-29T00:00:00.000 | {
"year": 2021,
"sha1": "96c24f039d8884b0524ad9806ae11479d36abb20",
"oa_license": "CCBYNCSA",
"oa_url": "https://josst.lppm.unila.ac.id/index.php/josst/article/download/1/1",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "77c340675efc967de73787ef261d9ee8933a9fdd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
199452873 | pes2o/s2orc | v3-fos-license | Does Preliminary Model Checking Help With Subsequent Inference? A Review And A New Result
Summary Statistical methods are based on model assumptions, and it is statistical folklore that a method’s model assumptions should be checked before applying it. We review literature that investigated combined test procedures, in which model assumptions are checked first. Then, in case that the model assumption is passed, a test based on the model assumption is run, and otherwise a test with less strong assumptions. Much literature is surprisingly critical of this approach, owing also to the observation that conditionally on passing a model misspecification test, the model assumptions are automatically violated (“misspecification paradox”). We also review controversial views on the role of model checking in statistics, and literature investigating empirically to what extent model assumptions are checked in practice. We suspect that the benefit of preliminary model checking is currently underestimated, and we present a general setup not yet investigated in the literature, in which we can show that preliminary model checking is advantageous.
Introduction
Statistical methods are based on model assumptions, and it is statistical folklore that a method's model assumptions should be checked before applying it. Yet there is surprisingly little agreement in the literature about how to do this. Model checking is ignored in much applied work. As will be seen later, several authors who investigated the statistical characteristics of running model checks before applying a model-based method comment rather critically on it. So is it sound ad-vice to check model assumptions first? We shed some light on the issue by reviewing literature in which such an approach is investigated. We cannot attempt to achieve completeness, because the amount of literature on certain specific problems that belong to this scope is quite large, see, e.g., Bancroft and Han (1977) for what was already available at that time (if with somewhat limited scope). We therefore have to restrict our focus and will concentrate on the problem of two-stage testing, i.e., hypothesis testing conditionally on the result of preliminary tests of model assumptions. More work exists on estimation after preliminary testing, for overviews see Giles and Giles (1993), Chatfield (1995), Saleh (2006). Some work investigating current practice regarding model checking is also reviewed. The issue is connected to some recent discussions regarding the foundations of statistics, on which we reflect. We also present a new theoretical result concerning a combined procedure in which a final model-based or an alternative test (we focus on tests as an example for more general methods of inference) is chosen depending on whether a misspecification test passes or rejects the model assumption. A situation is constructed in which the combined procedure improves on the unconditional use of any of the two final analyses.
To fix terminology, we assume a situation in which a researcher is interested in using a "main test" for testing a main hypothesis that is of substantial interest. There is a "model-based constrained (MC) test" involving certain model assumptions available for this. We will call "misspecification (MS) test" a test with the null hypothesis that a certain model assumption holds. We assume that this is not of primary interest, but rather only done in order to assess the validity of the model-based test, which is only carried out in case that the MS test does not reject (or "passes") the model assumption. In case that the MS test rejects the model assumption, there may or may not be an "alternative unconstrained (AU) test" that the researcher applies, which does not rely on the rejected model assumption, in order to test the main hypothesis. A "combined procedure" consists of the complete decision rule involving MS test, MC test and AU test (if specified). We generally assume that the MS test is carried out on the same data as the main test. Some of the issues discussed below can be avoided by checking the model on independent data, however such data may not be available, or this approach may not be preferred for reasons of potential waste of information and lack of power (in case the "independent" data are obtained by splitting the available dataset, see Chatfield (1995) for a discussion of this). In any case it would leave open the question whether the data used for MS testing are really independent of the data used for the main test, and whether they do really follow the same model.
Note that the main hypothesis may originally have been defined in terms of the tested model, and may require a re-definition in case that this is rejected (for example, a hypothesis on the center of a Gaussian distribution may translate into a hypothesis about a mean, median, or mode in a more general nonparametric family of models). We acknowledge that researchers may in fact apply more than one MS test to check various model assumptions, and may have more than one alternative model or test available depending on the results of these (see, e.g., Spanos (2018)). For the sake of simplicity, and because there is hardly any literature investigating the performance of more complex combined procedures, we mostly stick to the situation in which only one MS test is performed.
Furthermore, chances are that informal approaches are used far more often, i.e., researchers may do some informal or formal model assumption checking, and may only decide how to proceed knowing the outcome of the model check, rather than using a combined procedure that was well defined in advance. Such a behavior obviously cannot be formally investigated. Formally defined combined procedures may serve as some kind of "formal model" for this course of action, informing us to some extent about advantages and disadvantages, while acknowledging that they cannot cover all the options open to a real researcher. On the other hand, there is a demand for fully formally defined procedures that allow a researcher to avoid supposedly subjective decisions. The simple rule "use MS test to check whether the model assumption for the MC test is fulfilled; if the assumption is passed, apply MC test, otherwise apply AU test" looks appealing to many (as one of the authors knows from many years of experience as a statistical advisor).
Section 2 introduces MS testing for model checking. Section 3 formally introduces a combined procedure in which an MS test is used to decide between an MC and an AU main test. Section 4 reviews the controversial discussion of the role of model checking and testing in statistics. Section 5 reviews work that investigated the use of model checking and misspecification tests in practical statistics. Section 6 runs through the literature that investigated the impact of misspecification testing and the performance of combined procedures in various scenarios. In Section 7 we present a new result that formalizes a situation in which a combined procedure can be better than both the MC and the AU test. Section 8 provides the conclusion.
Misspecification testing and checking model assumptions
Even though two of the most prominent statisticians that introduced the idea of statistical hypothesis testing, Fisher and Neyman, had differences, one thing they agreed on is that checking the assumptions comprising the statistical model in order to ensure its adequacy is essential. Fisher (1922)
stated:
For empirical as the specification of the hypothetical population may be, this empiricism is cleared of its dangers if we can apply a rigorous and objective test of the adequacy with which the proposed population represents the whole of the available facts. Once a statistic, suitable for applying such a test, has been chosen, the exact form of its distribution in random samples must be investigated, in order that we may evaluate the probability that a worse fit should be obtained from a random sample of a population of the type considered. Neyman (1952) outlined the construction of a mathematical model in which he emphasized testing the assumptions of the model by observation and if the assumptions are satisfied, then the model "may be used for deductions concerning phenomena to be observed in the future".
The idea of MS testing came about as early as the early 20th century, when Pearson introduced his goodness of fit chi-square test. This is an MS test for the adequacy of a distributional assumption (the term "goodness of fit test" has a longer history than the term "misspecification test", but we will use the latter term here). The term misspecification was only coined as late as Fisher (1961), for the selection of exogenous variables in economic models. Spanos (1999) used the term extensively and discussed many aspects of the role of MS testing for specifying and validating statistical models, and how to proceed when certain statistical assumptions are violated. We do not attempt to review model misspecification testing exhaustively; for this see Spanos (2018) and the references given there. However, here are a few examples: Testing assumptions regarding distributional shape: Probably the oldest test of a given distributional shape is Pearson's chi-square test. There are also tests based on the empirical cumulative distribution function, such as Kolmogorov's test and the Cramer-von Mises statistic; these tests quantify the distances between the observed data and the hypothesized distribution. Another family of tests are those based on ordered samples; an example of this kind is the Shapiro-Wilk test for testing normality. One can also carry out an MS test based on moments, using properties of the skewness and kurtosis coefficients, for instance the skewness-kurtosis test given by Fisher (1930).
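Purely as an illustration (not material from the literature reviewed here), the distributional-shape MS tests just mentioned are available in standard software; a minimal SciPy sketch is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=200)   # heavy-tailed data, so normality should be doubtful

print(stats.shapiro(x))       # Shapiro-Wilk test (ordered-sample based)
# Kolmogorov-type test; note that plugging in estimated parameters makes the
# nominal null distribution only approximate (the Lilliefors problem).
print(stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))))
print(stats.normaltest(x))    # D'Agostino-Pearson skewness-kurtosis test
```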
Testing independence assumptions: Some non-parametric tests can be used to test the independence assumption, for example the runs test first presented in Wald and Wolfowitz (1940). Another approach is moment based tests, for example see Box and Pierce (1970) and Ljung and Box (1978).
Testing homogeneity of variance assumptions: Some early non-parametric tests for the homogeneity assumption were based on the signs of the differences, for example as proposed by Mann (1945) and Daniels (1950). Examples of parametric tests include the χ²-test, the F-test and Levene's test (see Levene, 1960).
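Again only as an illustration of how such variance-homogeneity checks are run in practice (this is not from the reviewed literature; the simulated data are our own), a short SciPy sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0, 1.0, 50)
b = rng.normal(0, 2.0, 50)    # second group has a larger variance

print(stats.levene(a, b))     # Levene's test, comparatively robust to non-normality
print(stats.bartlett(a, b))   # classical chi-square-based test, assumes normality

# The two-sample F-test for equal variances, written out explicitly.
F = np.var(a, ddof=1) / np.var(b, ddof=1)
p = 2 * min(stats.f.cdf(F, 49, 49), stats.f.sf(F, 49, 49))
print(F, p)
```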
Graphical methods for checking model assumptions
Informal graphical assessments such as certain scatter plots for independence, others for constant variance and normal quantile-quantile plots for the adequacy of the Gaussian model are often recommended to check the assumptions of a particular main test, for example, a Student's t-test; testing the validity of a regression model by way of residual plots is treated in many textbooks.
Graphical displays have the advantage that they can give the statistician more insight into the nature of potential violations of model assumptions, and into the appropriateness of alternative models and methods, than formal MS tests. On the other hand, decisions based on graphical displays do not follow prespecified formal decision rules, and therefore the statistical characteristics of combined procedures involving graphical misspecification detection cannot be investigated by theory or simulation. It is conceivable to "translate" certain decisions based on graphical displays into formal rules and analyze them, although to our knowledge this has not happened in the literature yet. It may also be seen as incorrect, because graphical displays are used for stimulating the intuition rather than as formal rules, so forcing graphical decisions into formal rules would not appropriately reflect their actual use. However, the issues with formal MS testing as elaborated below also occur if graphical methods are used to check model assumptions, even though they cannot be formally analyzed.
Combined procedures
The general setup we are interested in here is as follows. Given is a statistical model M_Θ = {P_θ : θ ∈ Θ} defined by some model assumptions, where the P_θ, θ ∈ Θ, are distributions over a space of interest, indexed by a parameter θ. M_Θ is written here as a parametric model, but we are not restrictive about the nature of Θ. M_Θ may even be the set of all i.i.d. models for n observations, in which case Θ would be very large. However, in the literature, M_Θ is usually a standard parametric model with Θ ⊆ R^m for some m. There is a bigger model M containing distributions that do not require one or more assumptions made in M_Θ, but for data from the same space.
Given some data z, we want to test a parametric null hypothesis θ ∈ Θ_0, which has some suitably chosen "extension" M* ⊂ M so that M* ∩ M_Θ = M_{Θ_0}, against the alternative θ ∉ Θ_0, corresponding to M \ M* in the bigger model.
In the simplest case, there are three tests involved, namely the MS test Φ_MS, the MC test Φ_MC and the AU test Φ_AU. Let α_MS be the level of Φ_MS, i.e., Q(Φ_MS(z) = 1) ≤ α_MS for all Q ∈ M_Θ. Let α be the level of the two main tests, i.e., P_θ(Φ_MC(z) = 1) ≤ α for all P_θ, θ ∈ Θ_0, and Q(Φ_AU(z) = 1) ≤ α for all Q ∈ M*. To keep things general, for now we do not assume that type I error probabilities are uniformly equal to α_MS, α, respectively, and neither do we assume tests to be unbiased (which may not be realistic considering a big nonparametric M).
The combined test is defined as Φ_C(z) = Φ_MC(z) if Φ_MS(z) = 0, and Φ_C(z) = Φ_AU(z) if Φ_MS(z) = 1. This allows one to analyze the characteristics of Φ_C, particularly its effective level (which is not guaranteed to be ≤ α) and its power under P_θ with θ ∈ Θ_0 or not, or under distributions from M* or M \ M*. General results are often hard to obtain without making restrictive assumptions, although some exist, see Sections 6.1 and 6.4. At the very least, simulations are possible picking specific P_θ or Q ∈ M, and in many cases results may generalize to some extent because of invariance properties of model and test. Also of potential interest are P_θ(Φ_C(z) = 1 | Φ_MS(z) = 0), i.e., the type I error probability (size) under M_{Θ_0} or the power under M_Θ in case the model was in fact passed by the MS test; Q(Φ_C(z) = 1 | Φ_MS(z) = 0) for Q ∈ M \ M_Θ, i.e., the situation that the model M_Θ is in fact violated but was passed by the MS test; and whether Φ_C can compete with Φ_AU in case that Φ_MS(z) = 1 (M_Θ rejected). These are investigated in some of the literature, see below.
For example, many researchers have found that the use of an MS test influences the size of the main test, meaning that P_θ(Φ_C(z) = 1 | Φ_MS(z) = 0) can be substantially different from P_θ(Φ_MC(z) = 1). In Section 7 we look at the performance of Φ_C in case there is a "hyperprobability" of having data generated from either P_θ ∈ M_Θ or Q ∈ M \ M_Θ; such a situation, in which both satisfied and violated model assumptions can occur and Φ_MS has some distinction work to do, has to our knowledge not yet been analyzed in the literature, which therefore may give a too pessimistic picture of the performance of the combined procedure.
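As one concrete illustration of Φ_C (our own example, not a procedure discussed in the reviewed literature), the sketch below wires together a Shapiro-Wilk MS test, a one-sample t-test as MC test and a Wilcoxon signed-rank test as AU test, and estimates the conditional size P_θ(Φ_C(z) = 1 | Φ_MS(z) = 0) by simulation; the choice of tests, sample size and simulation settings are assumptions made purely for illustration.

```python
import numpy as np
from scipy import stats

def combined_test(z, alpha=0.05, alpha_ms=0.05):
    """One instance of Phi_C: Shapiro-Wilk as MS test, one-sample t-test as MC
    test, Wilcoxon signed-rank as AU test, for H0: the centre of symmetry is 0."""
    if stats.shapiro(z).pvalue > alpha_ms:            # model assumption passed
        return stats.ttest_1samp(z, 0.0).pvalue < alpha, "MC"
    return stats.wilcoxon(z).pvalue < alpha, "AU"     # assumption rejected

# Monte Carlo estimate of P(Phi_C = 1 | Phi_MS = 0) when the Gaussian
# assumption actually holds.  For this particular pairing and Gaussian data
# the Shapiro-Wilk statistic is independent of the t-statistic, so the
# estimate should be near 0.05 up to Monte Carlo error; other pairings or
# non-Gaussian data can behave differently.
rng = np.random.default_rng(0)
rejections = passes = 0
for _ in range(20000):
    z = rng.normal(0.0, 1.0, 20)
    reject, branch = combined_test(z)
    if branch == "MC":
        passes += 1
        rejections += reject
print(rejections / passes)
```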
Controversial views of model checking
The necessity of model checking has been stressed by many statisticians for a long time, and this is what students of statistics are often taught. At first sight, model checking seems essential for two reasons. Firstly, statistical methods that a practitioner may want to use are often justified by theoretical results that require model assumptions, and secondly it is easy to construct examples for the breakdown of methods in case that model assumptions are violated in critical ways (e.g., inference based on the arithmetic mean, optimal under the assumption of normality, applied to data generated from a Cauchy distribution will not improve in performance for any number of observations compared with only having a single observation, because the distribution of the mean of n > 1 observations is still the same Cauchy distribution).
Regarding the foundations of statistics, checking of the model assumptions plays a crucial role in Mayo's (2018) philosophy of "severe testing", in which frequentist significance tests are portrayed as major tools for subjecting scientific hypotheses to tests that they could be expected to fail in case they were wrong; and evidence in favor of such hypotheses can only be claimed in case that they survive such severe probing. Mayo acknowledges that significance tests can be misleading in case that the model assumptions are violated, but this does not undermine her philosophy in her view, because the model assumptions themselves can be tested.
A problem with preliminary model checking is that the theory of the model-based methods usually relies on the implicit assumption that there is no data-dependent pre-selection or preprocessing. A check of the model assumptions is a form of pre-selection. This is largely ignored but occasionally mentioned in the literature. Bancroft (1944) was probably the first to show how this can bias a model-based method after model checking. Chatfield (1995) gives a more comprehensive discussion of the issue. Hennig (2007) coined the term "goodness-of-fit paradox" (from now on called "misspecification paradox" here) to emphasize that in case that model assumptions hold, checking them in fact actively invalidates them. Assume that the original distribution of the data fulfills a certain model assumption. Given a probability α > 0 that the MS test rejects the model assumption if it holds, the conditional probability for rejection under passing the MS test is obviously 0 < α, and therefore the conditional distribution must be different from the one originally assumed. It is this conditional distribution that eventually feeds the model-based method that a user wants to apply.
How big a problem is the misspecification paradox? Spanos (2010) argues that it is not a problem at all, because the MS test and the main test "pose very different questions to data". The MS test tests whether the data "constitute a truly typical realization of the stochastic mechanism described by the model". He argues that therefore model checking and the model-based testing can be considered separately; model checking is about making sure that the model is "valid for the data" (Spanos, 2018), and if it is, it is appropriate to go on with the model-based analysis.
The point of view taken here, as in Chatfield (1995), Hennig (2007), and elsewhere in the literature reviewed below, is different: In case the model-based (MC) test is only applied if the model is not rejected, the behavior of the MC test should be analyzed conditionally on data not being rejected by the MS test, and this differs from the behavior under the nominal model assumption. We do not think that the misspecification paradox implies that combined procedures are invalid. Rather our perspective is pragmatic. Whether a combined procedure is good or not depends on its statistical characteristics, and how these compare to unconditional use of the MC or AU test. Various such characteristics can be of interest, particularly its behavior both in case that the assumption of the MC test holds, and that it is violated; for the latter case there are many possibilities, and investigation may constrain itself to look at specific alternative models.
In some situations, these different points of view can agree. If the distribution of the test statistic is independent of the outcome of the MS test, formally the misspecification paradox still holds, but it is irrelevant for the resulting data analysis. Conditioning on the result of the MS test will not affect the statistical characteristics of the MC test. An example for this is a MS test based on studentized residuals and a main test based on the minimal sufficient statistic of a Gaussian distribution (Spanos, 2010). Independence here and in most conceivable examples does only hold if the model assumption of the MC test holds, which means that only in this case conditioning on the MS test does not affect the MC test. One may hope that if the model assumption is violated, conditioning on the MS test rather helps the MC test, but as far as the literature reviewed below has investigated this issue, this is rarely true. More generally it can be expected that if what the MS test does is at most very weakly stochastically connected to the main test (i.e., if in Spanos's terms they indeed "pose very different questions to the data"), differences between the conditional and the unconditional behavior of the MC test should be small. This can be investigated individually for every combination of MS test and main test, and there is no guarantee that the result will always be that the difference is negligible.
Even in situations in which inference is only very weakly affected by preliminary model checking in case the assumed model holds indeed, the practice of model checking may still be criticized on the grounds that it may not help in case that the model assumption is violated, i.e., if data is generated by a model that deviates from the assumed one, the conditional distribution of the MC test statistic, given that the model assumption is not rejected, may not have characteristics that are any better than if applying the MC test to data with violated model-assumptions in all cases, see Easterling and Anderson (1978).
A view opposite to Spanos's one, namely that model checking and inference given a parametric model should not be separated, but rather that the problems of finding an appropriate distributional "shape" and parameter values compatible with the data should be treated in a fully integrated fashion, can also be found in the literature (Easterling (1976), Draper (1995), Davies (2014)). Davies (2014) argues that there is no essential difference between fitting a distributional shape, an (in)dependence structure, and estimating a location (which is usually formalized as parameter of a parametric model, but could as well be defined as a nonparametric functional).
Bayesian statistics allows for an integrated treatment by putting prior probabilities on different candidate models, and averaging their contributions. Robust and nonparametric procedures may be seen as alternatives in case that model assumptions of model-based procedures are violated, but they have also been recommended for unconditional use (Hampel et al., 1986), making prior model checking supposedly superfluous. All these approaches still make assumptions; the Bayesian approach assumes that prior distribution and likelihood are correctly specified, robust and nonparametric methods still assume data to be i.i.d., or make other structural assumptions. So the checking of assumptions issue does not easily go away, unless it is claimed (as some subjectivist Bayesians do) that such assumptions are subjective assessments and cannot be checked against data. To our knowledge, however, there is hardly any literature assessing the performance of model checking combined in which the "MC role" is taken by robust, nonparametric or Bayesian inference, but see Bickel (2015) for a combined procedure that involves model checking and robust Bayesian inference.
Another potential objection to model assumption checking is that, in the famous words of George Box, "all models are wrong but some are useful". It may be argued that model assumption checking is pointless, because we know anyway that model assumptions will be violated in reality in one way or another (e.g., it makes some sense to hold that in the real world no two events can ever be truly independent, and continuous distributions are obviously not "true" as models for data that are discrete because of the limited precision of all human measurement). This has been used as argument against any form of model-based frequentist inference, particularly by subjectivist Bayesians (e.g., de Finetti's (1974) famous "probability does not exist"). Mayo (2018) however argues that "all models are wrong" on its own is a triviality that does not preclude a successful use of models, and that it is still important and meaningful to test whether models are adequately capturing the aspect of reality of interest in the inquiry, or whether the data are incompatible with the model in ways that will mislead the desired model-based inference (the latter is our own wording). We broadly agree with this position, although we note that the current practice of model checking is almost exclusively framed in terms of whether model assumptions are fulfilled (or "approximately" fulfilled, which implies that there is a true model that could be approximated) rather than whether data indicate that the specific use made of the model may be corrupted by specific violations of the model assumptions, which would seem more appropriate. A purely logical rebuttal of the view that frequentist methods of inference such as tests can only be valid if the model assumptions are fulfilled is as follows. The basis of that view is that the theoretical characteristics of the methods are derived assuming the model, but this does not imply that their characteristics are so bad as to render inferences invalid if the model does not hold. This would need to be investigated separately, and the role of model checking could then be to distinguish situations in which the characteristics of the MC test are still good, and situations in which this is not the case (although it would require further research to find out in which situation which MS testing does this appropriately).
The model assumptions we investigate here concern data-generating mechanisms and can therefore be checked against the data. We keep an open mind regarding whether preliminary model checking should be recommended as "good practice" and even whether (frequentist) testing is advisable at all; we aim at "mapping" the debate rather than at resolving it.
5 Are model assumptions checked in practice?
How widespread and established is it actually to check model assumptions before a model-based procedure is applied? This is hard to say, and somewhat contradictory observations can be made. Many statisticians emphasize model assumptions and the requirement to check them when teaching statistics; however, hardly any statistics textbook explains how to do this with clear enough algorithmic "recipes" that a non-expert reader could easily follow, and there is hardly any agreement on what exactly should be done.
An example of an explicit recommendation of model checking in the literature is Rule 8 of the "Ten Simple Rules for Effective Statistical Practice" by Kass et al. (2016), aptly named "Check Your Assumptions". The authors mention that "every statistical inference involves assumption ... even the so-called "model-free" techniques require assumptions, albeit less restrictive assumptions". Another instance is Osborne and Waters (2002), aptly titled "Four assumptions of multiple regression that researchers should always test". Choi (2005) mentioned that the most common statistical errors involve "failure to recognize the correct distribution of the data", leading to an incorrect choice of descriptive and inferential statistics. According to Olsen (2003), a frequent error made in data analysis is the application of statistical tests that assume a normal distribution to data that actually follow a skewed distribution. Strasak et al. (2007a) conducted a bibliometric analysis of all original research articles published during the first half of the year 2004 in Volume 350, Numbers 1-26 of the New England Journal of Medicine (NEJM) and Volume 10, Numbers 1-6 of Nature Medicine (NMed), reviewing the use of statistical methods in these medical journals. At least one kind of inferential statistical method was used in 94.5% of the 91 articles in the NEJM and 82.4% of the 34 papers in NMed. The most frequently used methods were the t-test and nonparametric tests, used in 36.8% and 24.8% of the papers, respectively. A subgroup of 53 papers (31 from the NEJM and 22 from NMed) was assessed further. It was observed that 20.8% of these articles used wrong or suboptimal statistical tests, resulting from incompatibility of the test with the examined data, inappropriate use of parametric methods, or use of the wrong statistical test for the hypothesis under investigation. Of the papers that used the t-test, 63% failed to report whether the test assumptions were checked. Similarly, Strasak et al. (2007b) assessed 15 papers from Wiener Klinische Wochenschrift and 7 papers from Wiener Medizinische Wochenschrift and found that improper use of statistical methods and failure to validate model assumptions also occurred in these Austrian medical journals. Of the papers that reported use of a t-test, 41.2% failed to report whether the test assumptions were checked, and 18.2% of the papers did not include a multiple comparison or α-level correction.
A Chinese study by Wu et al. (2011) reviewed articles from 10 Chinese biomedical journals regarding the misuse of statistical methods in 1998 and 2008. All original articles published (1,335 in 1998 and 1,578 in 2008) were reviewed. Of these, a total of 1,334 (45.8%) were reported to have incorrectly used at least one of the most common statistical methods in these journals, namely the t-test, contingency tables, analysis of variance (ANOVA), or rank-based nonparametric tests. The authors are not explicit about whether they count the lack of checking of model assumptions as "incorrect" and do not give precise numbers about it, but as a result of their study they suspect that researchers did not give enough attention to the distributional characteristics of the variables. Sridharan and Gowri (2015) studied the statistical errors committed by medical researchers in eight Indian medical and surgical journals over a period of 2 years. They collected 195 articles from 2005 and 220 articles from 2006 and found that 33.7% of these articles did not mention checking normality prior to parametric tests, besides other errors such as using multiple tests without correction. Hassan et al. (2015) compared errors in statistical methods made in articles from ten Indian medical journals in 2003 and 2013 to ascertain whether the statistical methodology used in these journals had improved within a decade, by analyzing the number of errors committed. They reviewed 588 articles from 2003 and 774 articles from 2013. The most used statistical methods were the t-test, contingency tables, and ANOVA. They observed that the proportion of erroneous statistical analyses had not decreased significantly, 25% in 2003 compared to 22.6% in 2013. However, they noticed an increased use of rank-based nonparametric tests in 2013, which they take to indicate that more attention is being paid to the assumptions of parametric tests. More recently, a study by Nour-Eldein (2016) in Egypt assessed statistical methodology errors in family medicine articles by authors affiliated with the Suez Canal University over 5 years. Of the 60 papers reviewed, the author found that a quarter (25%) "failed to report that test assumptions were not violated", along with a few further errors made by the medical researchers. This obviously does not imply that the assumptions were violated in critical ways when researchers used model-based methods.
These studies are of course limited to assessing the information reported in the publications. There is no way of knowing the unpublished details unless the authors are contacted and asked whether assumptions were in fact checked. It has also been suggested that some authors merely copied methods from previous work without actually knowing what needs to be done before running a statistical test. If model checking was not reported in earlier work, this practice may simply have been copied by subsequent studies. Altman (2002) wrote that "once incorrect procedures become common, it can be hard to stop them from spreading through the medical literature like a genetic mutation".
Keselman et al. (1998) reviewed articles from 17 journals of educational and behavioral science research. The authors claim to provide evidence that the vast majority of educational researchers conduct statistical analyses without taking into account the distributional assumptions of the procedure they are using. Of the 411 articles reviewed, 61 had a between-subjects univariate design. Thirteen of the 61 did not report any cell or group standard deviations for any of the dependent variables under investigation. For the remaining articles, the ratio of the largest to smallest standard deviation had a mean of 2.0, a median of 1.5, and a maximum of 23.8. In the articles that carried out factorial studies, the ratios had a mean of 2.8, a median of 1.7, and a maximum of 29.4. This suggests that in many of the studies the samples did not show variance homogeneity, and homogeneity in fact often looked violated even though tests assuming it were used. Only in 12 articles were violations of the distributional assumptions mentioned as a source of concern by the authors. Hoekstra et al. (2012) proceeded in a different manner. They asked 30 researchers to analyze a number of fictitious datasets and observed and interviewed them regarding their awareness of model assumptions and whether models were checked. They observed that models were checked "correctly" in between 12% and 23% of cases (depending on which assumption was considered). They stated that one possible explanation is a lack of knowledge of "acceptable remedies" in case an assumption is found to be violated.
The overall impression is that the situation is mixed. Model assumptions are often ignored, but the cited numbers imply that there is also a good number of works in which they are actually checked in some way. It is not reported whether combined procedures are used, i.e., a pre-specified choice of the main test conditional on the outcome of an MS test. Chances are that this mostly happens in a rather informal manner without pre-specification, if at all. There is some scattered literature that uses combined procedures in a more formal manner, e.g., Gambichler et al. (2002).
For almost all authors of the cited surveys of applied statistical literature, not checking the model assumptions constitutes an error, although this may be seen as somewhat controversial given the existing criticism of preliminary model checking, see above and below. We do not think, though, that this criticism is the reason for the absence of model checking in the vast majority, if not all, of the surveyed publications in which this "error" was made. Chances are that even most authors whose work implies negative results for preliminary model checking would agree that simply ignoring model assumptions is not a beneficial approach.
6 Some specific test problems

6.1 The problem of whether to pool variances

Historically, the first problem for which preliminary MS testing and combined procedures were investigated was whether to assume equal variances when comparing the means of two samples. Until now this is the problem for which most work investigating combined procedures exists. Let $X_1, X_2, \ldots, X_n$ be distributed i.i.d. according to $P_{\mu_1,\sigma_1^2}$ and $Y_1, Y_2, \ldots, Y_n$ be distributed i.i.d. according to $P_{\mu_2,\sigma_2^2}$, where $P_{\mu,\sigma^2}$ denotes the normal distribution with mean $\mu$ and variance $\sigma^2$. If $\sigma_1^2 = \sigma_2^2$, the standard two-sample t-test using a pooled variance estimator from both samples is optimal. If testing is two-sided, this is equivalent to the F-test using the squared t-statistic. The F-test can also be applied to comparing means of more than two samples in an Analysis of Variance setup, and some early papers use the corresponding terminology despite only comparing two samples.
For $\sigma_1^2 \neq \sigma_2^2$, Welch's approximate t-test with adjusted degrees of freedom depending on the two individual variances is often recommended, see Welch (1938), Satterthwaite (1946), and Welch (1947). See Scheffé (1970) for some alternative solutions to the so-called Behrens-Fisher problem, i.e., comparing means without assuming equal variances.
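To make the two candidate main tests concrete, here is a minimal sketch in Python/scipy; the data are invented, and `scipy.stats.ttest_ind` with `equal_var=True`/`False` gives the pooled-variance and Welch versions, respectively.

```python
# Minimal sketch: MC test (pooled-variance t-test) vs. AU test (Welch's t-test)
# applied to the same two hypothetical samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=20)   # hypothetical sample 1
y = rng.normal(loc=0.5, scale=2.0, size=40)   # hypothetical sample 2, larger variance

t_pooled, p_pooled = stats.ttest_ind(x, y, equal_var=True)   # MC test
t_welch, p_welch = stats.ttest_ind(x, y, equal_var=False)    # AU test

print(f"pooled t-test:  t = {t_pooled:.3f}, p = {p_pooled:.3f}")
print(f"Welch's t-test: t = {t_welch:.3f}, p = {p_welch:.3f}")
```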
The normal distribution assumption will be discussed below, but normality has often been seen as unproblematic due to the Central Limit Theorem, and therefore the historical starting point is the equal variances assumption. Early authors, beginning with Bancroft (1944), did not frame the problem in terms of "making sure that model assumptions are fulfilled", but rather asked, in a pragmatic manner, under what circumstances pooling variances is advantageous. If the two variances are in fact equal or very similar, it is better to use all observations to estimate a single variance as precisely as possible, whereas if the two variances are very different, the use of a pooled variance will give a biased assessment of the variation of the means and their difference.
The two-sample t-test has been shown to be very robust against violations of equality of variances when sample sizes are equal (Hsu (1938), Scheffé (1970), Posten, Yeh and Owen (1982), Zimmerman (2006)). When both variances and sample sizes are unequal, the probability of the Type I error exceeds the nominal significance level if the larger variance is associated with the smaller sample size and vice versa (Zimmerman (2006); Wiedermann and Alexandrowicz (2007); Moder (2010)), which is remedied by Welch's t-test. Bancroft (1944) was the first to investigate preliminary testing of the equality of variances. Using an F-test as MS test for the homogeneity of the variance estimates, he decided to use either a pooled estimate of $\sigma_1^2$ or just the sample variance of the first sample. He then looked at bias and variance of the resulting variance estimator, concluding that the lowest bias is obtained by never pooling and not using the MS test to check the model assumption, whereas the variance is lowest for the pooled estimate. Bancroft gives a set of numerical results but refrains from a general recommendation as to whether one should always pool, never pool, or use the combined procedure.
Starting from Bancroft's work, from the end of the 1940s, a good amount of research was done on the problem of pooling variances, much of which concerned the estimation of means and the corresponding mean squared errors, but some work also dealt with combined testing procedures. Bancroft and Han (1977) published a comprehensive bibliography, also including other problems of preliminary assumption testing. One reason for the popularity of the variance pooling problem in early work is that, as long as normality is assumed, only the ratio of the variances needs to be varied to cover the case of violated model assumptions, which makes it easier to achieve theoretical results without computer-intensive simulations.
For the purpose of the current presentation, Gurland and McCullough (1962) stand out among the early work. They defined four combined procedures for testing equality of means. The decision about equal variances was made based on the ratio of the sample variances. Two different procedures were compared for the case that equality of variances is rejected, and procedures were further distinguished by whether it can be assumed known that, in case of unequal variances, a specific one of the two is the larger. For these procedures the authors were able to compute sizes (type I error probabilities) and power analytically without simulations. They presented the results in comprehensive tables depending on the actual ratio between the variances and the sample sizes. The combined procedures could achieve better power (with acceptable type I error probabilities) than the test without the equal variances assumption occasionally but not often; results for unconditional use of pooling were not given. Bancroft (1964) gave recommendations for when to pool different mean squares occurring in Analysis of Variance tables used for standard testing problems, based on significance tests of their equality. The recommendations depend on the extent to which the involved degrees of freedom are imbalanced (less balance requires a higher level of the preliminary test, because pooling is more dangerous in that case). Another recommendation is to choose a much higher significance level of 0.25 or even 0.5 for the MS test than one would normally use in significance testing. This recommendation turned up again in later work on combined procedures for other problems. Given that the AU test is based on a more general model assumption, it turns out to be advantageous in many situations to use it not only if there is strong evidence against the model assumption of the test based on the more constrained model, but already if a certain amount of violation of the constrained model seems a realistic possibility. Arnold (1970) considered a different problem, namely whether to pool observations of two groups if the mean of the first group is the main target for testing. Pooling assumes that the two means are equal, so a test for equality of means here is the MS test. Arnold observes that in vast regions of the parameter space a better power can be achieved without pooling. The recommendation to generally use the test that requires less restrictive model assumptions, because the combined procedure is better only in small regions of the parameter or distribution space of interest, is another recurring theme in the literature on combined procedures. Moser, Stevens and Watts (1989) came to the same conclusion comparing a combined procedure for the problem of pooling variances with a standard two-sample t-test and Satterthwaite's approximate F-test (Satterthwaite (1946)), an alternative to Welch's t-test, based on size and power evaluations. They recommended always using Satterthwaite's approximate F-test without testing the equal variances assumption. Moser and Stevens (1992) also recommended never testing the equal variances assumption; the standard t-test was recommended for certain sample size constellations, whereas mostly the AU test was recommended. Results for Satterthwaite's approximate F-test and Welch's t-test were approximately equal.
Gans (1981) simulated the combined procedure for pooling variances on data from normal, uniform, and exponential distributions and concluded that the combined procedure does not fully remove the bias of the standard t-test. Markowski and Markowski (1990) evaluated the setup of applying an MS test of homogeneity of variances, the F-test, before running a t-test, for various combinations of sample size and significance level, also looking at non-normal distributions. The samples were drawn from normal distributions, a contaminated normal distribution with a higher frequency of outliers, the exponential distribution, and the chi-squared distribution. For data with non-normal distributions, the results support those of Box (1953) and strongly discourage the use of the F-test as an MS test. For data generated from the normal distribution, the F-test was either unnecessary or ineffective at alerting the researcher that a t-test may be inappropriate. For equal sample sizes, no MS test is needed as the t-test is robust enough. However, for unequal sample sizes, the t-test is not so robust, and the authors note that a more effective MS test would be desirable.
These results are backed up by Albers, Boon and Kallenberg (2000a), who presented a second-order asymptotic analysis of the combined procedure for pooling variances with the F-test as MS test. They argue that this procedure can only achieve a better power than unconditional testing under the unconstrained model if the test size is also increased. This means that there are only two possibilities for the combined procedure to improve upon the MC test: either the combined procedure is anticonservative, i.e., violates the desired test level, which would be deemed unacceptable in most applications, or the size of the MC test is smaller than the nominal level, which is sometimes the case if its assumptions are not fulfilled. Albers, Boon and Kallenberg (2000b) extend these results to the analysis of a more general problem for distributions $P_{\theta,\tau}$ from a parametric family with two parameters $\theta$ and $\tau$, where $\theta = 0$ is the main null hypothesis of interest and the decision between an MC test assuming $\tau = 0$ and an AU test without that assumption is made based on an MS test testing $\tau = 0$ (in the two-sample variance pooling problem, $\tau$ could be the logarithm of the ratio between the variances; a simpler example would be the choice between the Gauss test and the t-test in the one-sample problem, where the MS test tests whether the variance is equal to a given fixed value). The tests are all assumed to allow a certain mathematical expansion that is fulfilled by standard tests such as likelihood ratio tests. Once more, the combined procedure can only achieve better power at the price of a larger size, potentially being anticonservative. Another interesting aspect is that the authors introduce a correlation parameter $\rho$ formalizing the dependence between the MS test and the main tests. In line with the discussion in Section 2, they state that for strong dependence preliminary testing is not sensible, and their results consider the case $\rho \to 0$.
Zimmerman (2004) investigated by simulation the rejection rates of a combined procedure using the Levene test as MS-test on samples of different sizes with equal and unequal variances followed by either a pooled-variance Student t-test or Welch's t-test. Only the type I error probability was considered. The final recommendation was to use the Welch t-test unconditionally, especially when the sample sizes are unequal, where both the pooled-variance test and the combined procedure were found to be prone to exceed the nominal level. Zimmerman (2014) looked at the behavior of the pooled-variance t-test for samples that were selected so that the variance ratios did not exceed a certain cut-off value, and found that the resulting conditional sizes and powers were substantially affected.
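The following sketch is a loose imitation of this kind of simulation, not a reproduction of Zimmerman's exact design: it estimates the Type I error of the unconditional pooled t-test, the unconditional Welch test, and a combined procedure using Levene's test as MS test, under unequal variances and unequal sample sizes. All constants (sample sizes, variance ratio, levels, number of replications) are illustrative assumptions.

```python
# Hedged simulation sketch: Type I error of a combined procedure
# (Levene pretest, then pooled t-test or Welch's t-test) versus the
# two unconditional tests, with H0 (equal means) true throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n1, n2, sd1, sd2 = 10, 40, 2.0, 1.0        # larger variance with smaller sample
alpha, alpha_ms, n_sim = 0.05, 0.05, 20000

rej = {"pooled": 0, "welch": 0, "combined": 0}
for _ in range(n_sim):
    x = rng.normal(0.0, sd1, n1)
    y = rng.normal(0.0, sd2, n2)           # equal means: main H0 true
    p_pooled = stats.ttest_ind(x, y, equal_var=True).pvalue
    p_welch = stats.ttest_ind(x, y, equal_var=False).pvalue
    p_levene = stats.levene(x, y).pvalue
    p_comb = p_pooled if p_levene > alpha_ms else p_welch
    rej["pooled"] += p_pooled < alpha
    rej["welch"] += p_welch < alpha
    rej["combined"] += p_comb < alpha

for name, count in rej.items():
    print(f"{name:9s} type I error ~ {count / n_sim:.3f}")
```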
6.2 Tests of normality in the one-sample problem
The simplest problem in which preliminary misspecification testing has been investigated is inference about the location of a single sample. The standard model-based procedure for this is the one-sample Student's t-test, which assumes the observations $X_1, X_2, \ldots, X_n$ to be i.i.d. normal. For non-normal distributions with existing variance, the t-test is asymptotically equivalent to the Gauss test, which is asymptotically correct due to the Central Limit Theorem. The t-test is therefore often branded robust against non-normality if the sample size is not too small, see, e.g., Bartlett (1935), Lehmann and Romano (2005). An issue is that the quality of the asymptotic approximation does not only depend on n, but also on the underlying distributional shape, as the speed of the normal approximation is not uniform. Very skew distributions or extreme outliers can affect the power of the t-test for fairly large n, see Cressie (1980) for a detailed discussion. Cressie mentions that the biggest problems occur for violations of independence; however, we are not aware of any literature examining independence testing combined with the t-test. Instead, a number of publications examine preliminary normality testing for the t-test.
Some work focuses just on the quality of the MS tests. There is a good amount of general studies investigating and comparing normality tests without specific reference to their effect on subsequent inference and combined procedures. Razali and Wah (2011), comparing the Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors, and Anderson-Darling tests, concluded that the Shapiro-Wilk test has the best power for a given level, with the Anderson-Darling test a close second. This result concurs with Mendes and Pala (2003), Farrell and Rogers-Stewart (2006), and Keskin (2006). Some authors investigated normality tests regarding their use for subsequent inference without considering the later main test. Schoder et al. (2006) concluded that the Kolmogorov-Smirnov test performs badly, and they advised against preliminary testing of normality. Keselman et al. (2013) discussed different types of testing for normality. They used the Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling tests, 26 different shapes of distributions (14 distributions with different skewness and kurtosis values, 8 contaminated normal mixture models, and 4 multinomial models), 3 sample sizes, and 4 different levels of significance. They concluded that the Anderson-Darling test is the most effective at detecting non-normality relevant to subsequent t-testing, and they suggested that, for deciding whether the MC test should be used, the MS test be carried out at a significance level larger than 0.05, for example 0.15 or 0.20, in order to increase its power.
The next group of work examines running a t-test conditionally on passing a preliminary normality test, without considering what happens if normality is rejected. Easterling and Anderson (1978) state that they started their investigation from the belief that the practice of this two-stage procedure is not only conventional, but also good. To test this they considered various distributions such as normal, uniform, exponential, two central and two noncentral t-distributions. They only considered sample sizes 10 and 20. They collected 1000 samples for both situations, normality passed and rejected, respectively, at the 10% significance level, using both the Anderson-Darling and the Shapiro-Wilk normality tests. After obtaining those samples, the empirical distribution of the 1000 t-values was compared to the expected frequencies from the Student's t-distribution. This worked reasonably well when the samples were drawn from the normal distribution. For symmetric non-normal distributions, the results were mixed, and for situations where the distributions were asymmetric, the distribution of the t-values did not resemble a Student's t-distribution, which they take as an argument against the practice of preliminary normality testing, because in case the underlying distribution is not normal, normality testing does not help. They discuss this as follows: "There are various reasons why the distributions of the t ratio in the cases considered might not follow a Student's t distribution - the non normality of the numerator, the non zero expectation of the numerator, the non chi-squareness of the square of the denominator, and lack of independence of numerator and denominator. For the asymmetric sampling distributions, the empirical distributions of t (not shown in this paper) suggest that the preliminary goodness of fit test causes a shift in mean. In order to obtain a sample from such a distribution which would pass a test for normality (which includes symmetry as a property) that sample would have to have fewer observations in the elongated tail than are expected."
They tried to adjust for the shift in mean, but this did not improve the results. As a result they favored a non-parametric approach. If a probability model is to be used as a reporting device to discover and describe patterns of variability, then normality testing could be sensible. Schucany and Ng (2006) investigated the Type I error rate of the one-sample t-test given that the sample has passed the Shapiro-Wilk test for normality, i.e., the conditional Type I error rate. Data were sampled from normal, uniform, exponential, and Cauchy populations. The simulation study showed that, for the uniform distribution, screening of samples by an MS test for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without MS testing. In contrast, for the exponential distribution, the conditional Type I error rate is even more elevated than the Type I error rate of the t-test without MS testing (i.e., the unconditional Type I error rate), which is already above the nominal level. Furthermore, larger sample sizes and more liberal significance levels of the MS test shift the conditional Type I error rate even further away from the unconditional Type I error rate of the t-test and also from the nominal level, leading to either more conservative or more liberal test decisions. This common feature of the uniform and exponential distributions is especially interesting to note as, in both cases, the t-test without MS testing shows an acceptable Type I error rate at least for moderate sample sizes. Rochon and Kieser (2011) investigated the reasons behind the characteristics of the one-sample t-test after MS testing for normality. Samples were drawn from the exponential, log-normal, uniform, Student's t with 2 degrees of freedom, and standard normal distributions, and those that passed the pretest were analyzed. The Shapiro-Wilk test and the Lilliefors modification of the Kolmogorov-Smirnov test were used. The results from the two MS tests were similar, therefore only results from the Shapiro-Wilk test were presented. For the exponential and log-normal distributions, the Type I error rate is elevated for unselected samples and is further increased by the MS test for normality. Inspection of the densities of the samples that pass the MS test shows that the closer the underlying population distribution is to the normal, the less impact the normality test has. Where the population distribution is farther from the normal, the MS screening in fact selects samples that look normal and thus can no longer be considered representative of the true underlying population. They concluded that formal MS testing for normality cannot be recommended. An alternative is the unconditional t-test, relying on the normal approximation for reasonably large sample sizes by way of the Central Limit Theorem, taking into account that the one-sample t-test is more sensitive to skewness than to heaviness or lightness of the tails (Rao, 1998). If it is at least known that the underlying population distribution is symmetric, a nonparametric alternative such as the Wilcoxon signed-rank test could be considered. It is recommended that the model assumptions should be checked from external data sources if possible, and not from the data set at hand.
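A rough way to reproduce the flavor of these conditional Type I error studies is to simulate samples from a non-normal population with the hypothesized mean, keep only those samples that pass a Shapiro-Wilk pretest, and record how often the one-sample t-test rejects the true null. The sketch below does this for an exponential population; the sample size and levels are illustrative and not the settings of the cited studies.

```python
# Hedged sketch: conditional Type I error of the one-sample t-test
# given that the sample passed the Shapiro-Wilk normality test.
# Population: exponential with true mean 1, tested against popmean=1 (H0 true).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, alpha, alpha_ms, n_sim = 20, 0.05, 0.05, 50000

rej_uncond, rej_cond, n_pass = 0, 0, 0
for _ in range(n_sim):
    x = rng.exponential(scale=1.0, size=n)       # true mean = 1
    p_t = stats.ttest_1samp(x, popmean=1.0).pvalue
    rej_uncond += p_t < alpha
    w_stat, p_sw = stats.shapiro(x)              # MS test: normality
    if p_sw > alpha_ms:                          # sample "passes" normality
        n_pass += 1
        rej_cond += p_t < alpha

print(f"unconditional type I error ~ {rej_uncond / n_sim:.3f}")
print(f"conditional type I error   ~ {rej_cond / max(n_pass, 1):.3f} "
      f"(based on {n_pass} passing samples)")
```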
To our knowledge, in the one-sample problem there are no investigations of full combined procedures.
6.3 Tests of normality in the two-sample problem
For the two-sample problem, the Wilcoxon-Mann-Whitney (WMW) rank test is a popular alternative to the two-sample t-test, for which (in the context of preliminary normality testing) equal variances are mostly assumed. In principle most arguments and results from the one-sample problem apply here as well, with the additional complication that normality is assumed for both samples, and can be tested either by testing both samples separately or by pooling residuals from the group means. When its assumptions are met, the two-sample Student's t-test was shown to perform better than nonparametric tests in Hodges and Lehmann (1956) and Randles and Wolfe (1979). As for the one-sample problem, there are also claims and results that the two-sample t-test is rather robust to violations of the normality assumption (Hsu and Feldt, 1969; Rasch and Guiard, 2004), and some evidence that this is sometimes not the case and that the WMW rank test is superior and does not lose much power even if normality is fulfilled (Neave and Granger, 1968). Fay and Proschan (2010) produced a survey comparing the two-sample t-test with the WMW test, citing mainly theoretical arguments. They also discussed some decision rules for choosing between these two tests, recommending against normality testing. Rochon et al. (2012) investigated by simulation combined procedures based on both strategies for preliminary normality testing (both samples separately, and pooled residuals) using a Shapiro-Wilk test of normality. Data were simulated from normal, exponential, and uniform distributions. The MC test was the two-sample t-test, the AU test was the WMW test. Although preliminary testing showed a substantial effect on conditional sizes and powers, the overall sizes and powers of the combined procedure were acceptable. Looking at the results in detail, for all the simulated distributions, the MC test achieved a higher power than the AU test (with the combined procedure somewhere in between), whereas all delivered more or less acceptable type I error rates. Overall, therefore, in these situations the use of the WMW test as AU test is not favorable, be it with or without preliminary MS testing. The authors advise that it would be better to use information from sources external to the data to decide between the MC and the AU test (as had already been advised by Easterling and Anderson (1978)). The pooled-residuals normality testing strategy looked a little better than the separate-groups one, but this may be due to the fact that in all experiments both samples were simulated from the same distributional family, which ensures that pooling the residuals is not misleading.
In the light of the conceptual problems and mixed results with preliminary normality testing, Zimmerman (2011) achieves good simulation results with an alternative approach, namely running both the two-sample t-test and the WMW test, choosing the two-sample t-test in case the suitably standardized values of the test statistics are similar and the WMW test in case they are very different, although the tuning of this approach is somewhat less intuitive, and the misspecification paradox still applies, i.e., in case the assumptions for the two-sample t-test are originally fulfilled, they are violated conditionally on the decision rule choosing that test.
6.4 Regression model selection

The regression model selection problem is the problem of selecting a subset of a given set of explanatory variables $\{X_1, \ldots, X_p\}$. This can be framed as a model misspecification test problem, because a standard regression assumes that all variables that systematically influence the response variable are in the model. If it is of interest, as the main test problem, to test $\beta_j = 0$ for a specific j, the MS test would be a test of the null hypotheses $\beta_k = 0$ for one or more of the explanatory variables with $k \neq j$. The MC test would test $\beta_j = 0$ in a model with $X_k$ removed, and the AU test would test $\beta_j = 0$ in a model including $X_k$. This problem was mentioned as a second example in Bancroft's (1944) seminal paper on preliminary assumption testing.
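To illustrate the framing only (not any procedure recommended here), a small sketch using statsmodels with made-up data might look as follows: the MS test checks $\beta_k = 0$ for a nuisance regressor, and $\beta_j = 0$ is then tested either in the reduced model (MC test) or in the full model (AU test); the 0.05 threshold is an arbitrary choice.

```python
# Hedged sketch of model selection framed as MS testing.
# MS test: is beta_k = 0 (is regressor x_k needed)?
# MC test: test beta_j = 0 with x_k removed; AU test: with x_k included.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x_j = rng.normal(size=n)
x_k = rng.normal(size=n)
y = 0.3 * x_j + 0.2 * x_k + rng.normal(size=n)   # invented data-generating process

full = sm.OLS(y, sm.add_constant(np.column_stack([x_j, x_k]))).fit()
reduced = sm.OLS(y, sm.add_constant(x_j)).fit()

p_ms = full.pvalues[2]            # MS test: H0: beta_k = 0 in the full model
if p_ms < 0.05:
    p_main = full.pvalues[1]      # AU test: beta_j = 0 in the full model
else:
    p_main = reduced.pvalues[1]   # MC test: beta_j = 0 in the reduced model
print(f"p(beta_k = 0) = {p_ms:.3f}, p-value used for beta_j: {p_main:.3f}")
```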
Traditional model selection approaches such as forward selection and backward elimination are often based on such tests and have been analyzed (and criticized) extensively in the literature. We will not review this literature here. There is sophisticated and innovative literature on post-selection inference in this problem. Berk et al. (2013) propose a procedure in which the main inference is adjusted for simultaneous testing, taking into account all possible submodels that could have been selected. Efron (2014) uses bootstrap methods to do inference that takes the model selection process into account. Both approaches could also involve other MS testing, such as of normality, homoscedasticity, or linearity assumptions, as long as combined procedures are fully specified. For specific model selection methods there now exists work allowing for exact post-selection inference, see Lee et al. (2016). For a critical perspective on these issues see Pötscher (2005, 2015). In econometrics, David Hendry and co-workers developed an automatic modeling system that involves MS testing and conditional subsequent testing with adjustments for decisions in the modeling process, see, e.g., Hendry and Doornik (2014). Earlier, some authors such as Saleh and Sen (1983) analyzed the effect of preliminary testing on later conditional main testing. Godfrey (1988) listed a plethora of MS tests for the various assumptions of linear regression. However, no systematic way to apply these tests was discussed. In fact, Godfrey noted that the literature left more questions open than answered. Some of these questions are: (i) the choice among different MS tests, (ii) whether to use nonparametric or parametric tests, (iii) what to do when any of the model assumptions is invalid, as well as (iv) some potential problems with MS testing such as repeated use of data, multiple testing, and pre-test bias. Godfrey (1996) discussed the destructive and constructive value of MS tests. He concluded that efforts should be made to develop "attractive", useful, and simple combined procedures, keeping in mind that the combination of tests must be well-behaved. One suggestion was to use the Bonferroni correction for each test, as "the asymptotic dependence of test statistics is likely to be the rule, rather than the exception, and this will reduce the constructive value of individual checks for misspecification". Giles and Giles (1993) reviewed the substantial amount of work done in econometrics regarding preliminary testing in regression up to that time, a limited amount of which is about MC and/or AU tests conditional on MS tests. This involves pre-testing of a known fixed variance value, homoscedasticity, and independence against autocorrelation alternatives. The cited results are mixed. King and Giles (1984) comment positively on a combined procedure in which absence of autocorrelation is tested first by a Durbin-Watson or t-test. Conditionally on the result of that MS test, either a standard t-test of a regression parameter is run (MC test) or a test based on an empirically generalized least squares estimator taking autocorrelation into account (AU test). In simulations the combined procedure performs similarly to the MC test and better than the AU test in absence of autocorrelation, and similarly to the AU test and better than the MC test in presence of autocorrelation. Here, too, it is recommended to run the MS test at a level higher than the usual 5%.
Most related post-1993 work in econometrics seems to be on estimation after pre-testing and on regression model selection. Ohtani and Toyoda (1985) propose a combined procedure for testing linear hypotheses in regression conditionally on testing for a known variance. Toyoda and Ohtani (1986) test the equality of different regressions conditionally on testing for equal variances. In both papers power gains for the combined procedure are reported, which are sometimes but not always accompanied by an increased type I error probability.
6.5 More than one misspecification test

Rasch et al. (2011) assessed the statistical properties of a three-stage procedure including testing for normality and for homogeneity of the variances. They considered 5 distributions with different location, spread, skewness, and kurtosis parameters. Various sample sizes, equal and unequal, and different ratios of the standard deviations were considered. They considered three main statistical tests: the Student's t-test, Welch's t-test, and the WMW test. For the MS testing, they used the Kolmogorov-Smirnov test for normality and Levene's test for the homogeneity of the variances of the two samples that were generated. If normality was rejected by the Kolmogorov-Smirnov test, the WMW test was used. If normality was not rejected, Levene's test was run; if homogeneity was rejected, Welch's t-test was used, and if homogeneity was not rejected, the standard t-test was used. The authors presented the rejection rates and the power of the procedure and compared it with the tests applied without checking the model assumptions. The authors concluded that the assumptions underlying the two-sample t-test should not be pre-tested because "pre-testing leads to unknown final Type I and Type II risks if the respective statistical tests are performed using the same set of observations". They prefer Welch's t-test overall to both the Student's t-test and the WMW test. This preference is in line with an earlier recommendation of Rasch and Guiard (2004), who advise against the WMW test in case of unequal variances.
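The three-stage procedure just described is a simple decision tree; a sketch of it for two samples might look as follows. It uses scipy's Kolmogorov-Smirnov test against a normal distribution with estimated parameters (ignoring the Lilliefors correction for simplicity), Levene's test, and the three main tests; the 0.05 level at each stage is illustrative rather than the level used by Rasch et al.

```python
# Hedged sketch of the three-stage decision procedure described above:
# stage 1: normality check for both samples; stage 2: Levene's test for
# equal variances if normality is not rejected; stage 3: WMW, Welch, or
# Student t-test accordingly.
import numpy as np
from scipy import stats

def three_stage_test(x, y, alpha_ms=0.05):
    def ks_normal(z):
        # plain KS test against a fitted normal (Lilliefors issue ignored)
        return stats.kstest(z, "norm", args=(z.mean(), z.std(ddof=1))).pvalue
    if min(ks_normal(x), ks_normal(y)) < alpha_ms:        # normality rejected
        return "WMW", stats.mannwhitneyu(x, y, alternative="two-sided").pvalue
    if stats.levene(x, y).pvalue < alpha_ms:              # homogeneity rejected
        return "Welch t", stats.ttest_ind(x, y, equal_var=False).pvalue
    return "Student t", stats.ttest_ind(x, y, equal_var=True).pvalue

rng = np.random.default_rng(4)
x, y = rng.normal(0, 1, 30), rng.normal(0.5, 2, 30)       # hypothetical data
name, p = three_stage_test(x, y)
print(f"chosen test: {name}, p = {p:.3f}")
```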
To our knowledge this is the only investigation of a combined procedure involving more than one MS test, apart from the work on regression model selection cited in Section 6.4. Spanos proposed a "probabilistic reduction" approach (e.g., Spanos, 2018) in order to systematize the process of model building involving MS testing of various assumptions, but he did not define a fully automated procedure that could be investigated by means of theory or simulation.
Discussion
Although many authors have, in one way or another, investigated the effects of preliminary MS testing on the later application of model-based procedures, there are some limitations in the existing literature. Only very few papers have compared the performance of a fully specified combined procedure with unconditional uses of both the MC and the AU test. Some of these have only looked at type I error probabilities but not power, some have only looked at the situation in which the model assumption is in fact fulfilled, and some have studied setups in which either the unconditional MC or the AU test works well across the board, making a combined procedure superfluous, although it is widely acknowledged that situations do exist in which either unconditional test can perform badly depending on the unknown data generating process.
Recurring themes in the work investigating combined procedures are a mostly critical position that authors take on preliminary MS testing; in case it is done, a recommendation to use a higher level for the MS test than the conventional 5%; and the requirement that the MS test should be independent or approximately independent of the later MC and AU tests. In some setups, despite often being critical of the combined procedure, authors acknowledge that there is a need to distinguish between situations in which the MC test should be applied and situations in which the AU test is favorable. Apart from MS tests, such distinctions could come from prior information and sources outside the data. Occasionally recommendations are conditional on sample sizes.
Comparing a full combined procedure with unconditional use of the MC test or the AU test, a typical pattern should be that under the model assumption for the MC test, the MC test is best regarding power and the combined procedure performs between the unconditional MC test and AU test, whereas if that model assumption is violated, the AU test is best and the combined procedure is once more between the MC test and the AU test. King and Giles (1984) and Toyoda and Ohtani (1986) are examples of this. Results on test size are consistent with this (i.e., in cases where the combined procedure violates the nominal test level, at least one of the unconditional procedures does so as well). Such results can be interpreted charitably for the combined procedure, which allows for some kind of maximin performance. It seems to us that part of the criticism of the combined procedure is motivated by the fact that it does not do what some seem to expect or hope it to do, namely to help make sure that model assumptions are fulfilled while otherwise leaving performance characteristics untouched, an expectation that is destroyed by the misspecification paradox.
However, for pooling variances in the two-sample problem, Welch's t-test seems to perform well more or less across the board, and in the case of normality testing for the two-sample problem, the non-normal distributions chosen for simulation in the literature tend to be ones for which the WMW test does not seem to be of much use, although distributions for which it would be do exist.
A more sober look at the results reveals that the combined procedures are almost always competitive with at least one of the unconditional tests, and often with both. It is clear, though, that recommendations need to depend on the specific problem, the specific tests involved, and often also on the exact way in which the model assumptions of the MC test are violated.
A positive result for combined procedures
The overall message from the literature does not seem very satisfactory. On the one hand, model assumptions are important and their violation can severely damage results. On the other hand, most comments on testing the model assumptions before using a method based on them, and on using the model-based method only if the check is passed, are rather critical. Bayesians may think that all this only confirms that frequentist statistics does not work and should not be used, but the Bayesian approach does not free statistics from model assumptions, and it has been argued that Bayesians should do more to check them (Gelman and Shalizi, 2012).
In this section we present a point of view and a result that make us think somewhat more positively about combined procedures and the impact of preliminary model testing. A characteristic of the literature analyzing combined procedures is that it compares the combined procedure with unconditional MC or AU tests in situations where the model assumption of the MC test is either fulfilled or not fulfilled. However, it does not investigate a situation in which the MS test can do what it is supposed to do, namely distinguish between these situations. This can be modeled in the simplest case as follows, using the notation from Section 3. Let $P_\theta$ be a distribution that fulfills the model assumptions of the MC test, and $Q \in M \setminus M_\Theta$ a distribution that violates these assumptions. For considerations of power, let the null hypothesis of the main test be violated, i.e., $\theta \in \Theta_0$ and $Q \in M^*$ (an analogous setup is possible for considerations of size). We may observe data from $P_\theta$ or from Q. Assume that a dataset is with probability $\lambda \in [0, 1]$ generated from $P_\theta$ and with probability $1 - \lambda$ from Q (we stress that, as opposed to standard mixture models, $\lambda$ governs the distribution of the whole dataset, not every single observation independently). The cases $\lambda = 0$ and $\lambda = 1$ are those that have been treated in the literature, but only for $\lambda \in (0, 1)$ is the ability of the MS test to inform the researcher whether the data are more likely from $P_\theta$ or from Q actually required.
We ran several simulations of such a setup (looking, for example, at normality in the two-sample problem), which will be published in detail elsewhere. Figure 1 shows a typical pattern of results. In this situation, for $\lambda = 0$ (model assumption violated), the AU test is best and the MC test is worst. For $\lambda = 1$, the MC test is best and the AU test is worst. The combined procedure is in between, which was mostly the case in our simulations. Here, the combined procedure is for both of these situations close to the better one of the unconditional tests (to what extent this holds depends on details of the setup). The powers of all three tests are linear functions of $\lambda$ (linearity in the plot is distorted by random variation only), and the consequence is that the combined procedure performs clearly better than both unconditional tests over the best part of the range of $\lambda$. Unless an unconditional test achieved perfect power (for too easily detectable violations of the $H_0$), in our simulations it was always the case that for a good range of $\lambda$-values the combined procedure was the best. To brand the combined procedure the "winner" would require the nominal level to be respected under $H_0$ (i.e., for both $P_\theta$, $\theta \in \Theta_0$, and $Q \in M^*$), which was very often, though not always, the case.
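A sketch of the kind of simulation behind Figure 1 is given below; it is not the code used for the figure, and the sample size, number of replications, grid of λ values, and the choice to apply Shapiro-Wilk to the pooled centered residuals are all illustrative assumptions. Each dataset is drawn, as a whole, from the normal model with probability λ and from a t₃ model otherwise, with a mean difference of 1 in both cases.

```python
# Hedged simulation sketch in the spirit of Figure 1: power of the MC test
# (Welch's t-test), the AU test (WMW), and the combined procedure across lambda.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, alpha, alpha_ms, n_sim = 25, 0.05, 0.05, 5000

def draw(lam):
    # whole dataset is normal with probability lam, t_3-distributed otherwise
    if rng.random() < lam:
        return rng.normal(0, 1, n), rng.normal(1, 1, n)
    return rng.standard_t(3, n), rng.standard_t(3, n) + 1

for lam in np.linspace(0, 1, 6):
    power = {"MC": 0, "AU": 0, "combined": 0}
    for _ in range(n_sim):
        x, y = draw(lam)
        p_mc = stats.ttest_ind(x, y, equal_var=False).pvalue
        p_au = stats.mannwhitneyu(x, y, alternative="two-sided").pvalue
        resid = np.concatenate([x - x.mean(), y - y.mean()])
        w_stat, p_sw = stats.shapiro(resid)        # MS test on pooled residuals
        p_comb = p_mc if p_sw > alpha_ms else p_au
        power["MC"] += p_mc < alpha
        power["AU"] += p_au < alpha
        power["combined"] += p_comb < alpha
    print(round(lam, 1), {k: round(v / n_sim, 3) for k, v in power.items()})
```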
Before stating a general result, here are some words on the relevance of such a setup. Obviously it is not realistic that only two distributions are possible, one of which fulfills the model assumptions of the MC test. We wanted to keep the setup simple, but of course one could look at mixtures of a wider range of distributions, even a continuous range (for example for ratios between group-wise variances). In any case, the setup is more flexible than looking at λ = 0 and λ = 1 only, which is what has been done in the literature up to now. In real research, is there something like a probability λ that model assumptions will be fulfilled? Of course model assumptions will never hold precisely, but the idea seems appealing to us that a researcher in a certain field who very often applies certain tests comes across a certain percentage different from 0 or 1 of cases which are well-behaved in the sense that a certain model assumption is a good if not perfect description of what is going on (the setup has a certain Bayesian flavor, but the researcher may not be interested in priors or posteriors for λ because the proportion λ under such an interpretation is pieced together from situations concerning different research topics).
We use the notation from Section 3 with the following additions. $P_\lambda$ stands for the distribution of the overall two-step experiment, i.e., first selecting either $\tilde{P} = P_\theta$ or $\tilde{P} = Q$ with probabilities $\lambda$ and $1 - \lambda$, respectively, and then generating a dataset z from $\tilde{P}$. The events of rejection of the respective null hypotheses by the MC, AU, and MS tests are denoted $R_{MC}$, $R_{AU}$, and $R_{MS}$. Here are some assumptions: (I) $P_\theta(R_{MC}) > P_\theta(R_{AU})$; (II) $Q(R_{AU}) > Q(R_{MC})$; (III) $Q(R_{MS}) > P_\theta(R_{MS})$; (IV) both $R_{MC}$ and $R_{AU}$ are independent of $R_{MS}$ under both $P_\theta$ and Q.
Keep in mind that this is about power, i.e., the $H_0$ of the main test is violated for both $P_\theta$ and Q. Assumption (I) means that the MC test has the better power under $P_\theta$, and (II) means that the AU test has the better power under Q. Assumption (III) means that the MS test has some use, i.e., it has a certain (possibly weak) ability to distinguish between $P_\theta$ and Q. All these are essential requirements for preliminary model assumption testing to make sense. Assumption (IV), though, is very restrictive. It asks that rejection of the main null hypothesis by both main tests is independent of the decision made by the MS test. This is unrealistic in most situations. However, it can be relaxed (at the price of a more tedious proof that we do not present here) to demanding that there is a small enough $\delta > 0$ (dependent on the involved probabilities) so that $|P_\theta(R_{MC} \mid R_{MS}) - P_\theta(R_{MC} \mid R_{MS}^c)|$, $|P_\theta(R_{AU} \mid R_{MS}) - P_\theta(R_{AU} \mid R_{MS}^c)|$, $|Q(R_{MC} \mid R_{MS}) - Q(R_{MC} \mid R_{MS}^c)|$, and $|Q(R_{AU} \mid R_{MS}) - Q(R_{AU} \mid R_{MS}^c)|$ are all smaller than $\delta$, which can be fulfilled in many cases of interest. As emphasized earlier, approximate independence of the MS test and the main tests is an important desirable feature of a combined test, and it should not surprise that a condition of this kind is required.
The following Lemma states that the combined procedure has a better power than both the MC test and the AU test for at least some $\lambda$. Although this in itself is not a particularly strong result, in many situations, according to our simulations, the range of $\lambda$ for which this holds is quite large. The Lemma also serves to give an idea of the required ingredients, i.e., what is important for the combined procedure to be superior to both the MC and the AU test, which is mainly the approximate independence between the MS test and the main tests. Assumptions (I)-(III) only require that the involved tests roughly do what they are supposed to do (and not even necessarily very well).

[Figure 1: Power of the combined procedure, the MC test, and the AU test across different values of $\lambda$ from an exemplary simulation. The MC test here is Welch's two-sample t-test, the AU test the WMW test, and the MS test Shapiro-Wilk; $\lambda = 1$ corresponds to normal distributions with mean difference 1, $\lambda = 0$ to $t_3$-distributions with mean difference 1.]
Conclusion
Given that statisticians often emphasize that statistical inference relies on model assumptions, and that these need to be checked, the literature investigating this practice is surprisingly critical. Preliminary tests of model assumptions have in many situations been found to strongly affect the characteristics of subsequent inference and to invalidate the theory based on the very model assumptions the approach was meant to secure. In some setups, either running a less constrained test or running the model-based test without preliminary testing has been found superior to the combined procedure involving preliminary MS testing. This is in contrast to a fairly general view among statisticians that model assumptions should be checked; that view is explicitly or implicitly taken in most of the work empirically investigating "correct" or "incorrect" use of statistics in practice, see Section 5. The existence of situations in which performance characteristics rely strongly on whether model assumptions are fulfilled or not has been acknowledged also by authors who were more critical of preliminary testing, and therefore there is a role for model checking. There is, however, little elaboration of its benefits in the literature. One contribution of the present work is the investigation of combined procedures in a setup in which both distributions fulfilling and distributions violating model assumptions can occur. This is more favorable for combined procedures than just looking at either fulfilled or violated model assumptions in isolation, and we believe that it is appropriate, because MS tests are used for distinguishing situations in which the model assumptions are appropriate from those where they are not, and this is only exploited in a setup where both can happen.
Mainly for this reason we believe that overall the literature gives a somewhat too pessimistic assessment of combined procedures involving MS testing, and that model checking (and drawing consequences from the result) is more useful than the literature suggests. The fact that preliminary assumption checking technically violates the assumptions it is meant to secure is probably assessed more negatively from the position that models can and should be "true", whereas it may be a rather mild problem if it is acknowledged that model assumptions, while providing ideal and potentially optimal conditions for the application of model-based procedures, are not necessary conditions for their use.
In any case, this depends on the specific combined procedure and the considered data generating processes. We believe that the focus of model checking is too much on the formal assumptions and not enough on deriving tests that can find the particular violations of model assumptions that are most problematic in terms of level and power. Here is an example from the literature review (Rochon et al., 2012). In terms of power, the two-sample t-test is better than the nonparametric WMW test if the underlying distributions are uniform. This clearly violates the normality assumption of the t-test (despite being asymptotically still correct), and will be picked up by many normality tests. Still it would be a bad decision to use the WMW test instead, even though its assumptions are fulfilled. An optimal combined procedure therefore should involve an MS-test that picks up only those deviations from normality for which the WMW test (or whatever test is chosen as AU test) is actually helpful. The development of MS tests that are better suited for this task and the investigation of the resulting combined procedures is a promising research area. | 2019-08-11T10:02:36.139Z | 2019-08-06T00:00:00.000 | {
"year": 2019,
"sha1": "61de8bc6fb9bb8cb8f06b8cbb4ec9de1474a8a4c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "61de8bc6fb9bb8cb8f06b8cbb4ec9de1474a8a4c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
41734388 | pes2o/s2orc | v3-fos-license | Analysis of aroma compounds and nutrient contents of mabolo (Diospyros blancoi A. DC.), an ethnobotanical fruit of Austronesian Taiwan
Diospyros blancoi A. DC. is an evergreen tree species of high-quality wood. Mabolo, the fruit of this plant, is popular among the indigenous people of Taiwan, but its potential for economic use has not been fully explored. Mabolo has a rich aroma. Of the 39 different volatile compounds isolated, the intact fruit and the peel each contained 24 compounds, whereas the pulp contained 28 compounds. The most important aroma compounds were esters and α-farnesene. Our data show that mabolo is rich in dietary fiber (3.2%), and the contents of other nutrients such as malic acid, vitamin B2, vitamin B3, folic acid, pantothenic acid, and choline chloride were 227.1 mg/100 g, 0.075 mg/100 g, 0.157 mg/100 g, 0.623 mg/100 g, 0.19 mg/100 g, and 62.52 mg/100 g, respectively. Moreover, it is rich in calcium and zinc, the contents of which were found to be 42.8 mg/100 g and 3.6 mg/100 g, respectively. Our results show that D. blancoi has the potential to be bred as a novel fruit crop.
Introduction
Diospyros blancoi, the subject of this study, is an Ebenaceae plant that can grow up to 20 m high or more. Scientific names of different origins refer to this evergreen plant, the three most commonly used being Diospyros discolor Willd., Diospyros philippensis (Desr.) Gürke, and D. blancoi A. DC. Knapp and Gibert [1] regard D. discolor as the single correct name for the plant, whereas in the book Flora of Taiwan [2] it is referred to as D. philippensis. However, as early as 1971, Howard [3] had already legitimized D. blancoi as the plant's scientific name, and thus it is the one we use in this study.
In the Filipino language Tagalog, D. blancoi is called kamagong, and the fruit is known as mabolo, meaning "hairy" [4]. The wood of this plant is extremely dense, hard, and dark in color; consequently, in Taiwan it is known as "Taiwan's black ebony" and considered a valuable timber species. In the Philippines, kamagong timber is also known as "iron wood", since it is durable and considered unbreakable. It is widely distributed in the Philippines [5] and is native to eastern and southern Taiwan [2,6]. On a northeastern Taiwanese island known as Turtle Island, a tree with a diameter at breast height (DBH) of 210 cm was discovered, and this may be the northernmost part of the world where this plant is found. The species, known as mao-shi in Taiwan [2], produces a fruit with a fluffy exterior, the same feature to which the name mabolo refers in the Philippines.
D. blancoi is an important ethnobotanical plant in Austronesian society. In China, the earliest mention of D. blancoi was in "Dong Fan Jì" by Chen [7] in 1603 during the Ming Dynasty. A travel report recorded in Taiwan, the article describes the western coast of Taiwan and the customs of Siraya tribal life, highlighting the consumption of important edible fruits and vegetables such as coconut, bergamot, and sugarcane, as well as mabolo (Fig. 1). The Taiwanese ethnobotanist Han-Wen Zheng [8] mentioned that mabolo is commonly found in the residential courtyards of the Kebalan, Pangcah, Payuan, and Pinuyumayan in eastern and southern Taiwan. The Tao people plant mabolo in private lands and taro fields. In Pinuyumayan culture, the lifecycle of mabolo is used as an indicator for the annual work calendar; for example, weeding should be done while the plant is flowering, millet should be collected when its fruit begins to form, and upland rice should be collected when the fruits are harvested. Moreover, the elderly or sick are honored with the mature fruit [8].
Seedless mabolo has occasionally been found in the field. Only female plants have been found on Turtle Island, which suggests that a lack of pollen causes approximately 90% of the fruit there to be seedless. Seeded fruits contain an average of three seeds, and this relatively low number of seeds is indicative of the existence of parthenocarpic varieties of D. blancoi. Our recent studies show that seedless mabolo can also be produced by artificial induction [9].
D. blancoi maintains a high diversity of fruit size within populations (Fig. 1B) and can be considered a good breeding resource. However, as it exhibits a low level of domestication, selecting elite trees and establishing initial lines that can be used to screen for desirable traits becomes very important. The aims of this study are to highlight the importance of mabolo and to show that D. blancoi can potentially become an important economic fruit [9].
Analysis of aroma compounds
Fresh ripe fruits that had fallen on the ground (random samples) were collected from the Kenting area. The fruit was subjected to three types of analysis: for the first, the intact fruit was used; for the second, the fruit peel (approximately 50 g) was used; and for the third, the pulp was used after the peel and seeds were removed. Each analysis was performed in triplicate. The samples were placed in the sample cylinder and aroma compounds were extracted using the headspace solid-phase micro-extraction (SPME) method for 30 minutes, with a polydimethylsiloxane/divinylbenzene (PDMS/DVB; 65 µm) fiber used for adsorption. Gas chromatography-mass spectrometry (GC-MS) was performed with the Thermo GC Focus Series and Trace DSQ-MS (Thermo Fisher Scientific Inc., Waltham, MA, USA). A BPX5 column (30 m length, 0.25 mm inside diameter, 0.25 µm film thickness; Thomas Scientific Inc., Swedesboro, NJ, USA) was used with helium (99.999%) as carrier gas at a flow rate of 1.0 mL/min, and the injector was set in splitless mode. The injection port was set at 220 °C, the MS interface was maintained at 210 °C, and the ion trap was set at 200 °C. The oven temperature was programmed from 55 °C to 130 °C at a rate of 3 °C/min, then from 130 °C to 210 °C at a rate of 2 °C/min, and finally held at 210 °C for an additional 2 minutes. The electron impact (EI) ionization energy was 70 eV and the scanning range was between 33 m/z and 400 m/z. The mass spectra obtained were compared with those in the NIST 02 mass spectral database (National Institute of Standards and Technology, Washington, DC, USA). A mass spectral match above 850 and a probability of 10% or more were the basis for the identification of compounds. The concentrations of compounds were calculated from the peak areas of the chromatogram. The process was repeated, and if the variation was more than 10%, the samples were resampled and the data re-analyzed until consistent results were obtained.
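As a rough illustration of the relative quantification and replication rule described above (the peak areas below are hypothetical placeholders, not chromatograms from this study), the percentage of each compound can be computed from its share of the total peak area, and replicate runs accepted only when they agree within 10%:

# Sketch of the relative-quantification and 10% consistency rule described
# above. Peak areas are hypothetical placeholders, not data from this study.

def relative_composition(peak_areas):
    """Convert chromatogram peak areas into percentages of total volatiles."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

def replicates_consistent(run_a, run_b, tolerance=0.10):
    """Accept two replicate runs only if every compound differs by at most 10% (relative)."""
    for name, a in run_a.items():
        b = run_b.get(name, 0.0)
        if a > 0 and abs(a - b) / a > tolerance:
            return False
    return True

run1 = relative_composition({"hexyl butyrate": 2937, "benzyl butyrate": 1508, "alpha-farnesene": 1362})
run2 = relative_composition({"hexyl butyrate": 2880, "benzyl butyrate": 1540, "alpha-farnesene": 1390})
print(run1)
print("replicates consistent:", replicates_consistent(run1, run2))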
Analysis of nutrient contents
Thirty fruits were collected as samples from the Kenting area (Taiwan), and the pulp of each fruit was weighed after peeling and removal of the seeds. The dry weight of the pulp was determined after freeze-drying for 1 week, before the pulp was powdered and stored at −80 °C. The analysis was repeated; if the variation between the two measurements was within 10%, their average was taken as the result, and if the variation was more than 10%, the process was continued until a consistent result was obtained. The following analyses were performed according to established protocols with some modifications: moisture content = (fresh weight − dry weight) × 100%; calories = 9 × (crude fat) + 4 × (crude protein) + 4 × (carbohydrate); carbohydrate = 100 − moisture − ash − crude protein − crude fat; and calories from fat = 9 × (crude fat).
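As a minimal sketch of the proximate-composition arithmetic given above (the input values are illustrative placeholders, not measurements from this study, and the moisture formula is read here as a percentage of fresh weight):

# Sketch of the proximate-composition formulas quoted above.
# Input values (g per 100 g fresh weight) are illustrative placeholders.

def proximate_composition(fresh_weight, dry_weight, ash, crude_protein, crude_fat):
    moisture = (fresh_weight - dry_weight) / fresh_weight * 100.0          # moisture, %
    carbohydrate = 100.0 - moisture - ash - crude_protein - crude_fat      # by difference
    calories = 9.0 * crude_fat + 4.0 * crude_protein + 4.0 * carbohydrate  # kcal/100 g
    return {
        "moisture_%": moisture,
        "carbohydrate_g": carbohydrate,
        "calories_kcal": calories,
        "calories_from_fat_kcal": 9.0 * crude_fat,
    }

print(proximate_composition(fresh_weight=100.0, dry_weight=25.0,
                            ash=0.8, crude_protein=0.9, crude_fat=0.3))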
Results and discussion
Collectively, these compounds constituted 90.27% of the total content of volatile compounds. Twenty-eight volatile compounds were detected in the pulp, of which the six major compounds (each constituting > 5% of the total content of volatile compounds) were hexyl butyrate (29.37%), benzyl butyrate (15.08%), α-farnesene (13.62%), phenylethyl butyrate (9.24%), hexyl hexanoate (8.33%), and butyl benzoate (6.39%). Collectively, these compounds constituted 82.03% of the total content of volatile compounds. The main aroma compounds in the intact fruit, pulp, and peel were esters and α-farnesene. Esters are the most important volatile compounds not only of mabolo but of many other fruits as well: for example, esters accounted for 78–92% of the total volatile compounds of apples [27], and 33 of the 58 volatile compounds identified in passion fruit juice were esters [28].
Mabolo has a rich aroma [29–35]. The strong aroma, which comes from the peel, is described by Smith and Oliveros-Belardo [29] as a smell that can increase appetite, while Morton [4] describes it as a cheese-like odor. As the strong aroma is not preferred by some people, it is recommended that the skin be peeled and the fruit placed in the refrigerator for several hours before consumption. Selection of a line with a less intense aroma is therefore important in order to cater to public taste. Morton [4] suggested that strains producing fruit with purple skin have better flavor. The fresh, sweet aroma of ripe mabolo soon after the fruit has fallen to the ground turns unpleasant as time progresses; Smith and Oliveros-Belardo [29] suggest that free butyric acid is the source of this unpleasant smell.
Collins and Halim [33] used steam distillation to obtain essential oils from mabolo and, after analyzing the oils by GC, infrared spectroscopy (IR), and MS, identified 24 different volatile compounds. Smith and Oliveros-Belardo [29] analyzed the petroleum ether extract of mabolo and identified five main constituents, including benzyl salicylate (26.9%), benzyl benzoate (19.2%), cinnamyl benzoate (10.3%), butyl benzoate (6.0%), and benzyl butyrate (4.1%). Wong et al [34] analyzed the dichloromethane extract of mabolo and identified 67 compounds by GC and GC-MS analysis. Esters constituted 88.6% of the total volatile content, the most abundant esters being methyl butyrate (32.9%), ethyl butyrate (10.7%), butyl butyrate (10.2%), and benzyl butyrate (10.0%). When these results are compared with those of Smith and Oliveros-Belardo [29], they are found to vary depending on the analysis method used. Using GC and GC-MS analysis, Pino et al [35] identified 96 different aroma components by distillation of the pulp, including benzyl butyrate (33.9% of the total composition), butyl butyrate (12.5%), and (E)-cinnamyl butyrate (6.8%). This result was similar to that obtained by Wong et al [34]. The main constituents, butyl benzoate and benzyl butyrate, were the same, whereas compared with the result of Smith and Oliveros-Belardo [29] only benzyl butyrate was in common. In this study, we used headspace SPME with the fiber at room temperature, combined with GC-MS for analysis. The four important aroma components in the intact fruit, pulp, and peel were butyl benzoate, hexyl butyrate, benzyl butyrate, and α-farnesene. Hexyl butyrate and α-farnesene have not been reported previously, which indicates variability in the results due to the different extraction methods used. The detection of hexyl butyrate and benzyl butyrate indicates that mabolo contains butyric acid esters, and Smith and Oliveros-Belardo [29] have suggested that the degradation of free butyric acid is the source of the unpleasant smell; this needs to be confirmed by further studies.
Analysis of nutrient contents
The nutrients quantified from mabolo included propionic acid, acetic acid, tartaric acid, succinic acid, and formic acid. The nutrients in mabolo have previously been analyzed in the Philippines and in India, and mabolo was found to be an excellent source of iron, calcium, and vitamin B complex [4]. In our study, we found that the calcium content was 42.8 mg/100 g, which is nearly half of that obtained from milk (95.0 mg/100 g; Table 2). We could not detect iron in our analysis, as the iron content was below the detection limit (0.2 ppm). The notion that mabolo is a source of iron could simply be folklore of the Philippines or India; in other words, the author of the cited literature may only have relied on an ethnobotanical record. If that possibility is excluded, the differences between our results and those in the literature may be attributed to sample contamination, differences in detection methods, different soil conditions for plant growth, or even differences between strains; this needs further study for confirmation. Vitamin B2 (0.075 mg/100 g), vitamin B3 (0.157 mg/100 g), folic acid (0.623 mg/100 g), pantothenic acid (0.19 mg/100 g), and choline chloride (62.52 mg/100 g) were all detected in mabolo, showing the fruit to be an excellent source of vitamin B complex. Many B vitamins are involved in homocysteine metabolism, and hyperhomocysteinemia has been associated with cardiovascular disease and other age-related diseases [38]. Moreover, mabolo is rich in malic acid and zinc. Malic acid can be directly absorbed by the body to provide immediate energy; it also aids recovery from fatigue and is often used to relieve muscle aches, enhance strength, and protect the liver, heart, and skin [39]. Our data show that mabolo contains high levels of zinc (3.6 mg/100 g), so 340–420 g of mabolo can provide the recommended daily allowance of zinc [40]. Zinc is beneficial for health, as it is required for the functioning of many enzymes and is an essential nutrient for the maintenance of normal gonadal function and for neurogenesis, synaptogenesis, and neuronal growth [41,42]. Taking these results into account and considering the contents of vitamin B complex, malic acid, calcium, and zinc in mabolo, we can now appreciate why the aboriginal Pinuyumayan honor the elderly with mabolo or offer it to patients. Moreover, we found that the dietary fiber content of mabolo was 3.2 g/100 g, higher than that of the "Red-Delicious" apple (1.6 g/100 g), the "Fuji" apple (1.2 g/100 g), and the pollination-constant non-astringent (PCNA) persimmon (1.3 g/100 g; Table 2).
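To illustrate the zinc estimate quoted above, the amount of fruit needed to meet a daily allowance follows directly from the measured content of 3.6 mg/100 g; the allowance range used below (about 12–15 mg) is an assumption for illustration and should be checked against current recommendations:

# Back-of-the-envelope zinc calculation behind the 340-420 g figure above.
# The daily-allowance range (12-15 mg) is an assumption for illustration.

zinc_per_100g = 3.6  # mg zinc per 100 g mabolo pulp (measured in this study)

for rda_mg in (12.0, 15.0):
    grams_needed = rda_mg / zinc_per_100g * 100.0
    print(f"allowance {rda_mg} mg -> about {grams_needed:.0f} g of mabolo")
# Gives roughly 330-420 g, consistent with the 340-420 g quoted in the text.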
In this study, we measured tannins by Folin-Ciocalteu colorimetry; the data obtained can also be interpreted as the total polyphenol content. Although tannins in mature mabolo are condensed and insoluble, they can still be measured. The total polyphenol content increased from 69.2 mg/100 g to 155.3 mg/100 g as the fruit matured and turned from light green to dark green, and the content in ripe mabolo was 213.3 mg/100 g. Phenolic compounds are potent natural antioxidants, which decrease the generation of reactive oxygen species and scavenge free radicals. Phenolic compounds (phenolic acids, polyphenols, and flavonoids) have been used as antioxidants by humans [43]. They also possess anti-inflammatory and anti-cancer properties, thus aiding the prevention of many diseases [44].
Conclusion
Many ethnobotanical plants and undomesticated wild plants have high value for human use in that they can provide new food sources, such as an excellent edible oil [45] or a vegetable used as a nutritional supplement [46]. Many functional foods and medicinal ingredients are also developed from such plants. In recent studies, a number of plant species containing antioxidants such as polyphenols and flavonoids have been reported [47–53], and plants with other effects, such as antitumor [48,54], antibacterial [48], anti-inflammatory [46,52], liver-protective [53,55], and immunomodulatory effects [47], or even anti-acetylcholinesterase activity [50], have been screened in succession. Plants with such development potential deserve considerable research on their economic use; this includes mabolo, as it contains nutrients good for health and can be eaten directly as a vegetable or fruit. For vegetables and fruits of further use, it is important that we clearly know their chemical composition and potential biological properties [46]. Compared with the common persimmon, mabolo has a rich aroma and a high nutritive value that is good for human health. We have started this research project and publish these preliminary results at an initial stage to engender the interest of researchers around the world and speed up the use of D. blancoi as an important economic fruit tree.
Conflicts of interest
All contributing authors declare no conflicts of interest. | 2018-04-03T05:08:49.603Z | 2015-10-16T00:00:00.000 | {
"year": 2015,
"sha1": "fbe66cd9dffd4c8bd8a8e56150f6310b80e6d052",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jfda.2015.08.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16914a9060eddcc30befb9c06279a52c908348a1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
52228732 | pes2o/s2orc | v3-fos-license | Control of Upper Limb Active Prosthesis Using Surface Electromyography
— Electromyographic prosthesis with higher degrees of freedom is an expanding area of research. In this paper, an active prosthesis with four degrees of freedom has been investigated, which can be used to fit a limb with amputation below the elbow. The system comprises multichannel inputs corresponding to flexion and extension as well as supination and pronation. To find maximum surface neural activity, accurate placement of electrodes has been carried out on 10 subjects aged between 22 and 30 years. Signals (0–500 Hz) were acquired from voluntarily contracting muscles with minimum cross-talk and common-mode noise. The clean, filtered EMG signal is then amplified precisely and finally digitized to drive the bionic hand. A practical demonstration on a simple DC motor using this method proved promising for two motions of an actual human arm. EMG signals emanating from muscles dedicated to individual fingers have been recorded. Moreover, modern classifiers, KNN and NN, have been investigated carefully with selected features at different time and noise levels.
I. INTRODUCTION
The field of study that deals with the detection (from needle, cup, and surface electrodes), analysis (for example with a PicoScope), and use of electrical muscle signals (for active prosthesis) is known as electromyography (EMG). It involves techniques for recording and analyzing the surface activity produced by the electric potentials of skeletal muscles [1]. The device that records EMG signals is called an electromyograph. An electromyograph detects the electrical potential generated by muscle cells when these are contracted or neurologically activated [2].
To develop innovative upper limb prostheses, the real challenge is to make the device intuitive, intelligent, and human-like without losing functionality or dexterity [3]. The EMG surface activity stimulated by voluntary contraction of the targeted muscle is taken as the intention of the myoelectric prosthesis user [4]. Most commonly, the Mean Absolute Value (MAV) is used to determine the intention of the user, in which the absolute value of the EMG signal is compared with a predetermined threshold value.
The human body contains muscles composed of fibers that carry motor points. When activated, these points generate motor point action potentials. A motor unit (MU) is defined as an anterior horn cell, its axon, and the muscle fibers innervated by the motor neuron [5]. A motor unit action potential (MUAP) is a train of pulses, or the summation of a group of muscle fiber action potentials (MFAPs), in which each MFAP carries superimposed information about the muscle and the generated pulses. As long as force is maintained or increased, the motor unit generates pulses continuously and the muscle contracts as a result [6]. Motor points remain activated as long as the muscle is contracted, and this continuous activation of motor points superimposes to form the EMG signal. When the muscle exerts more force, a greater number of motor points are activated, so lifting a heavy weight fires more motor points than lifting a lighter weight. Many factors affect the EMG signals emanating from the muscles; important aspects are the type of muscle contraction that is occurring and the type of electrode used for detection. Judicious application of recognized principles that accurately identify the innervation zone, bound the effects of noise and cross-talk, keep the signal stable, and normalize its amplitude further enhances the signal by removing the effect of many other variables [7]. The EMG signal can be used as an indicator of muscle contraction, of the force produced when the muscle is activated, and of the fatigue of the targeted muscle.
Advanced electronics and microcontroller-based designs have brought a new revolution in active prosthesis, giving more degrees of freedom to the designer. More movements and muscle activations can be controlled with the help of filtering algorithms, giving more functionality and maneuverability to the user.
II. MEASURING DEVICES
Owing to its vast biomedical applications, EMG has become an area of interest for many researchers working in the field of prosthetics [8]. Non-invasive techniques are the most desirable and applicable approach for measuring EMG signals with electrodes, which can be categorized into dry and gel-type electrodes [9].
A gelled electrode contains gel between the skin and the measuring electrode. Gelled electrodes are mostly disposable and are not very practical, as they cannot be used for long periods of time. Dry electrodes contain no such medium and can be used for longer periods, which makes them ideal for active prosthesis.
Dry electrodes are further divided into two types: active and passive. Passive electrodes do not require energy or current for their activation. Active electrodes require energy or current for their activation and often have high input impedance and pre-amplification circuitry attached. It is important to prepare test subjects properly before testing [10]: the skin surface should be clean, hair should be removed to avoid artifacts, and the electrode should be positioned securely so that it holds its place. Gelled electrodes were used in this research with self-designed circuitry.
III. SIGNAL ACQUISITION AND POSITIONING OF EMG ELECTRODES
Positioning of the EMG electrodes and acquisition of the required signal are important features of active prosthesis. The gelled electrodes used in the study were carefully placed on the belly of the muscle, 1–2 cm from each other. The established location for the electrodes is between the innervation zone and the tendinous insertion [11].
Noise reduction is performed to further enhance the acquired signal, which is then sent to an instrumentation amplifier for amplification. A reference signal must also be acquired, which should be electrically isolated and distant from the targeted muscle; the most desirable place for the reference electrode is the neck, or any site unrelated to the muscles of the forearm.
Placement of the electrodes on the targeted muscles is critical; a small deviation from the motor point can have a drastic effect on the amplitude of the signal. For flexion of the hand, the flexor digitorum profundus, which fans out into four tendons connected to each of the four fingers except the thumb, is the preferred muscle for electrode placement; for extension, the electrode should be on the extensor digitorum communis. Individual signals from each finger have also been observed by placing an electrode on the muscle associated with each finger.
Figure 1. General placement of surface EMG and reference electrodes.
For flexion of the thumb, the preferred electrode location is on the flexor pollicis longus; for its extension, the electrode should be placed on the extensor pollicis longus. For the pinkie, middle, and ring fingers, electrodes should be placed precisely on the flexor carpi ulnaris, flexor carpi radialis, and palmaris longus, respectively. Similarly, for the extension of these fingers, the electrode should be on the extensor carpi ulnaris.
The index finger is the most dexterous and sensitive finger, controlling several motions of the hand; for its flexion, the flexor digitorum superficialis is the preferred electrode location, and for extension the recommended muscle is the extensor indicis.
IV. NOISE REDUCTION TECHNIQUE
The EMG signal has a very low signal-to-noise ratio (SNR). Many factors bring about these disturbances, and a major portion of the noise consists of cardiac artifacts, that is, electrocardiographic (ECG) activity. The non-stationary nature of EMG signals keeps the amplitude ratio between EMG signals and cardiac artifacts variable.
A signal processing technique based on a finite impulse response (FIR) adaptive filter can be employed to reduce noise, in which a multi-electrode array is used for signal acquisition [12]. In this method, referencing is done with respect to the ECG signal, and the adaptive filter is then applied to reduce power line disturbances.
A. Noise Reference Estimation
The signal Z(t) acquired at the surface of the skin is composed of the EMG signal and the cardiac (ECG) artifact; a filtering stage (cut-off around 40 Hz) is then applied to the summed signal. As the ECG and EMG signals have no correlation, the above expression simplifies accordingly, yielding an estimate of the noise reference.
B. Adaptive Filtering
Power line interference is one of the major causes of significant degradation in EMG signal quality [13]. An adaptive filter can be very helpful in attenuating these interferences.
The raw EMG signal contains power line interference at around 50–60 Hz. If the characteristics of the noise are known, a filter can be designed to reduce that noise with high efficiency [14].
With the help of a noise estimate, the influence of noise on the acquired signal can be minimized; the noise estimate can be obtained with a finite impulse response (FIR) adaptive filter, to which a sample of the noise is given as reference input.
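As a minimal sketch of the FIR adaptive filtering idea described in this section (the filter length, step size, and test signals are illustrative assumptions, not the parameters actually used here), a least-mean-squares (LMS) noise canceller subtracts an adaptively filtered noise reference from the recorded signal:

# Minimal LMS adaptive noise canceller illustrating the FIR adaptive filtering
# idea described above. All parameters and signals are illustrative assumptions.
import numpy as np

def lms_cancel(primary, noise_ref, n_taps=16, mu=0.01):
    """Subtract an adaptively filtered noise reference from the primary signal."""
    w = np.zeros(n_taps)                       # FIR filter weights
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = noise_ref[n - n_taps:n][::-1]      # most recent reference samples
        y = np.dot(w, x)                       # current estimate of the interference
        e = primary[n] - y                     # error = cleaned EMG estimate
        w += 2 * mu * e * x                    # LMS weight update
        cleaned[n] = e
    return cleaned

# Illustrative test: EMG-like activity corrupted by 50 Hz power-line interference.
fs = 1000
t = np.arange(0, 2, 1 / fs)
emg = 0.5 * np.random.randn(len(t))
recorded = emg + 1.0 * np.sin(2 * np.pi * 50 * t)
reference = np.sin(2 * np.pi * 50 * t + 0.3)   # separately sensed interference sample
cleaned = lms_cancel(recorded, reference)
print("residual interference power:", np.var(cleaned[200:] - emg[200:]))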
V. MODERN CLASSIFICATION TECHNIQUES
After amplification, classification of the EMG signals is performed to identify the user's intended motion. Many authors have investigated these classification strategies; Sebelius et al. [15] and Paul et al. [16] are among those who faced and investigated the issue of real-time implementation of an artificial active prosthesis. Segmentation in accordance with flexion of the dedicated muscle, pattern recognition, feature extraction, classification of signals, and simulation of an actual prosthesis were the main subjects of study [17]. Here, Neural Network and K nearest neighbor classifiers have been studied.
A. Neural Network Classifier
In the recent past, much of the work has been done on multichannel signal processing. Electromyographic signals from a multi-channel data acquisition system increase classification efficiency and accuracy, but with diminishing returns if the number of channels is increased to four or more [18].
Many researchers have chosen the multi-channel approach, in which multiple electrodes are used and specific functions are performed only with designated electrodes, while others have moved ahead by leaving this strategy aside. Furthermore, the number of classes can be increased, although it is understood that the accuracy will then decrease, because as the output data flowing through the different channels increase, the quality of the signal is affected through its feature space. Increasing the number of channels therefore affects the feature space associated with each class [19]. A back-propagation neural network is a solution to this problem, in which EMG signals are acquired for the previously defined hand movements, which can be flexion, extension, supination, pronation, or others. Calculated time- and frequency-based parameters can be used as inputs to this classifier, such as the wavelet transform, moving average, autoregression coefficients, root mean square, fast Fourier transform, variance, standard deviation, slope sign change, Willison amplitude, zero crossings, and waveform length. Selection of these features is the most important part of the neural approach: selecting relevant features produces patterns that can then be classified more easily.
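The time-domain features named above can be computed from a windowed EMG signal; the short sketch below illustrates a few of them (MAV, RMS, variance, zero crossings, slope sign changes, waveform length, and Willison amplitude), with thresholds and the example window chosen arbitrarily for illustration:

# Sketch of common time-domain EMG features named above. Threshold values and
# the example window are illustrative assumptions, not the study's settings.
import numpy as np

def emg_features(x, zc_thresh=0.01, ssc_thresh=0.01, wamp_thresh=0.05):
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    return {
        "MAV": np.mean(np.abs(x)),                       # mean absolute value
        "RMS": np.sqrt(np.mean(x ** 2)),                 # root mean square
        "VAR": np.var(x),                                # variance
        "WL": np.sum(np.abs(dx)),                        # waveform length
        "ZC": int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > zc_thresh))),
        "SSC": int(np.sum((dx[:-1] * dx[1:] < 0) &
                          ((np.abs(dx[:-1]) > ssc_thresh) | (np.abs(dx[1:]) > ssc_thresh)))),
        "WAMP": int(np.sum(np.abs(dx) > wamp_thresh)),   # Willison amplitude
    }

window = 0.2 * np.random.randn(256)    # stand-in for one 256-sample EMG window
print(emg_features(window))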
B. K Nearest Neighbor Classifier
The Neural Network classifier is considered slow and time consuming. The K nearest neighbor classifier can be used to obtain accurate results in a shorter span of time. In this classification technique, reference vectors from all the required motions are used to calculate the distance to the input vector of the present state [20]. KNN assigns to an unknown event the class represented by the majority of its nearest neighbors. Assignment of the class is based on the patterns nearest to the measured input, determined on the basis of Euclidean distance. A segment is labeled with the class most frequently represented among its K nearest neighbors; the final decision is made by taking a vote and examining the labels. KNN employs a discriminative approach, which is more suitable when reliable probability densities are difficult to estimate.
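A minimal sketch of this majority-vote scheme is given below; the stored reference feature vectors and class labels are illustrative placeholders, not recordings from this study:

# Minimal k-nearest-neighbour classifier of the kind described above:
# Euclidean distance to stored reference feature vectors, then a majority vote.
import numpy as np
from collections import Counter

def knn_predict(reference_X, reference_y, query, k=3):
    dists = np.linalg.norm(reference_X - query, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                        # indices of the k nearest
    votes = Counter(reference_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy reference set: two features (e.g. MAV and zero crossings) per labelled window.
X = np.array([[0.10, 5.0], [0.12, 6.0], [0.45, 20.0], [0.50, 22.0]])
y = ["flexion", "flexion", "extension", "extension"]
print(knn_predict(X, y, np.array([0.48, 21.0]), k=3))      # expected: extension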
VI. EXPERIMENTATION AND PROCEDURE
The signal acquired from the electrodes has a frequency range of 0–500 Hz and contains noise from different sources such as cross-talk, artifacts, and, above all, power line sources. This noise lies in the 50–60 Hz range and has to be removed before amplification. The signal is acquired from two electrodes placed 1–2 cm apart; the signal common to both is rejected with the help of a differential amplifier. The amplitude of the acquired signal ranges from 0 to 20 mV.
Amplification is one of the most important steps in active prosthesis and is done with the help of an instrumentation amplifier; this amplification can be achieved with the INA121, an IC with vast applications in the biomedical field that can amplify up to 10,000 times [21]. Once the EMG signals have been detected through the sensing electrodes, a differential technique is employed with the help of an operational amplifier as the first amplification step [22]. A differential amplifier is a device that rejects the signal common to both of its inputs (Figure 3). After the above procedure, an analog signal is obtained whose frequency ranges from 50 to 200 Hz and whose amplitude varies from 0 to 5 V. This analog signal has to be converted into a digital signal with the help of an Analog to Digital Converter (ADC), which is commonly used in modern electronics. During digitization, the resolution, the range of conversion, and the sampling rate have to be kept in mind. The maximum voltage that an ADC can convert into digital format is known as the range of conversion. The sampling rate is kept high to minimize loss of data, and the dynamic range of conversion is kept high, which keeps the required amplification output small.
In this study, a 16-bit ADC has been used, which comes as a peripheral with the ATmega16 microcontroller [23]. It has multiple channels and an on-chip two-cycle multiplier. This fully meets our requirement for an amplified signal between 1 and 4.8 V. As flexion and extension of the hand take place, the amplified signal is fed into the microcontroller for further digitization and motor control. The ADC converts the amplified analog signal to digital format, with the ADC reference set to 5 V.
A thresholding technique is applied to control the different movements. Peak values are measured from each targeted muscle and a threshold value is selected; the threshold must be 2 V less than the peak value in the case of flexion and extension. As the targeted muscles fire, the motor units are excited and the signal is detected by the gelled electrodes; pre-amplification is done with the differential amplifier, noise is reduced using a high-pass filter, amplification is done with the instrumentation amplifier, and the ADC converts the amplified analog signal (between 0 and 5 V) into a digital signal and sends it for further processing.
A comparison is made between the threshold value and the system value; when the signal passes the threshold, the output is set to 1, which moves the motor in the desired direction. The same procedure can be repeated for the desired motions of supination and pronation by assigning different motor points on the targeted muscles. With accurate placement of electrodes, we are able to differentiate between the motions of all the fingers, giving more functionality as well as more degrees of freedom to the patient. Some experimental results were obtained after finger classification with the help of gelled electrodes. The amplitude of the EMG signal emanating from the belly of a muscle varies from person to person: a muscular person produces an EMG signal of higher amplitude than an average person. When superimposed signals are acquired, the amplitude from the contracting muscles is higher than from an isolated muscle, which holds for flexion and extension; the amplitude is lower when the electrode is placed on the dedicated muscle for each finger. The present study was carried out on subjects of normal build.
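The decision logic described above amounts to converting each ADC reading to a voltage and comparing it with a per-muscle threshold set 2 V below the measured peak; the sketch below illustrates this, with the resolution, peak value, and sample readings assumed for illustration and the microcontroller register and pin details omitted:

# Sketch of the ADC-plus-threshold logic described above. The 10-bit resolution,
# peak value, and sample readings are assumptions for illustration only.

V_REF = 5.0                       # ADC reference voltage, as stated in the text
ADC_BITS = 10                     # assumed resolution for this sketch
PEAK_FLEXION = 4.6                # illustrative measured peak for the flexor channel
THRESHOLD = PEAK_FLEXION - 2.0    # threshold set 2 V below the peak, per the text

def adc_to_volts(reading):
    return reading * V_REF / (2 ** ADC_BITS - 1)

def motor_command(adc_reading):
    """Return 1 (drive the motor) when the amplified EMG exceeds the threshold."""
    return 1 if adc_to_volts(adc_reading) > THRESHOLD else 0

for reading in (200, 600, 950):
    print(reading, round(adc_to_volts(reading), 2), motor_command(reading))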
VII. CONCLUSION
A novel method for flexion and extension of a mechanical arm has been successfully achieved with the help of EMG. Electrode placement on the targeted muscle is necessary to obtain a clean signal. During the study it was revealed that each muscle forms a set pattern of signal whenever flexion or extension takes place. For extracting statistical features from EMG signals and classifying the motions with the described classifiers, different wavelet functions can be used to enhance the classification rate. The amplitude of the EMG signal varies as the number of firing muscles increases or decreases. This study is a step forward towards achieving an active prosthesis that is not only lightweight and cost effective but also a successful replacement after upper limb amputation. Noise reduction techniques have been emphasized for future work. | 2018-09-05T15:09:36.010Z | 2021-11-27T00:00:00.000 | {
"year": 2021,
"sha1": "2a5fecbc2521e855f1bf5884874494872f124641",
"oa_license": null,
"oa_url": "https://doi.org/10.46300/9102.2021.15.17",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8939a36b0fe5a4267c543685aedb0579e3b37e20",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Engineering"
]
} |
1644611 | pes2o/s2orc | v3-fos-license | Completeness of birth and death registration in a rural area of South Africa: the Agincourt health and demographic surveillance, 1992–2014
Background: Completeness of vital registration remains very low in sub-Saharan Africa, especially in rural areas.
Objectives: To investigate trends and factors in completeness of birth and death registration in Agincourt, a rural area of South Africa covering a population of about 110,000 persons, under demographic surveillance since 1992.
Design: Statistical analysis of birth and death registration over time in a 22-year perspective (1992–2014). Over this period, major efforts were made by the government of South Africa to improve vital registration. Factors associated with completeness of registration were investigated using univariate and multivariate analysis.
Results: Birth registration was very incomplete at onset (7.8% in 1992) and reached high values at end point (90.5% in 2014). Likewise, death registration was low at onset (51.4% in 1992), also reaching high values at end point (97.1% in 2014). For births, the main factors were mother's age (much lower completeness among births to adolescent mothers), refugee status, and household wealth. For deaths, the major factors were age at death (lower completeness among under-five children), refugee status, and household wealth. Completeness increased for all demographic and socioeconomic categories studied and is likely to approach 100% in the future if trends continue at this speed.
Conclusion: Reaching high values in the completeness of birth and death registration was achieved by excellent organization of the civil registration and vital statistics, a variety of financial incentives, strong involvement of health personnel, and wide-scale information and advocacy campaigns by the South African government.
Introduction
The registration of vital events (births and deaths) is a crucial element of modern life, as it determines many rights and duties in modern societies and is necessary for administration and development planning. Vital registration is also an important source of data for demographic and public health research. In fact, many of the early investigations of population and health dynamics (fertility and mortality) and patterns (age and sex) were based on long-term time series of vital statistics. Compulsory registration of births and deaths has a long history in Europe, going back to the Middle Ages through parish registers and since the eighteenth century for civil registration (1,2). Vital registration has been complete since the early nineteenth century in Western Europe and since the early twentieth century in most industrialized countries (North America, Australia, New Zealand, Japan, Russia, etc.). It is also nearly complete in many Latin American countries and in selected Asian countries but remains incomplete in many other developing countries. Thanks to numerous initiatives from a variety of institutions (UNFPA, bilateral aid agencies, the African Union, Bloomberg, etc.), many projects were developed in the past decade to improve civil registration and vital statistics (CRVS) in areas where completeness is still low (3–10).
Africa is a special case with respect to vital registration (11–15). Only a few African countries have maintained a complete or near-complete vital registration for a long time. This is particularly the case for islands (Mauritius, Reunion, Seychelles, and Sao Tome and Principe) and to a certain extent for some North African countries (Egypt). In sub-Saharan Africa, there are examples of near-complete vital registration in selected populations, in particular in capital cities (Dakar, Brazzaville, Antananarivo) and among selected groups (such as white Europeans in South Africa). Otherwise, the registration of births and deaths has remained very low for most African populations, despite many laws passed and many attempts to improve registration (16–20). However, this situation is changing rapidly in Southern Africa, where major efforts have taken place in recent years to reach near-complete registration of births and deaths. This is the case in South Africa, as well as in nearby Namibia and Botswana, who embarked on large programs to improve vital registration (15).
There is a long history of vital registration in South Africa but with serious discrepancies by population group and geographical area. The first law for compulsory birth and death registration was passed in 1867, but it affected primarily the white European population of the Cape Colony. The 1923 Births, Deaths and Marriages Registration Act made compulsory the registration of vital events for all persons living in urban areas, registration being left voluntary for those living in rural areas. During the apartheid years (1948–1991), vital registration in the homelands was left to the homeland administrations and remained very deficient in these areas. The situation changed with the dismantlement of petty apartheid in 1986, and with the 1992 Births and Deaths Registration Act, the registration of all births and deaths to South African citizens and permanent residents became compulsory. These legal events, together with the strong will of the post-apartheid government and the reorganization of the civil registration system, radically changed the situation of vital registration in the country (21–26).
Estimating the completeness of birth and death registration in South Africa is difficult for several reasons. First, the denominator, that is, the precise number of total births and deaths that occur in the country, is controversial, and various estimates made from censuses, demographic surveys, and models vary by a margin of 10% or more. For instance, the number of births in 2001 was estimated at 1.088 million by the United Nations Population Division (UNPD), and estimates used by the South African Statistical Office (Stats-SA) ranged from 1.076 to 1.171 million, a 9% difference between high and low values. Estimates made from census data show even larger variations, from 0.947 million (2001 census) to 0.955 million (2007 community survey) and 0.971 million (2011 census), up to 13% lower than UNPD estimates. Second, the numerator, that is, the number of events effectively registered, may also be a source of confusion. For births in particular, there is a huge gap between births declared within 1 month (as by law), births declared during the same calendar year (called 'current registration'), as published by the statistical office, and births declared later (called 'late registration'), which may occur several years after the event. For instance, the number of 1992 births registered that same year was 228,445, but by 2015 it had become 981,258 because of late registration (4.1 times more!). Despite this double uncertainty, there is no doubt that the completeness of birth and death registration has improved markedly since 1992, above all for the black African populations, who were largely overlooked by vital statistics before 1991. Compared with UNPD estimates of births and deaths, birth registration (same year) increased from 21.2% in 1992 to 84.1% in 2012, while death registration increased from 50.4 to 71.2% over the same period of time. It should be noted that UNPD estimates of the number of deaths after 2005 are likely to be overestimated, therefore underestimating the completeness of death registration after this date. Stats-SA estimates of the completeness of birth registration showed an increase from 24.7% in 1998 to 72.0% in 2005 (23). Using the Actuarial Society of South Africa (ASSA) model developed by Prof. Dorrington and colleagues as a reference, estimates of the completeness of death registration show an increase from 85 to 90% for adults and from 44 to 78% for children between 1996 and 2000 (25).
The aims of this study were to document trends in completeness of the registration of vital events and to investigate their sociodemographic factors in Agincourt, a rural area of South Africa under demographic surveillance. In this area, both numerators and denominators have been known accurately since 1992, and the health and demographic surveillance system (HDSS) provides numerous demographic and socioeconomic correlates at household and at individual level to investigate risk factors.
Study area
The Agincourt HDSS has been described in detail elsewhere (27–31). It is located in the dry lowveld of the northeastern part of South Africa, near the Mozambican border. The area is now part of Mpumalanga Province and was formerly included in the Gazankulu and Lebowa homelands. It is a relatively poor rural area, populated mainly by the Shangaan ethnic group. It includes a sizeable community of Mozambicans from the same ethnic group, who came in the 1980s as refugees during the civil war and settled in South Africa; they now account for about 30% of the population in the HDSS area. The Mozambican refugees became better and better integrated over the years in terms of income, education, and demographic behavior, but they still have distinctive features, as shown in this study (32–35).
The HDSS area varied somewhat over the years. When it started in 1992, it included some 20 villages and a population of about 57,600 persons, and over the years, it integrated three newly created villages, part of the Reconstruction and Development Programme, which are mostly offshoots of former villages. The HDSS was extended to include four new neighboring villages in 2007, and another five villages between 2010 and 2012. The population counted some 110,000 inhabitants in total in 2015.
The HDSS is primarily a full population register, including the routine registration of births, deaths, in- and out-migration, and a variety of other events recorded during yearly home visits. The population register is updated by trained field staff, who interview the household with a prepopulated form based on information given the year before. This information includes the full household roster and the last-born child for each woman, which ensures complete registration of births and deaths. The HDSS is also the platform for numerous surveys and intervention trials. With respect to this study, births and deaths are recorded on questionnaires that include a special question on whether the birth or the death was registered. Because fieldworkers visit each household routinely once a year, the question on birth and death registration covers, on average, a time lag of 6 months, ranging from the date of the event to a year apart. As a result, the completeness of birth and death registration covers most cases of 'current registration' of vital events but not the so-called late registrations that were previously so prevalent in South Africa.
Data and methods
The data used for the study cover all vital events (births and deaths) that occurred in the resident population over a 22-year period, from the first census conducted in 1992 to the last round conducted in 2014. Within the surveillance population, undercounting of births and deaths was negligible. The few births or deaths that might not be recorded include those occurring among very recent migrants and newborns that end up as neonatal deaths. In particular, during the peak of the HIV epidemic, some people were coming back to die in their village of origin, some of whom might not have been recorded (36). Likewise, some very young women might have gone to their family to deliver a baby while living elsewhere. In principle, only births and deaths that occurred within the resident population, duly recorded in the HDSS, were counted, as is normally done elsewhere in demographic surveillance systems. As a consequence, the births and deaths recorded by the HDSS might differ somewhat from those recorded at the local registration sites. The main advantage of the HDSS is the congruence between the numerator and denominator, necessary for a proper estimation of completeness.
The Agincourt population is homogeneous in terms of ethnicity, but is stratified by socioeconomic status. There are major differences in most demographic indicators by level of education and by level of household wealth, and one peculiarity of this population is the status of the Mozambican refugees. Level of education is routinely recorded and updated in censuses as the highest grade completed. The household level of education is defined as the highest level found among household members. Household wealth is measured by a composite indicator with five components: characteristics of the dwelling, sanitation, sources of energy, utilities, and livestock. It is measured either in absolute terms (used in this study) or as the first principal component calculated on the same items and grouped as population quintiles. Items utilized for calculating household wealth were recorded in 2001, 2003, 2007, 2011, and 2013. The household wealth status selected for the 1992–2000 events is therefore the status recorded in 2001 (29,30).
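As an illustration of how such an asset-based index can be constructed (the asset matrix below is a simulated placeholder, not Agincourt data), the first principal component of standardized asset indicators can be computed and split into population quintiles:

# Sketch of a principal-component wealth index of the kind described above.
# The household-by-asset matrix is simulated for illustration, not Agincourt data.
import numpy as np

def wealth_quintiles(assets):
    X = (assets - assets.mean(axis=0)) / assets.std(axis=0)   # standardise each item
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    pc1 = X @ eigvecs[:, -1]                                  # first principal component score
    cuts = np.percentile(pc1, [20, 40, 60, 80])
    return np.digitize(pc1, cuts) + 1                         # quintile 1 (poorest) to 5 (wealthiest)

rng = np.random.default_rng(0)
assets = rng.integers(0, 2, size=(200, 5)).astype(float)      # 200 households, 5 binary asset items
print(wealth_quintiles(assets)[:10])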
The analysis utilized both univariate and multivariate methods. Both analyses were based on the proportion of births and deaths that were registered, as declared by the family. A small number of events with missing values (1.9% for births and 5.2% for deaths) were excluded from the final analysis (this occurs when the information concerning the recording of the event is missing). The analysis focused on time trends over the study period, age and sex differences, refugee status, level of education, and household wealth.
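As a sketch of the type of multivariate analysis reported below (the simulated data and the two covariates are placeholders, not the study's variables or estimates), a logistic model of whether an event was registered yields exponentiated coefficients that can be read as odds ratios:

# Sketch of a logistic model for registration completeness, with exponentiated
# coefficients read as odds ratios. Data and covariates are simulated placeholders.
import numpy as np

def fit_logistic(X, y, iters=3000, lr=0.5):
    """Plain gradient-ascent logistic regression; returns intercept and coefficients."""
    Xb = np.hstack([np.ones((len(X), 1)), X])              # add intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)               # average log-likelihood gradient
    return beta

rng = np.random.default_rng(1)
n = 2000
recent_period = rng.integers(0, 2, n)                      # e.g. event occurred in 2010-2014
refugee = rng.integers(0, 2, n)                            # former-refugee household
logit = -0.5 + 1.5 * recent_period - 0.4 * refugee
registered = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
beta = fit_logistic(np.column_stack([recent_period, refugee]), registered)
print("odds ratios:", np.exp(beta[1:]))                    # roughly exp(1.5) and exp(-0.4)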
Birth registration
Trends in birth registration
A total of 42,977 births were recorded in the Agincourt HDSS from 1992 to 2014. Among those, 49.3% were registered, a value comparable to the national average for South Africa over the same period. However, the completeness of birth registration (excluding late registration) varied greatly over the years: it was only 6.2% in the first 2 years (1992–1993) and reached 89.1% in the last 2 years (2013–2014). This remarkable increase is displayed in Fig. 1: completeness remained low until the year 2000, increased rapidly over the next 5 years to reach about 65%, then slowly and steadily increased to reach 90.5% in 2014. Therefore, during the course of the study, birth registration went from virtually zero (<10%) to what is considered 'near complete', that is, 90% and above. This remarkable achievement was also found at national level and seems unique in sub-Saharan Africa (Fig. 1). Gradients in completeness by socioeconomic status were as expected, with higher values for households with higher levels of education and wealth. Completeness varied from 42.8% on average for households with a very low level of education (less than Grade 7) to 76.1% for households with higher levels (completed high school, 'matric' and above). Here again, the gap between the high and low levels stayed approximately the same (about 10%) over the years, so that in the most recent period (2010–2014) completeness was high for all groups, ranging from 81.0% for a very low level of education to 90.3% for a high level of education.
Gradients by wealth were also consistent but with a small interaction with period. Completeness was uniformly low at baseline (1992–1994) for all wealth groups, ranging from 5.4% for the poorest to 14.4% for the wealthiest, whereas completeness was much higher for all wealth groups in the most recent period (2010–2014), ranging from 66.5% for the poorest to 88.5% for the wealthiest. The gap between the two extremes tended to increase over time, from 9 to 22%, which means that improvements for the poorest groups were somewhat slower than for the other groups, although still important given the situation at baseline.
Multivariate analysis
The multivariate analysis confirmed the observations of the univariate analysis: all factors investigated were highly statistically significant, except for the sex of the child. All factors appeared largely independent from each other, because the gradients found in the multivariate analysis were basically the same as those identified in the univariate analysis. In terms of the odds ratio (OR) associated with completeness of birth registration, the largest values (positive or negative) were found for period (OR = 45.1 for 2010–2014) and to a lesser extent for age (OR = 0.38 for age 12–17) and wealth (OR = 1.81 for the wealthiest), and the lowest values were found for refugee status (OR = 0.72) and level of education (OR = 1.43 for the highest level of education) (Table 1).
Death registration
Trends in death registration
The situation for death registration was quite different from that for birth registration (Table 3). First, the completeness of death registration was much higher (82.4%) than that for births over the whole study period. Second, it was also much higher at baseline (1992–1994): 47.5%, compared with 8.0% for births. Progression over time was also steady and impressive, reaching high values (93.4%) in the most recent period (2010–2014) and 97% in 2014 (see Fig. 1). This major improvement in Agincourt seems to match that found at national level.
Differentials in death registration
Gradients by age at death were more pronounced than gradients by age of mother for births. In particular, completeness was uniformly high for adults (91.3% for age 20–99) and abnormally low for under-five children (33.7%), and particularly for infants (26.7%), whereas it reached higher values for older children (78.1% at age 5–9 and 83.4% at age 10–19). The rise in completeness by age, from birth to age 20, was steady and rapid from age 0 (26.7%) to age 5 (81.3%). This situation is surprising and could lead to major confusion if analysis were based on registered deaths of children. However, it seems peculiar to this area, or at least much more pronounced than at national level (Table 4). As for births, there was no variation by sex, both males and females being equally registered (82.4%). There was no interaction with age at death, even for children under five, nor with period, with the sole exception of the 1992–1994 period, when males were somewhat better registered than females (p < 0.001). Completeness of death registration after the year 2000 was slightly higher for females (87.9%) than for males (87.0%), but the difference was not statistically significant (p = 0.152). With respect to age, increases in completeness were marked in all age groups (+46% on average). The age pattern of completeness was somewhat different at baseline, with increasing values with increasing age, from 12.3% at age 0–4 to 64.9% at age 60–79. At end point (2010–2014), completeness was uniformly high (≥96%) for all age groups except for children under five, where it remained average (51%). As a consequence, the largest gains were for the younger age groups (age 5–59), aside from the under-five children. However, even for the under-five children progress was remarkable, with a marked absolute increase of +39%, because it started from very low values.
Mozambican refugees started with very low values of death registration at baseline (19.3%), way below South Africans (58.0%), but recovered quickly, reaching 89% in the most recent period, not far from South Africans (95.3%).
Level of education was a minor factor in completeness of death registration at baseline and had almost disappeared at end point, since all categories were above 90% in 2010–2014. As for birth registration, gradients by level of education were rather small over the whole period.
Progress was steady and impressive for all wealth categories, as was the case for birth registration. At baseline, the gradient by wealth was marked, from 38.9% at the lowest level of wealth to 59.4% at the highest level. At end point, the differences were smaller and the most advanced groups were all above 93%, with the lowest group at 85%. As a consequence, the difference between the highest and lowest wealth groups was about halved.
Multivariate analysis
As for birth registration, the multivariate analysis of completeness of death registration confirmed the univariate analysis, most differences being highly significant, with the exceptions of sex of the deceased person and of education. Gradients were also similar, showing the large independence between all explanatory variables (Table 3). The largest odds ratios (OR = 31.8) were found for the period 2010–2014. Some age groups also had very low odds ratios, especially among children (OR = 0.03 for the age group 0–4; OR = 0.31 for the age group 5–9; OR = 0.53 for the age group 10–19). The multivariate analysis showed significant differences for older persons (OR = 0.78 at age 60–79 and OR = 0.68 at age 80–99), which came mostly from the interaction between age and period, as older age groups did better in the early part of the study compared with younger adults. The odds ratio was also low for Mozambicans (OR = 0.30) and lower than for birth registration. However, as seen above, the differences between refugees and South Africans were largely reduced in recent years. Gradients by level of education were so small that they were not statistically significant. In contrast, gradients by household wealth were marked, with the odds ratio reaching 2.57 for the highest quintile (Table 3).
Discussion
In the Agincourt population, the improvements in vital registration were pervasive. In the Agincourt HDSS area, as is the case for South Africa as a whole, the completeness of registration of vital events (births and deaths) increased markedly after the new law passed in 1992 and as a result of the strong political will of the new government elected in 1994. In remote areas, and in particular in the former homelands, birth registration was virtually non-existent before 1994 yet was near complete 20 years later. Likewise, death registration was deficient before 1994 but reached high levels in recent years. This major achievement is a real 'success story' for African countries.
The improvement in completeness of vital registration is due to the major concerted efforts of the South African government to register all births and deaths for all population groups, including in remote areas. This was achieved by reorganizing the CRVS system to accommodate the whole population, by developing a full-scale population register with a single ID number, by developing the infrastructure for registration (fixed points and mobile teams), by closely involving the hospitals and clinics where these events occur, by developing a fully computerized system with Internet connections, and by large-scale information and advocacy campaigns in the whole country. This new situation will enable the political and public health authorities, as well as researchers and concerned persons, to have a better understanding of the rapidly changing trends in fertility and mortality, two crucial components of population dynamics (37).
The registration of births and deaths was facilitated by the fact that certificates are now needed for many procedures, in particular the fact that birth certificates are needed for ID cards (necessary for voting, to obtain a driver's license, and for many other purposes) and for school enrollment, and that death certificates are required for burials in cemeteries and for accessing pensions for widows or widowers. Numerous changes have also occurred in South Africa over this period, some of which could have an impact on birth and death registration, in particular the strong incentives for getting access to social grants. Since the late 1990s, the country has developed a generous system of social grants for children, orphans, the elderly, and handicapped persons, all requiring an ID, a birth certificate, or a death certificate depending on the case. For example, the child support grant is a strong incentive to mothers to have their children registered; the foster child grant is an incentive to register the deaths of persons who left behind orphans; and the older person grant requires an ID card, as do the disability grant and the care dependency grant (37).
Another feature contributing to the success of CRVS in South Africa is the high level of development of the country. A high gross domestic product and an efficient tax system are necessary for financing the infrastructure and the functioning of the CRVS system, as well as for supporting the social grants. Last, the high level of education of the whole population, males and females alike, the relatively high level of urbanization, and the strong computer and Internet infrastructure all contributed to the achievement (37).
Turning to the present study, the improvements were pervasive in all demographic and socioeconomic groups. They reached all age groups, both sexes alike, all population strata, all socioeconomic status groups. Groups that started from very low vital registration values remain under the threshold of 90% completeness, but these groups are still moving up, and if they lag behind, it is only by a small margin, equivalent to a few years of improvements.
A few problematic groups for the recent period are worth noting. For birth registration, in relative terms, the main issues were among the births to adolescents, a problematic group for many reasons (38); the very poor households; and to a lesser extent the Mozambican refugees. For death registration, also in relative terms, the main issues were among children under five, particularly infant deaths, and to a lesser extent among the very poor households and the Mozambican refugees.
In absolute terms, the situation was somewhat different: socioeconomic status mattered little, and age was the main factor in lack of registration in recent years. Among births that occurred in 2013–2014, 40% of the unregistered cases (n = 549) were among women less than 22 years of age. Among the deaths that occurred in 2013–2014, 79% of the unregistered cases (n = 63) were among under-five children. These age groups should be targeted for improving completeness in the future.
The difference between birth and death registration has almost disappeared now that completeness is high. However, this was not the case before 1994, when there were major differences between these events. This difference could have been due to a variety of factors, notably differing requirements for people to register births versus deaths. In particular, a death certificate was already needed in that period for adults to be buried in a cemetery, which could explain the higher coverage of death registration before 1994. These differences could be further investigated retrospectively.
Despite being very specific in its ethnic composition and geographical location, the population of Agincourt usually fares close to the national average in terms of demographic and socioeconomic indicators (fertility, mortality, nuptiality, education, wealth, etc.). This seems to be the case also for completeness of vital registration.
There are no precise trend data at the national level to compare with Agincourt, but the available evidence points in the same direction. There are a few differences, however, probably due to the fact that Agincourt is fully rural and was part of the homeland system earlier on. For births, the very low values of completeness at onset (8% in Agincourt compared to about 21% at the national level) lasted for about 10 years, and it took another 10 years to close the gap, so that after 2010 completeness in Agincourt had reached the national level. For deaths, coverage also seemed somewhat lower at onset (46% versus 56% at the national level) and was apparently somewhat higher at the end point, although precise data are lacking at the national level. This could be an indirect effect of the HDSS and especially of the comprehensive investigation of deaths by verbal autopsy, which has been going on since 1992 (39).
The issue of recording causes of death was not addressed in this study. Agincourt has maintained a comprehensive investigation of causes by verbal autopsy for all deaths that occurred in the study area. Adding verbal autopsies to death registration has been proposed for improving health information systems in areas where medical certification of causes of death is lacking (40).
The Agincourt study relied on family declaration of the registration of births and deaths. There is no reason to doubt the quality of this information, as families receive an official certificate and are well aware of their rights and duties. In an ideal world, one would like to match the HDSS information with the official records, event by event. An attempt to do so for the deaths that occurred in 1992–1995 showed how difficult such a task would be. Out of the 1,001 deaths recorded in the population, only 187 could be matched by name in hospital registers (38). Another recent attempt conducted in 2006–2009 showed that there was no major bias in death registration between HDSS and CRVS in the Agincourt area, demonstrated that 60.8% of death records could be matched using complex procedures, and confirmed that those that could not be matched were mostly the deaths of young children, those in poorer households, and Mozambicans, most likely because these deaths did not occur in hospitals and were never registered (41).
A large part of the gap in birth registration was compensated for recently by late registration nationwide. The Agincourt study was not designed to cover this issue, but further work could be done in the future to determine whether full birth registration is achieved before children enter school, whether on time or late.
Although major improvements have occurred since 1994 and trends remain favorable, efforts should continue in the future to achieve full completeness in birth and death registration, as in more developed countries. Targeting the age groups where the most gaps were found could help in attaining this goal.
"year": 2016,
"sha1": "73d8a272cd8cf90c40d8bebf92e460e6869a9a9c",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/gha.v9.32795?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5468501873f70d6d78ef661e51235bb38883abda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
Postrenal Acute Renal Failure Due to Giant Fecaloma-related Bilateral Hydronephrosis: A Case Report and Brief Literature Review
An 88-year-old woman presented to the emergency department with abdominal distention, fever, and constipation of about a week's duration. Laboratory tests showed impaired kidney function and fluid-electrolyte abnormalities. Bilateral hydroureteronephrosis was observed on non-contrast abdominal CT. Imaging revealed no intrinsic urological pathology (ureteral stones, etc.) that could lead to obstruction of the urinary system; however, an excessively dilated, feces-loaded rectum and colon were observed. The patient was treated with conservative methods. Unfortunately, she passed away due to deterioration of her general condition.
Introduction
A fecaloma forms when organized, hard fecal residue remains in the colon and rectum for a long time. The sigmoid colon and rectum are the two regions where fecalomas are most frequently seen. Significant gastrointestinal complications may occur due to fecaloma formation. Because of the local pressure effect of a fecaloma (pelvic mass effect), severe morbidity such as urinary system obstruction and rupture of the colon or bladder may occur, and these complications can sometimes be fatal [1]. Although conservative methods such as the use of enemas, laxatives, and rectal evacuation are among the treatment options to relieve the fecal impaction, surgical intervention is another option when these methods fail [2]. Rare cases of fecaloma-related urinary system obstruction and renal failure have also been reported [1,3,4].
In this report, we present a case of postrenal acute renal failure due to giant fecaloma-induced bilateral hydronephrosis in a female patient admitted to the emergency department with constipation. We also present a brief review of the relevant literature.
Case Presentation
An 88-year-old female patient was admitted to the emergency department with symptoms of constipation, weakness, and fever for about a week. Her past medical history was remarkable for hypertension, congestive heart failure, diabetes mellitus, constipation, and recurrent urinary tract infection. She also had a history of regular use of acetylsalicylic acid, amlodipine, furosemide, and pioglitazone. On initial evaluation, the patient had a blood pressure of 140/90 mmHg, a pulse rate of 111 beats/min, and a body temperature of 37.9°C. Her urine output was decreased (about 300 cc in the last 24 hours). Physical examination revealed abdominal swelling and tenderness in the lower abdominal quadrants. Bowel sounds were weak, and the rectum was full of fecaloid material on digital rectal examination.
Among inflammation parameters, the white blood cell (WBC) count was 15,100 cells/mcL (normal: 4,500-10,000 cells/mcL), and C-reactive protein (CRP) was 29 mg/L (normal: 0-5 mg/L). Urine analysis was positive for WBCs (too numerous to count), red blood cells (RBCs, 15-25 per high-power field), and bacteria (many). Creatinine had increased to 3.49 mg/dL (normal: 0.6-1.1 mg/dL), and there was electrolyte imbalance on biochemical analysis, as her serum sodium was 110 mEq/L (normal: 135-145 mEq/L) and potassium was 5.67 mEq/L (normal: 3.3-5.1 mEq/L). Non-contrast CT of the abdomen showed a dilated and fecaloid-filled rectum and colon, and bilateral hydroureteronephrosis in the urinary system (Figure 1). It was observed that the bladder was displaced anteriorly by the rectum filled with fecaloid material.
FIGURE 1: Non-contrast CT taken at admission
Dilated and fecaloid-filled rectum and colon (left, red arrow) and bilateral hydroureteronephrosis in the urinary system (right, red arrows) can be seen
CT: computed tomography
Her post-void residual urine volume was about 20 cc. A urethral Foley catheter was inserted, and intravenous ceftriaxone (2 g/day) plus normal saline treatment were started. After general surgery and urology consultations, intensive rectosigmoidal lavage was applied using sodium picosulfate, a rectal tube, and manual fecal extraction. Ureteral DJ stent placement was not planned due to the risk of introducing an ascending infection. Urine output increased right after rectosigmoidal lavage (about 200 cc/hour). A non-contrast abdominal CT, taken two days after admission, revealed a decrease in the amount of fecaloid in the rectum and regression in the grade of hydroureteronephrosis (Figure 2). The serum creatinine level decreased to 1.39 mg/dL on day two. After lavage, the rapid increase in diuresis and the decrease in creatinine levels supported the diagnosis of postrenal acute renal failure. Regression in the grade of the hydroureteronephrosis and the decrease in the amount of fecaloid in the rectum can be seen in Figure 3. Although the fecaloma was extracted during the patient's follow-up, she passed away due to deterioration of her general condition and fluid-electrolyte disorder a week after her admission.
Discussion
Fecalomas are common in Hirschsprung's disease, Chagas disease, patients with spinal cord injury or behavioral abnormalities, and elderly patients with chronic constipation [3]. The risk factors in our case were advanced age and a history of recurrent constipation. In terms of the frequency of the disease, there is no significant difference between women and men [4]. The usual fecaloma-related complications are intestinal obstruction, colonic ulceration, and stercoral perforation [4]. The urinary system can also rarely be affected due to the local pelvic mass effect. In the urinary system, urinary tract infection, hydronephrosis, and even bladder rupture may develop due to compression of the hard fecaloid onto the bladder [5]. The most common level of obstruction is the urethra or urethrovesical junction. The mechanism of urinary retention caused by fecal impaction is believed to be a significant elevation of the floor of the bladder and posterior urethra with resultant obstruction of the bladder outlet [4]. Thus, the intravesical ureters may be compressed, and unilateral or bilateral hydronephrosis may occur [4,6].
As in our case, the local compression effect of fecaloma should be considered if there is deterioration in the renal function tests and obstructive uropathy findings, especially in patients in the geriatric age group who present to the emergency department with constipation. It should be considered as a cause of urinary tract obstruction and recurrent urinary tract infections. Fecalomas can also be confused with malignancies due to local pressure.
In the literature, McWilliams et al. reported detecting a pelvic mass filling the left lower quadrant and causing hydronephrosis and renal failure in a 74-year-old female patient with a history of a cerebrovascular event. Initially, it was thought that the patient had uterine leiomyoma or malignancy; however, after removal of the fecaloma through rectal lavage and enema, diuresis started, providing regression of the hydronephrosis and a decrease in creatinine values [11]. Özer et al. reported the case of a 73-year-old female patient who had knee arthroplasty; it was initially thought that she had uterine or ovarian tumoral mass-related bilateral hydronephrosis, but ultimately the actual pathology was found to be due to a giant fecaloma. However, due to the failure of defecation via rectal lavage or enema, nephrostomy catheters were placed in that patient. In our case, death occurred due to fluid-electrolyte imbalance and deterioration of her general condition despite the evacuation of the fecaloma [1].
Rectal emptying through rectal lavage, enemas, and suppository treatments is generally applied in all patients. In the treatment of acute conditions, more invasive procedures should be considered in patients who do not respond to these interventions. The underlying pathology should be eliminated, and dietary changes and mobilization should be applied to prevent chronic constipation-related hydronephrosis [12,13]. Patients with acute renal failure usually recover quickly after fecaloma evacuation, and increased postobstructive diuresis is frequently seen. Although a good recovery was reported in most cases in the literature review by Iwata in 2015, two deaths were reported due to constipation complications, one of them a sudden death after bladder perforation associated with the pelvic mass effect of massive feces [4].
Conclusions
Although it is very rare, fecalomas causing a pelvic mass effect should be considered as a cause of urinary system obstruction. The necessary interventions should be made urgently in these patients. Otherwise, a life-threatening condition may occur due to fluid-electrolyte imbalance and deterioration of the general condition.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
"year": 2020,
"sha1": "7b56f0eda4dd940f5c34fd9b6f3cea90629afc06",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/29476-postrenal-acute-renal-failure-due-to-giant-fecaloma-related-bilateral-hydronephrosis-a-case-report-and-brief-literature-review.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e7f5d236bf30257f39d93758b5f98c19b2acef99",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Interventions for waterpipe tobacco smoking prevention and cessation: a systematic review
Waterpipe tobacco smoking is growing in popularity despite adverse health effects among users. We systematically reviewed the literature, searching MEDLINE, EMBASE and Web of Science, for interventions targeting prevention and cessation of waterpipe tobacco smoking. We assessed the evidence quality using the Cochrane (randomised studies), GRADE (non-randomised studies) and CASP (qualitative studies) frameworks. Data were synthesised narratively due to heterogeneity. We included four individual-level, five group-level, and six legislative interventions. Of five randomised controlled studies, two showed significantly higher quit rates in intervention groups (bupropion/behavioural support versus placebo in Pakistan: 6-month abstinence relative risk (RR) 2.3, 95% CI 1.4–3.8; group behavioural support versus no intervention in Egypt: 12-month abstinence RR 3.3, 95% CI 1.4–8.9). Non-randomised studies showed mixed results for cessation, behavioural, and knowledge outcomes. One high-quality modelling study from Lebanon calculated that a 10% increase in waterpipe tobacco taxation would reduce waterpipe tobacco demand by 14.5% (price elasticity of demand −1.45). In conclusion, there is a lack of evidence of effectiveness for most waterpipe interventions. While a few show promising results, higher-quality interventions are needed. Meanwhile, tobacco policies should place waterpipe on par with cigarettes.
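As a rough arithmetic check of the taxation estimate above (not part of the original review, and assuming the tax rise translates into a comparable price rise under a constant-elasticity approximation): %ΔQ ≈ εp × %ΔP = (−1.45) × (+10%) ≈ −14.5%, consistent with the reported 14.5% reduction in demand.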
Year of study: [2011][2012] Eligibility criteria: -Adults aged over 18 years -Suspected tuberculosis (cough >=3 weeks of unknown cause) -Regular smokers (>= 1 cigarette or waterpipe per day) -Excluded those requiring hospitalization or urgent medical attention 1) Two brief behavioural support cessations (first visit 30min, second on quit day 10min) (BSS) 2) Two brief behavioural support cessations (as above) plus bupropion for seven weeks (75mg/d for first week, 150mg/d for next six weeks) (BSS+) Participants in the experimental group were shown 20 MS PowerPoint slides that covered 1) what is a waterpipe and how it works (e.g., names it goes by, schematic of a waterpipe), 2) what is in waterpipe, focusing on the flavorings added to the tobacco, 3) who smokes waterpipe, concentrating on origins, spread, and use by subgroups, 4) amount of smoke inhaled by waterpipe in laboratory studies and in relation to smoking cigarettes, 5) production of tar, CO, and nicotine in waterpipe tobacco compared to cigarettes, 6) exposure levels of toxic compounds (e.g., aldehyde), and 7) health effects associated with waterpipe tobacco smoking (e.g., cancer, heart disease, infections). The average length of time reviewing these materials online was 7.5 min for the experimental condition. Setting: Villages Egypt that had between 10,000-20,000 inhabitants, at least one primary, preparatory, and secondary school, a public health clinic, a youth club, a mosque Country: Egypt Region: Qalyubia governorate Health promotion over a 12 month period simultaneously in all six villages 1) Primary school students participated in traditional and nontraditional activities aimed at preventing the initiation of smoking by deglamorizing tobacco use and teaching about its health hazards.
2) Preparatory and secondary school students engaged in an experiential learning program to develop social skills among teenagers to handle peer pressure to smoke.
3) Engaging mosques and churches in educating their communities about the hazards of smoking and ETS and in raising the issue of smoking as a sinful behaviour 4) Female social change agents (raedat refeyat) provided information to adult women in the home on the negative health effects of tobacco use and ETS. 5) They also taught these women how to better protect themselves and their children from ETS through a standardized message sensitive to cultural family dynamics. Ten sessions over ten weeks: four knowledgefocused, six skill-building (media literacy (1), decision making (2), refusal skills and social promise (3)) No intervention
Waterpipe cessation
Past-30 day waterpipe use Waterpipe knowledge, attitudes and beliefs -Starts with a 2-way conversation with a physician, informing the reasons for smoking, the mechanisms and risks of smoking with emphasis on health consequences and the role of advertising -Followed by an interview with a patient suffering from a tobacco-related illness. Emphasis was on the consequences of the patients usually long-term smoking habit -2/3 students from each class had a lung function test and finger pulse oximetry -A concluding group discussion about the test results (point above) and the students questions were answered Control group who did not receive the interventi on, but details not describe d
Abstinence from waterpipe smoking initiation
Post-test at six months after intervention Supplementary Table S3: Risk of bias assessments using Cochrane (randomised studies) and GRADE (non-randomised studies) and CASP (qualitative studies) frameworks Authors report using a computer random number generator.
Low risk of selection bias.
Insufficient information to permit judgement of 'Low risk' or 'High risk'.
Unclear risk of selection bias.
No blinding and outcome likely to be influenced by lack of blinding.
High risk of performance bias.
No blinding but outcome biochemically verified and unlikely to be affected by lack of blinding.
Low risk of detection bias.
Missing outcome data balanced across intervention groups, with similar reasons for missing data across groups.
Low risk of attrition bias.
The study protocol is available and all of the study's pre-specified (primary and secondary) outcomes that are of interest in the review have been reported in the pre-specified way Low risk of reporting bias. Dogar 2014 Authors report using a computer random number generator.
Low risk of selection bias.
Central allocation.
Low risk of selection bias.
No blinding and outcome likely to be influenced by lack of blinding.
High risk of performance bias.
No blinding but outcome biochemically verified and unlikely to be affected by lack of blinding.
Low risk of detection bias.
Missing outcome data balanced across intervention groups, with similar reasons for missing data across groups Low risk of attrition bias.
Not all of the study's pre-specified primary outcomes have been reported (point abstinence at 4 weeks) High risk of reporting bias.
Lipkus 2011
Authors report using a computer random number generator. Missing outcome data balanced in numbers across intervention groups Low risk of attrition bias.
One or more outcomes of interest in the review are reported incompletely so that they cannot be entered in a meta-analysis Unclear risk of attrition bias.
(point abstinence at 4 weeks) High risk of reporting bias. Multistage sampling of participants, although no details given (unclear risk).
High risk
Authors do not report using a validated tool.
Unclear risk
Authors report using a pretested questionnaire but no adequate evidence of validation provided.
High risk
Authors do not report controlling for relevant confounders.
High risk Authors provide no information about missing data, which appear apparent from the results tables.
Deshpande 2010 High risk
Mixture of random, convenience and purposive sampling.
Low risk
All venues are subject to the smokefree law and are hence 'exposed' to this health policy.
Low risk
Validated instrument for PM2.5 measurement. Standardised protocol for taking measurements (i.e. in centre of venue, for 60 mins).
Unclear risk
Authors report controlling for confounders but without adequate details.
Low risk
Data are complete.
Low risk
Authors report using a previously validated tool.
High risk
Authors do not report using a validated tool. Data are complete.
Stamm-Balderjahn 2012 High risk
Non-random, convenience sample with no eligibility criteria.
High risk
Authors do not report using a validated tool.
High risk
Authors do not report using a validated tool.
High risk
No controlling for confounding, but they assessed for interaction.
Low risk
Authors provide specific figures for missing data, suggesting low rates. Perceptions regarding water pipe smoking changed significantly after intervention and the opinion regarding addiction associated with water pipe smoking improved. Highly significant difference was observed with regards to shisha being more addictive and harmful than cigarette smoking.
Social perceptions related to water pipe that it is more socially acceptable and part of our cultural heritage remain deep rooted and no significant difference was observed.
Majority of the students were of the opinion that shisha cafes play an important role in promoting shisha smoking. Most students said that shisha smoking is influenced by other people in close family circle smoking water pipe. Perceptions regarding health hazards associated with shisha smoking changed significantly after the health awareness sessions. The students attributed shisha smoking to all forms of cancers specifically those of lips, bladder and lung. Strong positive association was also observed with infertility, high blood pressure and cardiovascular problems.
Conclusion
The knowledge of the participating students regarding water pipe smoking improved to some extent after the health awareness sessions especially in terms of health hazards associated with water pipe. This study helped in changing their perceptions regarding health hazards associated with shisha smoking.
Deshpande 2010 PM2.5 measurements of indoor air quality before SidePak AM510 Personal Aerosol PM2.5 decreased in all premises except hookah venues (mean 973 ug/m 3 pre ban to 1267 ug/m 3 post ban -30% This is possibly due to an exodus of smokers from their customary venues to hookah parlors, since these parlors were and after Active smoker density before and after the ban using the number of people smoking and room volume Monitor increase) Active smoker density decreased to zero in all premises except hookah venues, where it increased to 3.08 burning cigarettes per 100 cubic meters volume clearly violating the law under the excuse that the flavored hookah's being served did not contain any nicotine.
Hookah parlors remained uniquely insulated from the ban's effect. It was apparent that cigarette smoking was not discouraged in the hookah enclosures. Essa-Hadad 2015 Primary: past-7 day waterpipe use Secondary: feasibility outcomes Pre-and postintervention survey Past-7 day waterpipe use: 58.2% to 22.2% (p=0.01) Satisfied or very satisfied with intervention: 97.8% Recommend the intervention to a friend: 93.8% The findings from the study suggest that a tailored Web intervention was found interesting and acceptable among Arab university students and seems promising in reducing nargila smoking. Quadri 2014 Knowledge that waterpipe causes oral cancer Pre-and postintervention survey Knowledge increased from 0.80 (SD 0.34) to 0.98 (SD 0.13).
The post intervention results showed a significant improvement in the knowledge of the respondents as the mean value obtained was fairly high.
The study effectively increased the knowledge and awareness among the youth about oral cancer per se and its prevention measures. Hence, giving a direction for further public health initiatives in this prone oral cancer region. Many educational programs should be conducted on a regular basis targeting a larger sector of the community. The expenditure data do not provide information on tobacco products consumed at other commercial establishments such as restaurants and cafes. Our measures of spending on shisha tobacco are therefore likely an underestimate the total spending on shisha tobacco by households.
Stamm-Balderjahn 2012
Abstinence from waterpipe smoking initiation Pre-and postintervention survey Altogether, 23 students had taken up waterpipe smoking during the 6-month observation period: 5 in the intervention group, 18 in the control group. The difference was statistically significant (P<0,01). Compared to the control group, the nonsmokers (with respect to the waterpipe-only smokers) in the intervention group had a three and a half times likelihood of staying abstinent (OR: 3,64; SE: 0,52; 95% CI: 1,32 -10,03). Qualitative interview "I also use shisha as a substitute for coming off cigarettes, some people use nicotine patches and all that, I find shisha more effective . . . with shisha, a whole"
N/A as this was additional information outside of the manuscript
Watepripe use increased after the smokefree law for one participant -no waterpipe use for anyone else These accounts suggest that smokers were using these products prior to, and after, the implementation of smoke-free legislation. Some regarded these practices as less harmful than smoking, while others framed them as an alternative way of weaning themselves off cigarettes: However, some Bangladeshi smokers, old and young, appear to have increased their use of other forms of tobacco, such as shisha and paan, despite the former being included in smokefree restrictions and the provision of specific guidance to this effect. Prior to the implementation of the legislation, there was widespread concurrent use of traditional cigarettes and indigenous tobacco products among Bangladeshi smokers.
Since implementation, some smokers may be using such products as a substitute for smoking cigarettes and as an aid to smoking cessation, in the mistaken belief that these products are harmless. Thus, a modest reduction in cigarette consumption by some of our participants was counterbalanced by an increase in the use of other forms of tobacco. Jawad 2013 Waterpipe smoking behavior after English smokefree law Qualitative interviews post legislation "Regarding the impact on waterpipe smoking of the 2007 smokefree law in England, opinions were divided into two broad categories: either there was no effect, or there was increased use as a result of the ban. Some participants adapted by smoking at home instead of at cafes, and subsequently increased their waterpipe consumption as it was more readily available and notably cheaper. Five years after the ban, participants described frequenting UK waterpipe cafes that flouted the smokefree law" Of primary importance is the enforcement of waterpipe smoking legislation as directed by the World Health Organization Framework Convention on Tobacco Control. Other legislative issues that merit attention include appropriate taxation of waterpipe tobacco, enforcing the smokefree law to avoid carbon monoxide poisoning, and regulating the content of waterpipe tobacco Jawad 2014 Waterpipe premise compliance with English smokefree law Qualitative interviews post legislation "Our enforcement policy basically says to give guidance, then send them a warning letter, then enforce. So they all know what they're doing is wrong and illegal, but they carry on doing it because a) they think they're going to get away with it or b) they'd rather take the fine and carry on with their business. I mean, I have one business which is right across the road, who says "What Compliance with smoke-free law is generally poor, but unlike health warning labels or underage sales, is transiently compliant. Factors such as the cold weather, lack of regular monitoring from LA staff, peak times of trade, and low prosecution fines all encourage waterpipe premises to be noncompliant with smoke-free law. In one borough, fines ranged between £300 and £1,500. "A premises has forty people in there, if twenty of those are smoking and they paid fifteen pounds per waterpipe pipe -if they then get a fine of a hundred and fifty pounds, there's no deterrent for the premises because they can cover that in half a day.
fines are not designed for intentional and recurrent flouting of smoke-free law. Additionally, the prosecution process is labor and resource intensive.
Lock 2010
Change in smoking behavior, changes in the geographical location of smoking and its social impacts, and smoking illegally Qualitative interviews pre-and post-smokefree law "Some Somali respondents felt that smoking cessation services would not help as they focussed on cigarette use and did not address shisha smoking." ""For those who smoke Shisha they have to be home..... the gathering that used to take place in a restaurant takes place home a lot now." (Middle-aged Somali woman)" "Somali women appeared to experience the greatest social impact of SFL. All Somali respondents discussed the traditional importance of shisha, with all but one of the Somali women currently or previously smoking shisha, while few smoked cigarettes. Despite an estimated 17% of Somali women in this community who admit to smoke, it is considered culturally unacceptable for Somali women to smoke, especially in public. Respondents said this was the custom rather than because of specific religious beliefs." "Both Somali men and women agreed that the legislation has had a greater impact on women because of increased social restrictions. Before, SFL Somali women smokers could hire separate indoor smoking rooms in public shisha venues where they could socialise in private with friends. Women who continue to It is important to understand the differences found between ethnic groups after SFL. Overall, the social impacts appeared most restrictive for young Somali women who, due to cultural sensitivity around female smoking, were often now unable to smoke in public where they might be seen and were thus taking measures to hide their smoking (including visiting illegal venues). Somali respondents also perceived that smoking cessation services were not culturally sensitive, focussing on cigarette, and this may have contributed to some of the ethnic differences seen in the lack of willingness to use cessation services.
The perceived stigma for some women associated with smoking outside in public since SFL may make already disadvantaged groups even more difficult to target or engage in future smoking cessation strategies smoke shisha say they feel that they now can only smoke in private homes or, if continuing to smoke publicly, by taking measures to conceal themselves, travelling away from their local community or smoking in illegal venues (box 5).
"For girls, they cannot sit outside. They feel a bit embarrassed. A friend -a family friend, like, someone might see them and tell the family. So what happened was, they put hoods on, a bit clothing on. Now they face on the wall, and they're just smoking. but they cannot sit there for a long time. And when they're, like, smoking the Shisha, they're not feeling comfortable.... I did it a couple of times, but it was at night time anyway. So I'm sure that my family aren't around. It was far away from where I live and I went out with a couple of friends, and even though it does matter, the way you dress up I just put my hood on like this, and.nobody's gonna see your face. ...I was not feeling comfortable. you know, before it was really okay, not anymore. It's the shame." (Young Somali woman) ""I think I told you, that these people will go underground. and, yes, they did. There were restaurants that had a lower floor and I think they will let only their regulars in...I sat there and I could easily say that the space occupied about 50 to 60 people and I wouldn't be able to see the person at the far corner." (Young Somali woman about shisha smoking)" ""..one place.it was a normal restaurant upstairs, which used to be a normal Shisha bar.and you'd have to go downstairs and there was a room in the basement, that was for Shisha smokers. And you can still find places that are the same as before, like, inside, but it's just the fact that you have to pay more for them just cause it's inside and that's not allowed." (Young Somali woman)" | 2018-04-03T05:31:51.421Z | 2016-05-11T00:00:00.000 | {
"year": 2016,
"sha1": "b0ce80b7dfa2edd868203a5611a7b513f70a26a1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/srep25872",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0ce80b7dfa2edd868203a5611a7b513f70a26a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Diversity of T Cell Epitopes in Plasmodium falciparum Circumsporozoite Protein Likely Due to Protein-Protein Interactions
Circumsporozoite protein (CS) is a leading vaccine antigen for falciparum malaria, but is highly polymorphic in natural parasite populations. The factors driving this diversity are unclear, but non-random assortment of the T cell epitopes TH2 and TH3 has been observed in a Kenyan parasite population. The recent publication of the crystal structure of the variable C-terminal region of the protein allows the assessment of the impact of diversity on protein structure and T cell epitope assortment. Using data from the Gambia (55 isolates) and Malawi (235 isolates), we evaluated the patterns of diversity within and between epitopes in these two distantly separated populations. Only non-synonymous mutations were observed, with the vast majority present in both populations at similar frequencies, suggesting strong selection on this region. A non-random pattern of T cell epitope assortment was seen in Malawi and in the Gambia, but structural analysis indicates no intramolecular spatial interactions. Using the information from these parasite populations, structural analysis reveals that polymorphic amino acids within TH2 and TH3 colocalize to one side of the protein, surround, but do not involve, the hydrophobic pocket in CS, and predominantly involve charge switches. In addition, free energy analysis suggests that residues forming, and lying behind, the novel pocket within CS are tightly constrained and well conserved in all alleles. Free energy analysis also shows that polymorphic residues tend to be populated by energetically unfavorable amino acids. In combination, these findings suggest the diversity of T cell epitopes in CS may be primarily an evolutionary response to intermolecular interactions at the surface of the protein, potentially counteracting antibody-mediated immune recognition or evolving host receptor diversity.
Introduction
The development of a successful malaria vaccine has the potential to significantly reduce the estimated one million deaths a year caused by falciparum malaria. A major concern for vaccine development is the extensive genetic diversity of immunogenic Plasmodium falciparum antigens. P. falciparum circumsporozoite protein (CS) is a leading candidate antigen [1-3], and the recent interim analysis of the Phase III RTS,S/AS01E vaccine trial showed reductions in clinical malaria of approximately 55% and 31% during the first year among children 5-17 months and 6-12 weeks of age, respectively [2,3]. However, the CS antigen of RTS,S comprises a single variant, and the impact of the significant natural genetic variation in CS on vaccine efficacy is still unclear [1,4-6].
Cell mediated immunity is thought to be mediated in part by T cell epitopes in the C terminus of the protein, including the epitopes known as TH2 and TH3 [7][8][9]. These two epitopes are highly polymorphic in natural parasite populations [1,5]. Understanding what drives this diversity could have a profound impact on improving the design of CS-based vaccines. Many theories about the mechanism of diversification in this region have been proposed. Good et al. suggested that they were maintained by natural selection favoring immune evasion (allele-specific immunity) [10]. This hypothesis was supported by the observation that the number of nonsynonymous nucleotide substitutions was higher than synonymous nucleotide substitutions in parasite populations [11,12]. On the other hand, recent evidence suggests that among CS isolates in the Gambia, there is only limited evidence of balancing selection, implying minimal allele specific immunity in CS [13]. Diversification may also have been driven by other mechanisms. Indirect evidence for selection on CS has been reported during the malaria transmission cycle [14,15]. This selection has been supported by population studies and is biologically plausible, as CS is required for oocyst development in the mosquito and is centrally involved in gliding motility of the sporozoite [16,17]. In addition to the diversity within the epitopes, recent analysis identified non-random associations between TH2 and TH3 epitopes in a parasite population in Kenya, consistent with recent mutations in linkage disequilibrium and/or functional constraints on CS limiting the repertoire of permissible amino acids and their combinations [1]. However, this study only evaluated the dominant alleles in the population and may not completely reflect the potential associations between T cell epitopes within the population. Also, recent studies of the population structure of the gene encoding CS (pfcsp) suggest that geographically variable levels of diversity and geographic restriction of specific subgroups may have an impact on the efficacy of malaria vaccines in specific geographic regions [18]. Thus, evaluations of the polymorphisms within and associations between T cell epitopes need to be conducted in varying geographic locations to determine whether previous findings in one or two parasite populations are generalizable.
The crystal structure of the C-terminal region of CS, termed the Thrombospondin type-1 repeat superfamily (TSR) region, containing TH2 and TH3 was recently published, showing unpredicted protein folding due to the presence of a hydrophobic pocket not found in other TSR domains from paralogous molecules in other organisms [19]. This new insight gained from the crystal structure enables us to investigate more extensively the impact on protein structure of the polymorphisms seen in the TH2 and TH3 epitopes in natural parasite populations, and to model altered molecular interactions that may occur due to these changes. Using sequences of the TSR domain in parasite populations from Malawi and the Gambia, the patterns of nucleotide diversity within and between the two populations were evaluated, and haplotype associations between TH2 and TH3 polymorphisms elucidated. We characterize the impact of T cell epitope diversity on protein structure by mapping polymorphisms onto the newly derived crystal structure. Based on these findings, we use structural mapping to evaluate the interactions between epitopes and within epitopes, and an exhaustive point mutagenesis approach to identify any intramolecular structural constraints, as well as those residues under diversifying selection, providing new insight into how and why the described patterns of diversity occur.
Sequence Data
Sequences from Malawi (GenBank Accession numbers: JN634586-JN634642) were accessed from a previously published study from our group. Details of the sequencing from the 100 participants, which was done by massively parallel pyrosequencing on the 454 platform at the University of North Carolina's High Throughput Sequencing Facility, have previously been published [5]. This deep sequencing allowed for the detection and characterization of minor variants in an infection representing ≥1%. The Gambian pfcsp sequences (GenBank Accession numbers: JX885511-JX885521) derive from 55 participants in a clinical trial in the year 2000 [20,21], and were generated by dideoxy fluorescent capillary sequencing at the London School of Hygiene & Tropical Medicine (LSHTM). Both major and minor abundance sequence variants from each isolate are reported, where these were unambiguous, as previously described [20]. All sequences from both locales were trimmed to correspond to a 220 bp fragment containing nucleotides 871 to 1090 of PF3D7_0304600 (PlasmoDB, accessed 9/26/2012), corresponding to amino acids 291 to 363 (Figure 1).
Data Analysis
DNA alignments for each population were generated by using the DNAStar SeqMan, Version 9.1 [22] and descriptive statistics were generated by DnaSp, Version 5.10.01 [23]. The fixation index (FST) was calculated using Arlequin, Version 3.11 [24]. Neighbor-joining analysis was conducted with MEGA, Version 5 [25]. Bootstrap values, drawn from 500 replicates, were calculated for the deep branch points. Hudson's nearest-neighbor statistic (Snn) [26] was also calculated for the clustering of samples into geographic clusters. Input was in the form of a pairwise distance matrix between all haplotypes in the phylogenetic tree.
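The summary statistics referred to above were computed in DnaSP; the minimal Python sketch below (not the DnaSP implementation, and using a placeholder alignment rather than the study data) illustrates how haplotype diversity (Hd), the mean number of pairwise differences (K), and nucleotide diversity (π) are defined.

```python
from itertools import combinations
from collections import Counter

def pairwise_diff(a, b):
    """Count nucleotide differences between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def diversity_stats(seqs):
    """Return (Hd, K, pi) for a list of aligned sequences."""
    n = len(seqs)
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    K = sum(pairwise_diff(a, b) for a, b in pairs) / len(pairs)  # mean pairwise differences
    pi = K / length                                              # per-site nucleotide diversity
    freqs = Counter(seqs)                                        # haplotype counts
    hd = (n / (n - 1)) * (1 - sum((c / n) ** 2 for c in freqs.values()))
    return hd, K, pi

# Placeholder alignment, not the Malawi/Gambia data
aln = ["ACGTACGT", "ACGTACGA", "ACGAACGA", "ACGTACGT"]
print(diversity_stats(aln))
```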
In order to evaluate if there were non-random associations between TH2 and TH3 epitopes within our populations, associations were explored by contingency table. Each unique TH2 and TH3 amino acid sequence was coded and given a unique label (TH2-''x'' or TH3-''y''). Paired epitopes were determined by the TH2 and TH3 type that occurred in each sequence haplotype identified in the population. An example is shown in Figure S1. The frequency of each pair of epitopes was then tabulated for each population (Table S1) to determine the observed frequencies of pairings. Due to large number of categories in genotypes, the contingency table was sparse with many zero count cells. To statistically deal with this kind of sparseness, we utilized a log-linear model with Poisson assumption that treats zero counts as sampling zero frequencies [27] implemented within the SAS procedure PROC CATMOD [28]. If non-random associations occurred between TH2 and TH3 types, the distribution of pairings should diverge from the predicted values based solely on the frequencies of the TH2 and TH3 genotypes assuming random association. A significant deviation from non-random pairing of TH2 and TH3 haplotypes was determined based on the overall distribution of the disparity between predicted and observed frequencies using the log-linear model. Additionally, individual tests of each TH2 and TH3 pairing were performed using the log-linear model with Poisson assumption. The significance cutoff was corrected for the number of comparisons by Bonferroni correction.
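The log-linear analysis above was performed in SAS PROC CATMOD; the following Python sketch shows a comparable independence (random-assortment) Poisson model using statsmodels. The epitope labels and counts are invented placeholders, not the study's contingency table, and the cell-level residuals here stand in for the formal per-pair tests.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder two-way table of TH2 x TH3 haplotype counts, one row per cell
df = pd.DataFrame({
    "th2":   ["TH2-a", "TH2-a", "TH2-b", "TH2-b", "TH2-c", "TH2-c"],
    "th3":   ["TH3-x", "TH3-y", "TH3-x", "TH3-y", "TH3-x", "TH3-y"],
    "count": [30, 2, 1, 25, 10, 9],
})

# Independence model: log(count) ~ TH2 main effect + TH3 main effect.
# Lack of fit (residual deviance vs. its degrees of freedom) indicates non-random pairing.
fit = smf.glm("count ~ C(th2) + C(th3)", data=df,
              family=sm.families.Poisson()).fit()
print("residual deviance:", fit.deviance, "df:", fit.df_resid)

# Cell-level departures from the frequencies expected under independence
df["expected"] = fit.fittedvalues
df["pearson_resid"] = (df["count"] - df["expected"]) / df["expected"] ** 0.5
print(df)
```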
Structural Analysis
Structural studies of CS were carried out based on the newly crystallized structure, PDBID 3VDJ [19]. Sequence logos [29] were generated online using WebLogo [30]. The information content (conservation of the sequence) in bits was binned at increments of 0.25 and mapped to the crystal structure via a color scheme indicating the magnitude. Structure figures/images were generated using Visual Molecular Dynamics (VMD) software (University of Illinois at Urbana-Champaign, http://www.ks.uiuc.edu) and rendered with ray tracing in the software PovRay (http://www.povray.org). In order to map a specific haplotype to the structure, the sequencing data provide both the sequence and position, which is then matched to the corresponding position in the structure and evaluated for evidence of interactions between epitopes.
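For illustration, the per-position information content (in bits) that WebLogo reports, and which was binned at 0.25-bit increments for structural mapping, can be approximated as below. This minimal sketch omits WebLogo's small-sample correction and uses a placeholder alignment rather than the study data.

```python
import math
from collections import Counter

def information_content(column, alphabet_size=20):
    """Information content (bits) of one amino-acid alignment column."""
    counts = Counter(column)
    n = len(column)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return math.log2(alphabet_size) - entropy   # maximum ~4.32 bits for proteins

def binned_ic(alignment, bin_size=0.25):
    """Per-position information content, binned in 0.25-bit increments."""
    columns = zip(*alignment)
    return [round(information_content(col) / bin_size) * bin_size for col in columns]

# Placeholder alignment, not the Malawi/Gambia data
aln = ["KHIEQ", "KHIKQ", "EHIKQ", "KHIEQ"]
print(binned_ic(aln))
```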
Point Mutagenesis
Exhaustive independent residue-by-residue point mutagenesis of the CS wild type sequence in the 3VDJ crystal structure was simulated using MUMBO [31] to calculate the Gibbs free energies of the reference sequence and of all potential single point mutants derived from it. Each amino acid residue in the 3D7 reference structure was mutated to all 19 possible other amino acids one at a time. For each mutation at each residue, the energetic effect of the change was obtained by calculating ΔΔG, the overall energy ΔG of the mutated sequence minus the ΔG of the unmutated 3D7 reference sequence:

ΔΔG(mut) = ΔG(mutant) − ΔG(3D7 reference)

Briefly, MUMBO works by repacking amino acid side chains using the input structure backbone as a scaffold. The residues are built onto the scaffold using parameters derived from a standard library of crystal rotamer conformations for each amino acid. The energies for different rotamer combinations are assessed, and the energetically lowest is taken, which is consistent with the most stable packing. To overcome the problem of an exponentially expanding combinatoric space to explore, dead-end elimination is used to discard conformers and their combinations clearly producing energies far from the minimum, such as would arise from van der Waals clashes, thereby reducing the search space to a more tractable size. The force field used to compute the energy is the standard molecular mechanics atomistic potential energy function, combining bonded (bond, angle, and dihedral) terms with non-bonded van der Waals and electrostatic terms, using Chemistry at Harvard Macromolecular Mechanics (CHARMM) parameters [32].

Thus, for each mutation, we rotamerized the mutation site, as well as all the wild type residues, such that the entire structure was repacked each time. The reference sequence from P. falciparum clone 3D7 (PF3D7_0304600) was also completely repacked to obtain the reference state ΔG, and the predicted protein structure obtained was similar to the published crystal structure [19]. The MUMBO analysis was first used to look for energetically constrained residues by determining positions where the average change in the Gibbs free energy (ΔΔG) upon mutation (a conservative measurement) deviated substantially from identical residues elsewhere within the sequence (i.e., a given mutation exceeds two standard deviations from the average for the same residue at other positions within the reference structure) (Table S4).
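The screen for energetically anomalous positions described above can be illustrated as a post-processing step over a table of mean ΔΔG values; this is a sketch of the screening logic only, not MUMBO itself, and the ΔΔG values below are placeholders.

```python
import statistics

# mean ddG over the 19 substitutions at each (position, wild-type residue); placeholder values
ddG = {
    (332, "S"): 1.2, (340, "N"): -6.0, (341, "G"): 14.0,
    (342, "I"): 7.5, (343, "Q"): 8.1, (350, "S"): 1.0,
    (355, "N"): 0.9, (360, "G"): 2.0, (310, "S"): 1.1, (320, "N"): 0.8,
}

def anomalous_positions(ddG, n_sd=2.0):
    """Flag positions whose mean ddG deviates by more than n_sd standard deviations
    from that of the same residue type at other positions in the structure."""
    flagged = []
    for (pos, res), value in ddG.items():
        others = [v for (p, r), v in ddG.items() if r == res and p != pos]
        if len(others) < 2:
            continue  # need a distribution of comparable residues
        mu, sd = statistics.mean(others), statistics.stdev(others)
        if sd > 0 and abs(value - mu) > n_sd * sd:
            flagged.append((pos, res, value))
    return flagged

print(anomalous_positions(ddG))
```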
We also examined the ΔΔG mutation profile for each position across all TH2 and TH3 residues from the perspective of the ancestral allele, which was inferred from multiple alignments of Plasmodium species [19]. For positions where the 3D7 residue was not the probable ancestral residue, the free energies were renormalized so that the putative ancestral residue ΔΔG was zero. Across all TH2 and TH3 residues the median free energy change from the ancestral residue was calculated at each position for all 19 possible non-ancestral residues. Observed residue polymorphisms were categorized as increased or decreased free energy compared to the median ΔΔG. In neutral non-functional sequence, the free energy will not impact the sequence and thus any amino acid may be equally likely to evolve as a polymorphism at a given position. In a sequence with conserved function, ΔΔG is usually minimized and thus the majority of changes would be expected to fall below the median. Conversely, outside forces such as intermolecular interactions or other external selective forces are usually required to elicit drastic changes in ΔΔG. To detect the likely effects of positive selection, the observed categories were compared using a binomial distribution that models the neutral expectation of increases and decreases relative to the median being equally observed.

Figure 1. WebLogo of amino acid sequence of circumsporozoite protein from Malawi and the Gambia. Panels A and B show the WebLogos for Malawi and the Gambia, respectively. In Panel A, the TH2 region (blue) and TH3 region (pink) are underlined. The TH2 epitope maps almost exclusively to the α-helix, while the TH3 epitope maps to the flap. The polymorphic residues and the types of amino acids that populate these sites appear to be conserved between two geographically disparate African parasite populations. Bits represent the information content, a relative measure of sequence conservation: higher values indicate conservation and lower values indicate sequence diversity at a position. doi:10.1371/journal.pone.0062427.g001

Table 1. Shared TH2 epitopes between Malawi and the Gambia.

Table 2. Shared TH3 epitopes between Malawi and the Gambia.
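As a minimal illustration of the binomial comparison described in the Point Mutagenesis section above (whether observed polymorphisms fall above or below the per-position median ΔΔG more often than expected under neutrality), the test reduces to the sketch below. The counts are placeholders rather than the study's data, and SciPy ≥1.7 is assumed for binomtest.

```python
from scipy.stats import binomtest

# Placeholder tallies of observed polymorphic residues relative to the
# per-position median ddG of the 19 possible non-ancestral residues
above_median = 17   # energetically less favorable than the median change
below_median = 6    # energetically more favorable than the median change

# Under neutrality, increases and decreases should be equally likely (p = 0.5)
result = binomtest(above_median, above_median + below_median, p=0.5)
print(f"{above_median}/{above_median + below_median} above median, "
      f"two-sided p = {result.pvalue:.3f}")
```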
Results
In Malawi, the 100 participants had an average multiplicity of infection (MOI) of 2.35, leading to 235 parasite variants being identified [5]. These represented 57 unique parasite haplotypes. In the Gambia, there were 25 unique haplotypes among 55 variants. Of these haplotypes, 13 TH2/TH3 haplotypes were shared between the two sites, representing 23% of Malawian and 50% of Gambian isolates. Individually, 13 TH2 types were shared and 10 TH3 types were shared (Tables 1 and 2). Upstream from the TH2 and TH3 epitopes (amino acids <311), several polymorphic sites were also identified in both populations. Further analysis of these was precluded as these regions were not part of the available protein crystal structure. Variable amino acids upstream of TH2 were conserved between the sites with the exception of an E295K mutation found only in the Gambian population (Figure 1).
In order to assess the extent of genetic diversity and the extent of genetic similarity between populations, we investigated the nucleotide diversity of this 220 bp region of CS. In general, both populations had high levels of haplotype diversity (Hd: Malawi = 0.957 and Gambia = 0.953), which essentially is the measure of two random strains within the population having different haplotypes. The average number of pairwise nucleotide differences expected between two strains (K) was similar (6.00 vs. 6.68), with similar overall nucleotide diversity (π; 0.023 vs. 0.025), which is K normalized for the length of the sequence. Measures of nucleotide diversity are summarized in Table 3. The level of nucleotide diversity across this region is known to be uneven; therefore we re-evaluated nucleotide diversity (π) for each population using a sliding window approach (50 bp size, 25 bp slide) across the T cell epitopes using the program DnaSP (Figure 2). As expected, the regions of peak nucleotide diversity correspond to the TH2 and TH3 epitope regions, with the maximum diversity seen between positions 897 to 972 (corresponding to the TH2 epitope). Interestingly, in both populations, all polymorphisms were nonsynonymous, indicating that this region of the pfcsp gene is likely to be under strong selection. Since diversification of haplotypes within this region may also occur due to recombination, we estimated the minimum number of recombination sites using DnaSP v5.10.01 [23]. A high number of recombination sites were predicted in both populations (8 in Malawi and 7 in the Gambia). In Malawi, the majority of these (6) were located within the TH2 (nucleotides 978-1029) and TH3 (nucleotides 1100-1135) epitopes themselves, suggesting recombination may be important for generation of diversity in these sites (Table 3). A single recombination event was detected between the two epitopes. Between the two populations, the fixation index (FST), a measure of the population differentiation due to genetic structure, was 0.034, suggesting little genetic distance between the populations. We confirmed this using both phylogenetic and statistical methods. A Hudson's nearest neighbor analysis, a test measuring how often the nearest neighbors are from the same population, showed no significant geographic separation of haplotypes (Snn = 0.440; not significant). A neighbor joining network was constructed in MEGA and visually shows no evidence of geographic clustering (Figure S2). These data suggest that the levels and distribution of nucleotide diversity are similar in Malawi and the Gambia, and that these two populations, separated by an extended geographic distance, are remarkably genetically similar at the nucleotide level.
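The sliding-window profile mentioned above (50 bp windows advanced in 25 bp steps) can be outlined as follows; this is again a minimal sketch with placeholder sequences, not the DnaSP routine.

```python
from itertools import combinations

def window_pi(seqs, start, size):
    """Per-site nucleotide diversity within one window of an alignment."""
    window = [s[start:start + size] for s in seqs]
    pairs = list(combinations(window, 2))
    diffs = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs)
    return diffs / (len(pairs) * size)

def sliding_pi(seqs, size=50, step=25):
    """Return (window start, pi) tuples across the alignment."""
    length = len(seqs[0])
    return [(start, window_pi(seqs, start, size))
            for start in range(0, length - size + 1, step)]

# Placeholder 220 bp alignment, not the pfcsp data
aln = ["ACGT" * 55, "ACGA" * 55, "ACTT" * 55]
print(sliding_pi(aln)[:3])
```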
Previous reports have suggested that the association between TH2 and TH3 epitopes is not random [1]. We assessed the distribution of the specific epitopes of TH2 and TH3 among the 235 Malawian and 55 Gambian variants identified. We used simulations to test whether the TH2 and TH3 epitopes were randomly associated within the sequence haplotypes. Based on these simulations, a model of random associations was rejected in Malawi [p<0.001, degrees of freedom (df) = 490, G2 statistic = 824]. Among the Gambian isolates, we did not see a statistically significant overall departure from the null model of random assortment for the entire population [p = 0.298, df = 171, G2 statistic = 180.3], perhaps secondary to the lower statistical power due to the limited number of isolates. However, there was significant over-representation of certain combinations. If the assortment of T cell epitopes were random, we would expect the observed frequencies to be equal to the predicted frequencies from the contingency table analysis. Instead, we see many pairings in which the observed frequency is significantly higher or lower than the predicted frequency of a pairing (Figure 3). The complete contingency table is shown in Table S1, and the list of all statistically significant pairings is shown in Tables S2 and S3. This suggests that, similar to what was seen in Kenya, the associations between TH2 and TH3 are not random and the possible combinations that occur within natural populations are constrained by biology and/or by limited time for recombination to randomly reassort the mutations.
Patterns of amino acid polymorphisms within the epitopes were then assessed. The sequence logo of the amino acid sequence for both countries (Figure 1) suggests positional bias in the diversity of CS. There were no significant differences between the two populations, with the exception that position 295 is variable in the Gambia, but monomorphic among the Malawian isolates. Within the TH2 and TH3 epitopes, the distribution and frequency of amino acid type were evaluated (Figure 4), showing highly similar amino acid polymorphisms, with similar frequencies, between the populations. Interestingly, within Malawi, we found 118 (50.2%) variants having at least a TH2 or TH3 epitope within one amino acid of the 3D7 (RTS,S vaccine) epitopes, while 230 (97.9%) have a TH2 or TH3 epitope within two amino acids of the 3D7 epitopes. Using the sequence logo, we identified ten sites within the TSR domain which are most highly mutable, namely positions 314, 317, 318, 321, 322, 324, 327, 352, 357, and 361 (information content ≤3). These polymorphic sites predominantly involve positively or negatively charged residues. Interestingly, positions 314, 317, 318, 321 and 324 can be populated by either positive or negative residues, suggesting that any charge-charge interactions are poorly conserved.
Using the recently published crystal structure PDBID 3VDJ, we sought to conduct structural mapping of the highly mutable sites to gain insight as to how they are spatially oriented and related to one another. A surprising feature of the 3VDJ structure is its lack of resemblance to homologous domains in proteins such as thrombospondin, f-spondin and ADAMTS13, which have two antiparallel β sheets and one additional antiparallel strand, all held together by disulfide bridges [19]. The CS structure, on the other hand, features a short α-helical portion capped by a loop that folds onto the structure, and the N-terminal strand is ordered into an α-helix tethered beneath the flap by a hydrophobic stacking interaction of Trp 331 into the antiparallel β sheets. The highly mutable sites map to the α-helix, formed by the TH2 epitope, and the flap, formed by the TH3 epitope (Figure 5). Furthermore, the novel pocket created by this unusual structure comprises highly conserved residues. The most polymorphic residues point away from the pocket. The surface views, rendered with a probe having a radius of 1.4 angstroms, the size of a water molecule (Figure 6), show that the conserved pocket is quite large and readily accessible to solvent. Furthermore, the rear surface of the structure is highly conserved. Similarly, we examined the structural mapping of those combinations of epitopes that were identified as significantly over-represented in our analysis of TH2 and TH3 association (Tables S2 and S3). Examination of the pattern of polymorphism within and between TH2 and TH3 epitopes did not reveal any patterns consistent with spatial interaction (compensatory mutations) suggestive of intramolecular interactions within an epitope or between epitopes. This may suggest that the interactions underlying the selection of the observed polymorphisms are entirely intermolecular rather than due to functional structural limitations. Given the disruptive nature of the amino acid changes predominantly facing one side of the protein, this would be consistent with a diversifying pattern of intermolecular interaction of the intact protein, consistent with immune evasion (e.g. disruptive binding to epitope-specific antibodies rather than HLA-binding peptide epitopes) or co-evolution with a host receptor.
Calculation of Gibbs free energies on exhaustively mutagenized structures can provide information on the structural constraints of a protein. Given the newly evolved fold-flap and pocket in the P. falciparum CS, polymorphic changes could reflect a lack of structural constraint in this region. To study the energetic constraints and effects of point mutations, we performed a comprehensive point mutation analysis of the structure using the MUMBO software. This yields an estimate of the Gibbs free energy required for each of the possible alternate states, indicating the favorability of making each of the 19 residue substitutions theoretically possible at each position in the reference sequence (Table S4). After quality control checks to validate the appropriateness of the method for the CS structure, we searched for residues which behaved anomalously (differing by at least 2 standard deviations) when changed from the reference state relative to similar residues at other positions. Five such residues were identified. Substitution of Asn 340 with Leu, Ile, or Val was predicted to be particularly favorable on energetic grounds (Figure S3), suggesting that mutation from a polar residue towards an aliphatic residue was highly permissible. Substitution of Gln 343 to the aromatic residues His, Tyr, Trp, and Phe was strongly disfavored in this analysis, and a similar trend was observed for Ser 332 and Ile 342. Substitution of Gly 341 by any other amino acid generates a substantial energy increase. This is supported by comparison between species of malaria, in which Gly 341 is conserved among all species, while the other residues have one alternate state (S332T, N340V, I342V, and Q343R) [19]. Upon mapping these residues to the structure (Figure 7), they clustered behind the conserved hydrophobic pocket, falling on β-strand 2, except Ser 332, which packs against β2. The observed tight packing within the structure probably imposes spatial constraints, disfavoring the incorporation of large amino acid substitutions. Gly appears to be selected for its small size, given the van der Waals clashes likely to be generated by substitutions of this smallest amino acid. The location of these restricted residues in relation to the pocket suggests that both the pocket and the packed core need to be highly conserved for stabilization of the molecule.
The mutagenesis studies can also be used to identify sites likely to be under selection by applying inductive reasoning. The presence of polymorphisms in a protein can be due to a lack of evolutionary constraint and/or selective pressures leading to diversification. Beginning with the supposition that, in the absence of other forces or constraints (e.g. functional or immune interactions), a protein will evolve towards a more stable conformation, one might expect an excess of energetically favorable residues and polymorphisms arising over time. However, if an energetically unfavorable residue were to populate a position more frequently than expected, an intramolecular or intermolecular selective pressure may be acting on the sites of mutation or polymorphism. We examined the relative energetics of all 19 mutational possibilities at each site across TH2 and TH3. If intramolecular interactions determine the sites of polymorphism, then polymorphic sites could be expected to have lower median ΔΔGs on average, reflecting less constraint (i.e. a greater subset of energetically accessible/reasonable residues). Initial comparison revealed a slight but insignificant difference in median ΔΔGs between fixed and polymorphic positions (average: 3.14 vs. 1.41, respectively; p = 0.43, t-test). Upon excluding the hydrophobic sites, which are highly constrained, the difference between the median ΔΔGs of the fixed and polymorphic sites decreased (average −0.86 and −0.39, respectively; p = 0.60, t-test). This suggests that simple intramolecular energy constraints are not appreciably determining the pattern of polymorphism within TH2 and TH3.
To determine if intermolecular forces play a role in shaping the diversifying polymorphisms in TH2 and TH3, we devised a simple and conservative test for intermolecular forces. If there are no intermolecular selective forces acting on a site, then we expect that observed mutations will be energetically more favorable and increase protein stability. In the worst case, a protein may be under no constraints and essentially adrift, with random residue changes occurring regardless of energetics. In this case we would expect that observed residue changes would be equally likely to be greater than or less than the median ΔΔG at a given position.
Thus, we would expect a 50/50 neutral model if we aggregated across TH2 and TH3. However, we observe 17 polymorphisms with ΔΔG greater than the median and only 5 less than the median (p = 0.00845, exact binomial test) from the predicted ancestral state (Figure 8). For example, position 317 contains Lys and Glu, whereas energetically favorable mutations to Leu, Ile, Val, Tyr, Trp, and Phe do not substantively populate position 317. Similarly, the dominant mutations 318 Glu/Gln/Lys, 321 Gln/Lys, 322 Lys/Thr, 324 Gln/Lys, and 361 Gly/Glu all have more energetically favorable options which do not appreciably manifest themselves. Given this relative unfavorability within the context of the protein compared to our conservative neutral model (presuming this protein has no energetic constraints), this suggests that external intermolecular selective pressures, either immunological or functional (e.g. receptor binding), have shaped the pattern and nature of the TH2 and TH3 polymorphisms.
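The quoted p-value follows from a one-sided exact binomial tail under the 50/50 neutral model; the minimal sketch below reproduces it without any statistics library.

```python
from math import comb

def binomial_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided exact binomial tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 17 of the 22 observed polymorphisms lie above the per-position median DDG.
print(binomial_tail(17, 22))  # ~0.00845, matching the value quoted in the text
```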
Discussion
In this study, we describe polymorphism in the P. falciparum gene pfcsp in two natural parasite populations and map predicted amino acid substitutions onto the recently elucidated crystal structure of the C terminal end of the CS protein. Our analysis then investigates how these polymorphisms might impact intramolecular interactions and may be shaped by intermolecular interactions. Such analyses are important for several reasons. First, concerns have been raised about the impact of antigen diversity on the development of effective malaria vaccines. Recent studies have suggested that diversity in vaccine targets can seriously compromise efficacy, as a recently tested apical membrane antigen 1 (AMA-1) vaccine was found to be efficacious only against those parasites with AMA-1 alleles similar to the variant in the vaccine construct [33,34]. Previous work has been contradictory regarding selection in this region. Some studies have suggested that the selective pressures put on CS by naturally acquired immunity and vaccine induced immunity appear to be modest [1,6,13,35]. On the other hand, this region has an extreme skew of nonsynonymous polymorphisms compared to synonymous polymorphisms [5,8], which suggests there is selection leading to the diversification of this region. Even weak or modest selection has the potential to affect the long term utility of a vaccine by promoting selection of vaccine resistant strains. Analysis of pfcsp variants in breakthrough infections in RTS,S vaccine recipients have not shown strain selection, but these were in Phase II trials, and may have been underpowered to detect all but the strongest effect [1,6,35]. Second, the variation can shed light on the biological role of the protein. Having the empirically determined crystal structure elucidated for a malaria vaccine antigen provides an opportunity to investigate the interplay among sequence variation, structure/ function requirements, and host immune selection in natural parasite populations. The integration of structural analysis with clinical response to infection has previously been used in evaluating the impacts of diversity on P. falciparum apical membrane antigen 1 (pfama1) [4].
We chose two well-separated parasite populations for our analysis on the understanding that inter-population differences, as well as intra-population sequence diversity, would be informative. However, the overall sequence diversity in pfcsp was similar in both populations (Tables 1 and 2, Figure 1), did not differ from that observed in previous studies in Africa [13,18,36], and showed very little genetic differentiation between the two populations (Table 3 and Figure 4). This suggests that genetic drift may not be an important source of variation at this locus, and that the results presented from the two parasite populations in the study are likely generalizable to much of Africa [18].

Figure 6. The information content in bits, binned at increments of 0.25, was mapped to the crystal structures via a color scheme indicating the magnitude. The information content is a relative measurement of sequence conservation, with higher values being more highly conserved and lower values having more diversity at that sequence position. This value is based upon R_seq, which is determined by the difference between the maximum possible entropy and the entropy observed in the distribution at that location [30]. The maximum sequence conservation per site is dependent on the number of distinct symbols possible (20 for amino acids) and is therefore 4.32 bits for protein sequences. In this figure, red residues (IC < 2.75) are the most highly mutable, followed by yellow (IC < 3.0). The majority of residues that are highly variable face the external matrix and are not associated with the back of the molecule or included within the pocket. doi:10.1371/journal.pone.0062427.g006
In a previous study in Kenya, Waitumbi et al [1] found evidence that associations between TH2 and TH3 epitopes were nonrandom, leading to the suggestion that there are functional limitations on polymorphism in this region of the gene. However, in that study only the most common pfcsp alleles were assessed. Our study confirms this finding in two additional African cohorts, showing that within a population, certain TH2 and TH3 combinations appear to be over-and under-represented. There are several potential explanations for this phenomenon. First, as suggested by these authors [1], this may represent structure/ function limitations that restricts the potential combinations of T cell epitopes within a CS variant. Our structural studies do indicate that the TH2 and TH3 regions are in close proximity; however, the most polymorphic residues are solvent-exposed and lack correlated mutations approximating each other in physical space, suggesting that direct interactions are unlikely (provided that this is the biologically active conformation of the molecule). The analysis did not reveal any patterns consistent with spatial interaction (compensatory mutations) allowing for intramolecular interactions within an epitope or between epitopes. This suggests that all of the interactions underlying the selection of this polymorphism are intermolecular rather than due to functional structural limitation. Such intermolecular interactions may even include CS-CS associations in the formation of the sporozoite coat [37][38][39]. Second, these combinations may represent the impact of a selective force limiting the distribution of haplotypes. Assessment of the complete population of parasites from Lilongwe, Malawi (Table S1) shows that while some combinations are clearly dominant (e.g., 22 of TH2-1/TH3-1 type), there is a plethora of uncommon variants that are still circulating with a wider distribution of TH2 and TH3 linkages. This may suggest that recombination may randomly generate diversity through this region, but selective limitations placed on the parasite population prevent the enrichment of certain pairings. This selection could potentially be driven by the overall immunity to specific variants that may fluctuate over time or potentially even occur within the mosquito vector. Third, it is not surprising that a strong association between TH types may exist due to the close physical proximity of the two epitopes within CS.
The patterns of amino acid substitutions in the TSR domain of CS were assessed in the two geographically distant populations, showing that the most mutable amino acids (information content < 3.0 on the sequence logo) were confined to the TH2 and TH3 epitopes. The frequency of mutated amino acids at each position was similar between the different populations, suggesting that certain polymorphisms are preferred (Figure 4). These polymorphic sites are highly correlated with those seen in Kenya and Peru [1,40].
The extensive mutagenesis analysis of TH2 and TH3 detected no significant differences between the fixed and polymorphic positions. Additionally, our analysis of the energetics suggests that there is a disproportionate number of energetically unfavorable polymorphisms, implying that intermolecular forces are acting to select for such changes (Figure 8). Such intermolecular interactions are more likely at the protein surface, supported by the fact that these polymorphisms are confined exclusively to the front of the molecule, surrounding, but not involving, the pocket (Figure 6). The extensive analysis of simulated mutagenesis clearly suggested that residues surrounding and behind the pocket are not permitted to vary and have specific energy requirements (Figure 7), denoting that this region must remain conserved, likely for functional or structural reasons. Thus, one scenario is that functionally important sites requiring specific amino acids, such as the pocket itself and residues closely packed against it, are conserved to maintain binding to a conserved host receptor, whereas surface sites tolerate mutation that evades host immune recognition which might otherwise interfere with the function of the pocket. Charge reversals would indicate relaxed constraints, needing only to meet the criterion of being able to engage in polar interactions. Mutating to a different charge meets that criterion while making the site more difficult to recognize by a residue-specific interaction, such as may occur in the immune response, and affords strains a new charge as a means of evading the immune response while retaining functionality. So long as this unfavorable mutation is not sufficient to cause structural perturbation of requisite function, the evolutionary advantage would outweigh the energetic penalty, thereby driving protein evolution energetically uphill under selection for immune evasion. Conversely, the observed polymorphisms could be due to diversification of a portion of the binding ligand combined with conservation of an important functional element with which the pocket interacts. Given the structural co-localization of TH2 and TH3 around the hydrophobic pocket, diversification due to processed peptides and MHC binding dependent only on primary structure now appears less parsimonious.

Figure 7. Significantly energetically constrained amino acid positions identified by MUMBO analysis. The five amino acids identified by MUMBO as having constrained ΔΔG mutational profiles relative to other identical amino acids within the crystal structure are color coded: Asn 340 (red), Gly 341 (orange), Ile 342 (yellow), Gln 343 (green), and Ser 332 (cyan), shown with respect to the surface area of the TH2 and TH3 domains (colored as in Figure 4). These residues cluster behind the conserved hydrophobic pocket and were identified because their mutational profiles differed on average by 2 standard deviations from those of all other identical residues within the crystal structure. doi:10.1371/journal.pone.0062427.g007
Given the disruptive nature of the amino acid changes, it is possible that the polymorphisms themselves so alter the secondary and/or tertiary structure of the CS protein that the crystallography data generated from the 3D7 variant are not a valid scaffold for structural mapping of variant epitope combinations. This could be tested by crystallographic analysis of a variant of CS dissimilar to that of 3D7. Furthermore, both the crystal structure and our predictive structural mapping were performed with no reference whatsoever to the NANP repeats which comprise the bulk of the amino-terminal half of the CS molecule and can be highly variable in number. However, the fact that the C terminus is so tightly folded suggests that it has a solid hydrophobic core and would likely be resistant to structural changes observed or due to distal effects in the NANP repeat.
Taken together, our results suggest that the patterns of diversity within the T cell epitopes of the TSR domain of CS are in part determined by the relative location of polymorphic amino acids within the intact protein structure. While our data argue that intermolecular interactions of the intact protein are likely key to the observed diversity, they do not exclude a role for the T cell responses that have been observed in exposed individuals. However, they do raise the possibility that T cell responses are not the primary driver of polymorphism. Given that the TSR domain is well conserved across species and found 187 times within the human proteome [19], the T cell responses observed may in large part be due to the divergent nature of the TH2 and TH3 region being recognized as non-self by the human host's immune system. Thus, any functional impact that these regions have in driving the T cell response may be a consequence of the diversity rather than the cause of the diversity. In any case, given the limitations of our in silico analysis, this calls for renewed and broader empirical work to elucidate the selective forces driving the diversity in the TH2 and TH3 regions. This should include the evaluation of the potential strain specificity of antibody-mediated immune responses to these epitopes and a better understanding of the impact of vector biology on the selection of parasite variants. Such experiments are required to fully understand the potential impact of large-scale vaccination and to truly optimize vaccine design for CS.

Figure 8. Calculated ΔΔG of observed polymorphic amino acid mutations from the ancestral amino acid residue, relative to the median of all possible mutations at each position. Free energy changes of polymorphisms in TH2 and TH3 are shown relative to the median change from all 19 substitutions from the predicted ancestral allele determined from Plasmodium sp. phylogeny. Mutations that have higher energy than the median are shown in red, while those with lower energy are shown in blue. Positive values represent increases in free energy and thermodynamic instability, while negative values represent decreases in free energy and greater stability. A neutral sequence where energetics have no effect would be expected to fall 50/50 above and below the median, while conservation of intramolecular function would be expected to minimize entropy and lead to lower energy states. Intermolecular interactions can lead to selection for less favorable states, which are significantly enriched in the observed polymorphisms (17 increased vs 5 decreased, p = 0.00845). doi:10.1371/journal.pone.0062427.g008

Supporting Information

Figure S1 Example of the Determination of TH2 and TH3 Pairing From a Parasite Haplotype. This figure shows how each T cell epitope was coded into a unique type and how each pairing was determined. Red characters represent polymorphisms differing from the reference sequence. The unique TH2 types became rows on the contingency table (Table S1) while the unique TH3 types became columns on the contingency table. Each parasite isolate/sequence in the population was coded in a similar manner. This results in each cell in the contingency table being populated by the frequency of each unique TH2 and TH3 pairing. (EPS)

Figure S2 Neighbor-Joining (NJ) Phylogenetic Tree of pfcsp from Malawian and Gambian Isolates. This figure shows the NJ tree for the 220 bp fragment analyzed for the 57 unique haplotypes from Malawi (filled circles) and the 25 unique haplotypes from the Gambia (empty diamonds).
Major branch divisions were estimated by bootstrapping 500 replicates. As suggested from the population genetic statistics, no population structure based upon geographic origin of haplotypes can be inferred. The NJ tree was created using MEGA software, Version 5.
(EPS) Figure S3 MUMBO Analysis. This figure shows the MUMBO analysis of the five residues with special energy requirements. Panel A represents the energy requirements for Ser 332. Panels B, C, D and E show them for Gly 341, Ile 342, Gln 343, and Asn 340, respectively. The Y-axis represents the ΔΔ Gibbs free energy (ΔΔG), while the potential amino acids are on the X-axis. (EPS) | 2016-05-12T22:15:10.714Z | 2013-05-07T00:00:00.000 | {
"year": 2013,
"sha1": "36e685ae71d63b819e5924212db39961fdda281b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0062427&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93391c813c282a6172b140a4eeb66ea3458611b7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
271507143 | pes2o/s2orc | v3-fos-license | Influence of coping with stressful situations on changes in aerobic capacity and post-workout restitution coefficient in the period of immediate preparation for the European men’s cadet wrestling championship
Aim of the study The research goal of the study was to determine the relationship between coping with stressful situations and the level of aerobic capacity and post-workout restitution, as well as the changes that occur between these variables through the period of training camp preceding international men's championship competitions in the cadet age group. Two research hypotheses were verified. The athletes will maintain or improve the results obtained in the performance test and the post-workout restitution coefficient during the immediate preparation period for the European Championships (H1), and the style of coping with stressful situations significantly affects changes in aerobic capacity and the post-workout restitution coefficient during the immediate preparation period for the European Championships (H2). Materials and methods The participants were athletes of the Polish national men's team in classical-style wrestling (n = 16). Coping with stressful situations was examined using the Coping Inventory for Stressful Situations (CISS). Aerobic capacity was analyzed using the Maximal Multistage 20-m Shuttle Run Test. The level of post-exercise restitution was calculated using the Klonowicz coefficient of restitution. Results There was a significant increase in aerobic capacity levels (p < 0.001), a decrease in resting HR (p < 0.002), HR 1′ after the test (p < 0.0031), and HR 5′ after the test (p < 0.007). There was a significant correlation between emotional coping style and avoidant style focused on looking for social contacts vs. HR 3′ after the test (r = 0.60; p < 0.015) and HR 5′ after the test (r = 0.57; p < 0.020). In addition, a correlation was noted between avoidant style and maximum aerobic speed (r = −0.64; p < 0.008), and avoidant style focused on substitute activities vs. distance and maximum aerobic speed (r = −0.72; p < 0.002). Conclusion It is reasonable to implement psychological training and regular monitoring of mental preparation in the national men's team training program for athletes competing in wrestling.
Introduction
In order to adequately prepare an athlete to compete in international wrestling championship competitions, a proper psychological and physiological background is vital (Sabirov, 2022;Korobeynikov et al., 2022a).For an athlete to achieve sporting success, it is necessary to have the ability to respond appropriately to stressful stimuli and to recover quickly from them, regardless of the age group in which they compete (Piepiora et al., 2021a;Atiya et al., 2022).Despite the fact that during a wrestling bout, the work in the anaerobic energy zone dominates (Marković et al., 2018;Pryimakov et al., 2020a), a properly arranged training process for an adolescent wrestler must also include aerobic capacity training as an integral training component (Demirkan, 2015).On a single day, a given athlete may fight several bouts and must be prepared to fight the next day in the final block or for the repechage bouts (and the bronze medal bout if the repechage is won).The volume of a single bout in classical style wrestling in the cadet (U17) age group is 2 rounds of 2 min each, with a 30-s break between rounds.In addition, from the junior (U20) age group onward, athletes wrestle longer-2 rounds of 3 min each with a 30-s break between rounds (United World Wrestling, 2023).For an athlete to succeed at international championship wrestling competitions, in addition to proper mental preparation, they must have an adequate level of aerobic capacity and post-exercise restitution.Such preparation will give you the ability to win a bout at a given championship and prepare for the next one as quickly as possible.In previous studies, the psychological characteristics of wrestlers, athletes participating in combat sports, individual sports, and team sports at the competitive level have been carried out (Tomczak et al., 2013;Piepiora, 2019Piepiora, , 2021;;Piepiora and Petecka, 2020;Piepiora and Witkowski, 2020a,b;Piepiora and Piepiora, 2021;Piepiora et al., 2022;Piepiora and Naczyńska, 2023).Physical fitness, body composition, developmental age, aerobic capacity and its changes after a 3-month training period, eating habits, and somatic development were analyzed in young wrestlers (Clarke et al., 2013;Piepiora et al., 2017Piepiora et al., , 2018;;Witkowski et al., 2018).Physiological criteria for the functional fitness of national team wrestling athletes have been determined (Pryimakov et al., 2020b).It has been shown that the proper aerobic-anaerobic preparation of wrestlers is one of the determinants of the rate of recovery after high-intensity training (Sawczyn et al., 2015).In addition, it has been shown that the level of aerobic capacity, together with the rate of post-exercise recovery, can be one of the controlling tools in the training process of high-performance athletes in combat sports (Prokopczyk and Sokołowski, 2020).To date, the associations of stress coping style with the level of aerobic capacity and the level of post-exercise restitution and their changes during the training camp during the period of direct competitive preparation for championship competitions in wrestlers in the cadet age group have not been analyzed.The authors set out to analyze the results of the aerobic capacity test and the rate of post-exercise restitution in relation to the style of coping with stress in athletes in a training camp that was in the period of direct competitive preparation for the European Cadet Wrestling Championships.The research goal of the study was to determine the relationship between coping with 
stressful situations and the level of aerobic capacity and postexercise restitution, as well as the changes occurring between these variables through the period of the training camp preceding the international championship competition.The authors posed two research hypotheses: H1: The athletes will maintain or improve the results obtained in the performance test and the post-workout restitution coefficient during the immediate preparation period for the European Championships.
H2:
The style of coping with stressful situations significantly affects changes in aerobic capacity and post-workout restitution coefficient during the immediate preparation period for the European Championships.
The presented research will indicate the impact of psychological preparation on the physiological capabilities of young wrestlers during the period of immediate preparation for championship competitions.The research undertaken will provide important advice for coaches, sports psychologists, and people working with young athletes preparing for major competitions.Taking into account the period in the training process, the level of preparation should be at a constant or increasing level (due to the upcoming main competitions).Bearing in mind that the athletes are in the adolescence period and before major competitions, the authors hypothesized that their results may depend on their style of coping with stressful situations.
Participants
The study included a group of 16 men on the Polish National Cadet Wrestling Team in classical-style wrestling, with an average age of 16.56 years (SD = 0.54). All those tested were called up to the national team training camp and were in the period of immediate preparation for the European Cadet Men's Wrestling Championships.
Methods
Coping with stressful situations was assessed using the Coping Inventory for Stressful Situations (CISS) questionnaire by Endler and Parker (1990). The questionnaire consists of questions about behavior in stressful situations and allows us to determine the respondent's tendency to use particular styles of coping with stress (task-SST; emotional-SSE; avoidant-SSA). In addition, this questionnaire details two subcategories of avoidant style: engaging in substitute activities (ESA) and looking for social contacts (LSC). While filling out the questionnaire, in individual questions, the respondent determines how much he engages in given activities when he is in a stressful situation. Each question is answered using a 5-point scale (from 1 to 5). Individual ratings mean: 1-never; 2-very rarely; 3-sometimes; 4-often; 5-very often. The study used the CISS questionnaire adapted by Strelau et al. (2007) and Strelau and Jaworowska (2020) for use in Polish settings.
Aerobic capacity was explored using the Maximal Multistage 20-m Shuttle Run Test (the "Beep-Test"). This test involves running a designated 20-m distance marked by lines between sound signals ("beeps") at increasing speed and frequency at each successive level. The examinee must cross the designated line before the next signal ("beep"); otherwise, he receives a warning. Receiving a second warning means the end of the test (Léger and Lambert, 1982). To calculate the estimated aerobic capacity (VO2max), the prediction equation of Léger et al. (1988) was used.
The subjects completed the CISS questionnaire once, at the beginning of the National Team Training Camp. The aerobic capacity test, along with the analysis of post-workout restitution, was carried out twice: at the beginning of the training camp (term I) and at the end of the training camp (term II). Capacity tests were conducted without prior training, so that the athletes were rested while performing them.
Statistical analysis
The following indicators were used to analyze the results: mean (M), minimum (Min), maximum (Max), standard deviation (SD or ±), and significance level and probability (p). The normality of the distribution was tested using the Shapiro-Wilk test. The significance of changes between terms I and II was analyzed with the t-test for dependent samples; in cases where the variable under study was not normally distributed in at least one of the terms, the Wilcoxon paired rank-order test was used. Cohen's d coefficient was used to estimate the size of the effect. When the test value was less than 0.2, the result was considered insignificant; between 0.2 and 0.49, it was small; between 0.50 and 0.80, it was medium; and when it was greater than 0.80, it was considered strong (Cohen, 1988). Pearson's test and Spearman's rank correlation coefficients were used to determine the strength of relationships between variables in the 2nd and 1st terms of the study. When the test value was less than 0.4, the result was considered low; between 0.4 and 0.69, it was considered medium; and 0.70 or above was considered strong (Akoglu, 2018).
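As a sketch of the pipeline described above (Shapiro-Wilk normality check, paired t-test or Wilcoxon signed-rank test, and Cohen's d), the snippet below compares one variable between terms I and II. The input values are hypothetical, and the paired-differences formula used here for Cohen's d is an assumption; the authors may have used a different convention.

```python
import numpy as np
from scipy import stats

def compare_terms(term1, term2, alpha=0.05):
    """Paired comparison of one indicator measured at terms I and II."""
    term1, term2 = np.asarray(term1, float), np.asarray(term2, float)
    diff = term2 - term1
    # Use the t-test for dependent samples when both terms look normal, otherwise Wilcoxon.
    normal = stats.shapiro(term1).pvalue > alpha and stats.shapiro(term2).pvalue > alpha
    test = stats.ttest_rel(term2, term1) if normal else stats.wilcoxon(term2, term1)
    cohens_d = diff.mean() / diff.std(ddof=1)  # effect size computed from the paired differences
    return test.pvalue, cohens_d

# Hypothetical VO2max values (mL/kg/min) for four athletes at terms I and II.
p, d = compare_terms([48.1, 50.3, 46.7, 49.0], [55.6, 57.9, 54.8, 56.1])
print(f"p = {p:.3g}, Cohen's d = {d:.2f}")
```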
Results
In the stress coping style test, the athletes scored an average of 60.1 (SD = 6.25) points in the task style, 41.9 points (SD = 8.19) in the emotional style, and 42.5 points (SD = 9.53) in the avoidant style. In the subcategories of avoidant style, they scored an average of 18.4 (SD = 5.15) points in the style of engaging in substitute activities and 17.3 (SD = 3.59) points in the style of looking for social contacts. Between the 1st and 2nd test terms, the tested men of the Polish National Cadet Wrestling Team showed a significant, large (Cohen's d = 2.80) increase in the level of aerobic capacity (VO2max [mL/kg/min]) by an average of 7.7 mL/kg/min, a large (Cohen's d = 0.88) decrease in resting HR by an average of 8.1 bpm, a medium (Cohen's d = 0.59) decrease in HR 1 min after the test by an average of 13 bpm, and a medium (Cohen's d = 0.68) decrease in HR 5 min after the test by an average of 14.8 bpm. The other indicators tested showed no statistically significant (p ≥ 0.05) differences between the tested terms (Table 1).
The study at term I showed a positive, medium significant correlation between SSE and HR 1′ after the test (r = 0.55; p < 0.05) and a negative, medium significant correlation between SSA-LSC and HR 5′ after the test (r = −0.64; p < 0.01; Table 2).
Between the 2nd and 1st terms, there was a negative, medium significant correlation between SSA and maximum aerobic speed at the last level of the "Beep-Test" (r = −0.64; p < 0.008), a negative, medium significant correlation between SSA-ESA and distance in the "Beep-Test" (r = −0.55; p < 0.029), a negative, strong significant correlation between SSA-ESA and maximum aerobic speed at the last level of the "Beep-Test" (r = −0.72; p < 0.002), a positive, medium significant correlation between SSA-LSC and HR 3′ after the test (r = 0.60; p < 0.015), and a positive, medium significant correlation between SSA-LSC and HR 5′ after the test (r = 0.57; p < 0.020; Table 3).
Discussion
Both research hypotheses were partially confirmed.The results of the study show that there was a significant increase in VO 2 max [mL/kg/min] and a decrease through the grouping period: resting HR, HR 1′ after the test, HR 5′ after the test in the athletes.The level of COR 3′ decreased and COR 5′ increased, but both at statistically insignificant levels.Stress coping style was also shown to be significantly associated with indicators of the performance test and HR 1′ after the test.No significant associations were noted with post-workout restitution indicators.The results of the study show that athletes with a higher level of emotional stress coping style obtained a higher number of heart contractions 1 min after the test on the first test date.Athletes with an evasive stress coping style of seeking social contact obtained a lower number of heart contractions 5 min after the test on the 2nd test date, but this style had a significant effect on increasing heart rate 3 min after the test and 5 min after the test compared to the 1st test date.Wrestlers with a higher evasive style of coping with stress had a significantly lower maximum aerobic speed of the last level score in the 2nd term compared to the 1st term of the study.In addition, subjects with an evasive stress-coping style of engaging in substitute activities scored significantly lower distance and maximum aerobic speed of the last level in the 2nd term of the study.Previous research has shown that the only stress coping style suitable for competition in competitive sports is a task-based style (Secades et al., 2016).Younger wrestlers are more likely to react emotionally to stressful stimuli and seek interpersonal connections, which significantly reduces performance when competing in professional sports (Tomczak et al., 2013;Namazov et al., 2019).It has been proven that senior wrestlers deal with stress differently than younger athletes competing at the professional level (Rutkowska et al., 2020).This may be influenced by adolescence and the transition to sports at the senior level.During this period, psychosocial stress is among the biggest concerns of young athletes (Lundqvist et al., 2023).Therefore, it is reasonable to provide psychological preparation as early as possible, carried out in a methodical manner, giving athletes the ability to solve stressful situations and recover from them as quickly as possible.Importantly, other studies have shown that older athletes competing in the international elite experience a decline in mental toughness with age compared to younger athletes competing at the same level (Korobeynikov et al., 2022b).This indicates that there is a constant need to monitor the mental toughness of athletes competing in professional sports, even after they have achieved the highest results at major international competitions.This will allow for the identification of the moment of reduction in the level of psychophysiological capabilities of a particular wrestler.The results of the conducted research and other studies indicate that competing at the highest level in wrestling requires parallel sports preparation that includes fitness, technical, tactical, and mental training, which should be properly conducted from a young age, matching the age, level, and needs of the training period the athlete is in Stenling et al. (2015), Sabato et al. (2016), Piepiora et al. (2021b), andPiepiora et al. 
(2023). Research results indicate that the last training period for young wrestlers preparing for championship competitions should include psychological training. However, it should be taken into account that mental training is an integral part of competitive combat sports, significantly influencing the results obtained at the decisive moment, regardless of the age group in which one competes (Andreato et al., 2022).
This indicates that proper psychological preparation, including the ability to control one's functioning at key moments and to cope appropriately and effectively with stressful situations, should be continuously developed during the wrestlers' training process. It also indicates that continuous research should be conducted on the effectiveness of psychological preparation programs in professional sports across individual age groups, and on translating the most effective methods of psychological preparation into professional practice.
The presented research was characterized by limitations affecting the size of the sample, as it was conducted on a small, selected number of athletes attending a camp preparing for the European Men's Cadet Wrestling Championship. Taking into account the period of adolescence, the selected groups, and the various periods occurring in the training cycle, it would be reasonable to conduct multi-year research analyzing these relationships. Covering competitive athletes of all age groups and a larger group of athletes than those directly preparing for championship competitions would make it possible to compare successful players competing in the senior group with adolescent athletes. Moreover, in the case of conducting comparative longitudinal studies, it would be possible to identify young athletes who have the psychological predispositions to compete at the championship level in senior sports. At the same time, given the lack of similar studies and the correlations that were shown in the research, in the opinion of the authors, it would be valuable, for a more detailed understanding, to analyze the psychophysiological changes over the entire cycle of preparation for championship competitions along with the sports results achieved at them.
Conclusion
Significant correlations were found between the style of coping with stress and scores in the performance test and the number of heart contractions in athletes preparing for the European Cadet Men's Wrestling Championships. The demonstrated significant changes between the 1st and 2nd terms of the study indicate that it is necessary to introduce psychological preparation into the training program of the national team, along with regular evaluation of its effectiveness.
TABLE 1
Descriptive characteristics of performance and restitution variables in the 1st and 2nd terms of the study in the Polish National Cadet Wrestling Team men (n = 16).
TABLE 2
Correlations between aerobic capacity and post-workout restitution variables and stress coping style in the 1st and 2nd terms of the Polish National Team study in cadet wrestling men (n = 16).
a: Pearson test; b: Spearman's rank correlations; values p < 0.05 are in bold. | 2024-07-28T15:17:30.019Z | 2024-07-26T00:00:00.000 | {
"year": 2024,
"sha1": "7a61203fcfefb88b4319d6c4a39404e33bc115cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5c9d3c4f9f407381e58f2b6eef727f88c5b7b946",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
225035566 | pes2o/s2orc | v3-fos-license | Thermodynamics-inspired Macroscopic States of Bounded Swarms
The collective behavior of swarms is extremely difficult to estimate or predict, even when the local agent rules are known and simple. The presented work seeks to leverage the similarities between fluids and swarm systems to generate a thermodynamics-inspired characterization of the collective behavior of robotic swarms. While prior works have borrowed tools from fluid dynamics to design swarming behaviors, they have usually avoided the task of generating a fluids-inspired macroscopic state (or macrostate) description of the swarm. This work will bridge the gap by seeking to answer the following question: is it possible to generate a small set of thermodynamics-inspired macroscopic properties that may later be used to quantify all possible collective behaviors of swarm systems? In this paper, we present three macroscopic properties analogous to pressure, temperature, and density of a gas, to describe the behavior of a swarm that is governed by only attractive and repulsive agent interactions. These properties are made to satisfy an equation similar to the ideal gas law, and also generalized to satisfy the virial equation of state for real gases. Finally, we investigate how swarm specifications such as density and average agent velocity affect the system macrostate.
Introduction
Recently, there has been considerable interest in better understanding collective behavior in a diverse range of natural as well as engineered complex systems [1] [2]. Swarms can be found in both the natural and engineered worlds, and represent complex systems in which agent-based interactions are often simple and local. Like other self-organizing complex systems, swarms are characterized by the formation of one or more macroscopic-scale collective patterns which can be extremely difficult to predict. These difficulties are in fact observed across a diverse range of selforganizing multi-agent systems (MAS), including traffic jams [3], economic systems [4], and even human societies [5]. Quantification and classification of the collective behavior of a swarming system not only is necessary to better understand and predict the dynamics of these complex systems, but also it can give us more intuition about more complex self-organizing systems [6].
Generally, this task can be conducted in two ways as shown in Fig. 1. The first method relies on obtaining the microstate information, i.e. the state information of all the individual agents in the swarm, and then using it to classify collective behavior. This approach is infeasible in practice due to the considerable difficulties in obtaining microstate information. The second method relies on estimating the macrostate information using macroscopic-scale state variables and then classifying collective behaviors. This approach offers greater potential, but requires significant investigation beyond what exists in the state-of-the-art.
Similarly, researchers in the field of thermodynamics have long known about the Lennard-Jones potential function [7], which is used to characterize the interactions between pairs of fluid particles. However, knowledge of classifying 'collective' behaviors of materials into different phases (e.g., solid, liquid, and gas) predates the work of Lennard-Jones in 1924. These observations drive us to a natural query: can swarms be modeled as a thermodynamic system?
There are several analogies that can be drawn between fluid and swarm systems. First, like swarming agents, fluid particles typically interact locally with each other but not over long distances. This is evident from Fig. 2, which shows the Lennard-Jones potential asymptotically tending to zero with increasing distance between fluid particles. Second, as with fluid molecules under the Lennard-Jones function, most agent interactions are usually modeled as being attractive and/or repulsive in nature (in addition to other local responses) [8][9][10].
In fact, fluid-like behavior has been reproduced in robotic swarms for collective movement [11] and surveillance tasks [12] [13], but the authors do not yet know of any literature that explicitly defines macroscopic swarm states in the context of fluids. Existing literature provides only limited insights into the macroscopic-scale description of swarm properties [14]. For example, Jantz et al. [15] have used thermodynamics-related concepts to indirectly measure the performance of a swarm assigned the task of finding an exit using statistical interactions between agents. However, these works have not been extended to generate a macroscopic description of a swarm system.
Nonetheless, there are some recent efforts [16][17] to better understand the order-disorder phase transition in a simple class of swarms based on the Vicsek model [18]. Our work follows a similar direction. The authors believe that defining the swarm's macrostate could play a key role in quantifying, classifying and ultimately predicting the collective, emergent behavior of swarms in the future.
In the following sections, first, a general definition of swarm macrostate is described, then, a swarming model that is inspired by the works of Gazi et al. [8] and Couzin et al. [19] is introduced. Next, two macroscopic properties, viz. swarm temperature (T s ) and swarm pressure (P s ) are defined analogously to their thermodynamic counterparts. The swarm pressure and temperature are shown to satisfy the appropriate ideal gas law. We then extend these ideas to the virial equation of state and derive virial coefficients for swarms that are analogous to real gas-like behaviors.
Swarm Macrostate
A swarm macrostate can potentially encapsulate global-scale information about the system and help classify collective behavior more easily. At the microscopic scale, the state of a two-dimensional swarm consists of the positions (x_i, y_i) and velocities (ẋ_i, ẏ_i) of the N agents in the system, and hence the state-space dimension is 4N. Within this 4N-dimensional state-space, the macrostate may be defined as a set of macroscopic-scale variables (ϕ) whose evolution is restricted to a low-dimensional manifold such that h(ϕ) = 0. For example, the macroscopic state variables of a gas (i.e. pressure, volume, and temperature) are related via the low-dimensional manifold given by h(P, V, T) = P V − nRT = 0, which represents the ideal gas law. Now the obvious questions are: what macroscopic properties or macrostate variables are needed to describe a swarm system, and how should we find them? In the following sections, we propose thermodynamics-inspired swarm macrostate variables (ϕ) and seek to find the nonlinear low-dimensional manifold h(ϕ) = 0.
Figure 2: (a) The Lennard-Jones force function, where ϵ is the intermolecular potential energy, r represents the relaxed intermolecular distance, and ϵ_0 is a constant [20]. (b) Thermodynamics-inspired pairwise agent interaction function f_pair.
Swarm System Dynamics
To introduce the concept of a fluid-like swarm macrostate, the swarm dynamics will be restricted to two spatial dimensions. The collective behavior of the swarm will evolve in this 2-D world, with the state x_i of each agent given by x_i = [x_i, y_i, θ_i]^T, where [x_i, y_i] denotes the agent's position vector and θ_i is its heading. The single integrator dynamics of each agent are given by equation (2), where v denotes the constant linear speed of all agents, ∠⃗f_i represents the heading of the swarm interaction vector ⃗f_i for agent i (which will be discussed in subsection 3.1), and the turning response is proportional to |⃗f_i| through the constant k.
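A minimal sketch of one Euler step of these agent dynamics is given below. Since the exact steering law of equation (2) is not reproduced in the text, the rule used here (turn toward the heading of ⃗f_i at a rate proportional to k|⃗f_i|) and the numerical values of v, k, and dt are assumptions consistent with the verbal description.

```python
import numpy as np

def step(pos, theta, f_vec, v=0.1, k=1.0, dt=0.05):
    """One Euler step for N constant-speed agents.
    pos: (N, 2) positions, theta: (N,) headings, f_vec: (N, 2) interaction vectors."""
    desired = np.arctan2(f_vec[:, 1], f_vec[:, 0])            # heading of each interaction vector
    err = (desired - theta + np.pi) % (2.0 * np.pi) - np.pi   # heading error wrapped to [-pi, pi)
    theta = theta + dt * k * np.linalg.norm(f_vec, axis=1) * err
    pos = pos + dt * v * np.column_stack((np.cos(theta), np.sin(theta)))
    return pos, theta

# Tiny usage example with random placeholder interaction vectors.
rng = np.random.default_rng(0)
pos, theta = rng.uniform(0.0, 5.0, (10, 2)), rng.uniform(-np.pi, np.pi, 10)
pos, theta = step(pos, theta, rng.normal(0.0, 0.05, (10, 2)))
```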
Many self-organizing complex systems, regardless of their domain of origin, exhibit a local behavior which is known as "short-range activation and long-range inhibition" [21]. In swarm systems, this agent-based behavior is modeled as attractive and repulsive interactions. In this notion, each agent attempts to avoid straying away from the group (long-range inhibition), and is also actively repelled by neighboring agents (short-range activation). The choice of the specific functions that determine the attractive and repulsive behaviors of individual agents can vary significantly, and this depends heavily on the application being modeled. For example, while biologists use certain behavioral functions to model natural biological systems [22][19], roboticists and engineers may design different agent-based interactions to help the swarm succeed at a particular task [23][24][25][26]. In fact, a similar behavior was proposed for fluid molecules, where a square-well potential was utilized to determine the relationship between potential parameters and the macroscopic behavior of the fluid [27]. While the included work presents a specific model for attractive and repulsive interactions between agents, the insights from this study can potentially be extended beyond these functional forms. In the next few sections we make an analogy between swarm agents and fluid particles for a specific functional form.
Attractive/Repulsive Interaction Model
In the following discussions we define a swarm interaction intensity f and the swarm interaction vector ⃗ f to model the attractive and repulsive interactions between neighboring agents. The swarm interaction intensity f belongs to the class C 1 , i.e. its first derivative is continuous. The attraction/repulsion function is modeled along the lines of the Lennard-Jones potential function. Consequently, the interaction strength between pairs of agents is assumed to decay to zero at large distances to reflect the local nature of these interactions [23]. However, unlike the Lennard-Jones potential function, the interaction function is assumed to be bounded below and above to generate realistic swarm dynamics. Fig. 2 shows the similarities and differences between the proposed interaction function and the Lennard-Jones force function [7].
The pair-wise agent interaction intensity is evaluated by summing the attraction (f_a) and repulsion (f_r) intensities as a function of the distance d, where k_a and k_r are the unitless attraction and repulsion 'gains' which represent the magnitude of attraction and repulsion of an agent by another agent. In most swarming scenarios, the repulsive behavior is expected to dominate at shorter distances, so the repulsion gain is modeled to be higher than the attraction gain (i.e. k_r > k_a). Additionally, the parameters r_r and r_a specify the radii of the repulsion and attraction zones, respectively. Moreover, the parameters s_r and s_a characterize how quickly the repulsive and attractive interaction intensities fade with distance. The authors would like to note that the interaction intensities f_r and f_a represent scalar quantities and must not be interpreted as forces between agent pairs.

Figure 3: Schematic showing interactions between agent i and its neighbouring agent j (with the attraction and repulsion zones shaded), as well as between agent i and a wall boundary constraint. The distances between them dictate whether agent i experiences repulsive or attractive effects.
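The sketch below implements a bounded pairwise intensity f_pair = f_a + f_r using the parameter values quoted in the Figure 4 caption. The logistic (sigmoid) forms are an assumption chosen to reproduce the qualitative curve in Fig. 2b (strong, bounded short-range repulsion; weak mid-range attraction; decay to zero at long range); they are not claimed to be the paper's exact functional forms.

```python
import numpy as np

K_R, K_A = 0.15, 0.006      # repulsion and attraction gains (k_r > k_a)
S_R, S_A = 5000.0, 5000.0   # sharpness of the decay with distance
R_R, R_A = 0.6, 1.8         # radii of the repulsion and attraction zones

def _sigmoid(x):
    """Numerically safe 1 / (1 + exp(x)); clipping avoids overflow warnings."""
    return 1.0 / (1.0 + np.exp(np.clip(x, -60.0, 60.0)))

def f_pair(d):
    """Assumed pairwise interaction intensity: negative = net repulsion, positive = net attraction."""
    f_r = -K_R * _sigmoid(S_R * (d - R_R))   # active mainly inside the repulsion zone
    f_a = K_A * _sigmoid(S_A * (d - R_A))    # active mainly inside the attraction zone
    return f_a + f_r

print(f_pair(np.array([0.3, 1.0, 2.5])))     # strongly repulsive, weakly attractive, ~zero
```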
Modeling Repulsive Boundary Constraints
Most thermodynamic studies of fluids are typically conducted within a set of physical constraints which define the system boundaries. Similarly, swarming operations such as surveillance or search-and-rescue may require the agents to be restricted to a specified area. To replicate these operational conditions, we define a set of virtual boundary constraints around the swarming system which repel agents in their vicinity. As with the pairwise repulsion intensity, we define a boundary repulsion intensity f_bound(d′), where d′ represents the distance of the agent in question to a boundary constraint and s_b characterizes how quickly the boundary effects fade with distance. As shown in Fig. 4, the value of s_b is chosen to model rapid decay of repulsive boundary effects.
Ultimately, the dynamics of an individual swarm agent are a combination of the pairwise interactions with all other swarm agents and the repulsive boundary effects. These are quantified as a swarm interaction vector ⃗f_i for agent i, obtained by summing contributions over all neighboring agents and boundary constraints, where ⃗r_ij represents the position vector from agent i to agent j, and ⃗r_ib is the normal distance of agent i from boundary constraint b. These notations are visualized alongside the shaded repulsion and attraction zones for agent i and neighboring agent j in Figure 3. Additionally, the parameters N and B represent the number of agents and boundary constraints (or walls), respectively.
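A sketch of how ⃗f_i can be assembled from the pairwise and boundary intensities follows. The sign convention (positive intensity pulls agent i toward neighbor j along the unit vector r̂_ij, and boundary terms push along the inward wall normal) and the wall representation are assumptions; f_pair and f_bound stand for the intensity functions discussed above.

```python
import numpy as np

def interaction_vector(i, pos, walls, f_pair, f_bound):
    """Swarm interaction vector f_i for agent i.
    pos: (N, 2) agent positions; walls: list of (point_on_wall, inward_unit_normal) pairs."""
    f_i = np.zeros(2)
    for j, p_j in enumerate(pos):
        if j == i:
            continue
        r_ij = p_j - pos[i]
        d = np.linalg.norm(r_ij)
        if d > 1e-9:
            f_i += f_pair(d) * r_ij / d       # toward j if intensity is positive, away if negative
    for point, normal in walls:
        d_prime = float(np.dot(pos[i] - point, normal))   # normal distance to the wall
        f_i += f_bound(d_prime) * normal                  # boundary repulsion along the inward normal
    return f_i
```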
The direction of the swarm interaction vector dictates the desired heading of the agent, and its magnitude | ⃗ f i | specifies the intensity of the resultant swarm interaction. These quantities are used to simulate the single integrator swarm dynamics as discussed in (2). Figure 4 shows a representative snapshot of the simulation environment with thick black boundary constraints, agents as circles, and the swarm interaction vectors for each agent as purple lines.
Thermodynamic Analogy for Swarms with Non-interacting Agents (f pair = 0)
In thermodynamics, the ideal gas law is an equation relating the pressure (P), temperature (T), and volume (V) of an ideal gas as follows: P V = nRT, where n represents the number of moles and R is the universal gas constant. Unlike a real gas, an ideal gas assumes the lack of interactions between gas molecules. The universal gas constant can be written as R = N k_b/n, where N is the number of particles and k_b is Boltzmann's constant, resulting in the following representation of the ideal gas equation: P V = N k_b T, which indicates that the value of P V/(N T) remains constant for an ideal (or perfect) gas. A key query now presents itself: Does there exist an analogous 'thermodynamic law' that describes a swarm system at the macroscopic scale, given that agent behaviors may potentially be governed by non-physical laws? Specifically, can we define swarm macro-properties that are analogous to pressure and temperature, i.e. swarm pressure (P_s) and swarm temperature (T_s), such that they satisfy the following equation (9): P_s V = N k_s T_s, where V represents the volume constraints placed on the swarm, N represents the number of swarming agents, and k_s may be considered to be a 'swarm constant' analogous to Boltzmann's constant. We now define swarm pressure and swarm temperature by drawing inspiration from analogous definitions for fluid systems. While the analysis assumes ideal gas behavior and neglects pairwise agent interactions (i.e. f_pair = 0), this assumption is relaxed in later sections. The reader should also note that due to the boundary constraints placed on the swarm in this study, only isochoric (i.e. constant-volume) thermodynamic processes can occur in the simulated system.

Figure 4: Two-dimensional simulator snapshot. The short black lines represent the headings and the purple lines represent the interaction vectors ⃗f_i. The swarming area is a 5 m × 5 m square (i.e. V = 25) and is fixed. In this study, the other agent specifications are: k_r = 0.15, k_a = 0.006, s_a = s_r = 5000, r_r = 0.6, and r_a = 1.8. Also, for all repulsive boundaries: k_b = 0.1 and s_b = 500.
Swarm Pressure (P s )
From a macroscopic perspective, the pressure of a gas is simply the magnitude of the force applied on the wall per unit area. From a microscopic perspective, on the other hand, the kinetic theory assumes that pressure is caused by the force associated with individual atoms striking the walls. However, in the context of swarms, agents typically do not apply any 'force' to their surrounding environment. Therefore, we build upon the concept of gaseous pressure and evaluate swarm pressure by measuring boundary effects using the boundary repulsion intensity f_bound. To measure these boundary effects in the two-dimensional simulator, we evaluate the time-averaged magnitude of the boundary interactions across the entire agent population, divided by the total length L of the system boundaries: P_s = (1/L) ⟨ Σ_i Σ_b f_bound(d′_ib) ⟩_t, where the sums run over all N agents and B boundaries and ⟨·⟩_t denotes a time average.
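Following this definition, swarm pressure can be estimated directly from a simulation log. The sketch below assumes that at every time step the simulator records the total boundary interaction intensity summed over all agents and walls; the logged values in the example are placeholders.

```python
import numpy as np

def swarm_pressure(boundary_intensity_log, boundary_length):
    """Time-averaged total boundary interaction intensity per unit boundary length.
    boundary_intensity_log[t] = sum over agents and walls of f_bound at time step t."""
    return float(np.mean(boundary_intensity_log)) / boundary_length

# Example: a 5 m x 5 m box has a total boundary length of 20 m.
print(swarm_pressure([0.41, 0.38, 0.45, 0.40], boundary_length=20.0))
```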
Swarm Temperature (T s )
In thermodynamics, finding an explicit microscopic-scale expression for temperature is a difficult task. The kinetic theory of gases, however, does indicate that temperature is directly proportional to the total kinetic energy of the system, which itself is proportional to the square of the average particle velocity. Since swarm agent interactions are typically not physics-based, the expectation that temperature is proportional to the square of the agent velocity may not be justified. To examine the relationship between swarm temperature (T_s) and the absolute agent velocity (v), we run the simulation for various agent numbers N and velocities v and record the corresponding P_s values. Defining the swarm temperature as T_s ≜ v^α, we evaluate the term P_s V / (N T_s) (= k_s) for various values of α ∈ [0.1, 2]. Figure 5 shows the variability of the 'constant' k_s for the various possible exponents of v. It is evident that k_s does not remain constant for an arbitrarily chosen definition of T_s. As shown, α ≈ 1 results in k_s being almost constant (in this case k_s ≈ 0.303), so T_s = v satisfies the ideal equation of state quite well. While these simulations help us define the relationship between swarm temperature and agent velocity, they do not yet include pairwise agent interactions and therefore do not yet provide a realistic macroscopic-scale description of the swarm. The next section relaxes this assumption by including pairwise agent interactions and modeling the swarm as a real gas.
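The exponent search can be reproduced with a short script: for each candidate α, compute k_s = P_s V / (N v^α) across all recorded runs and measure how far it departs from a constant (here via the coefficient of variation). The input arrays stand in for recorded simulation outputs.

```python
import numpy as np

def best_temperature_exponent(P_s, N, v, V=25.0, alphas=np.linspace(0.1, 2.0, 20)):
    """Return the exponent alpha for which k_s = P_s*V/(N*v**alpha) is flattest.

    P_s, N, v: 1-D arrays over simulation runs; V: fixed arena area.
    """
    cv = []
    for a in alphas:
        k_s = P_s * V / (N * v**a)
        cv.append(k_s.std() / k_s.mean())       # coefficient of variation of the 'constant'
    cv = np.array(cv)
    return alphas[int(np.argmin(cv))], cv
```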
Empirical Equation of State for Swarms with Interacting Agents (f pair ≠ 0)
Our previous discussion indicated that we can identify a thermodynamics-inspired macroscopic-scale description of a swarm based on the ideal gas law. Specifically, we demonstrated that in an 'ideal gas-like' swarm, i.e. a swarm without agent-to-agent interactions, the macroscopic-scale state variables are related such that the system evolves on a low-dimensional manifold characterized by (9). In this section, we relax the 'ideal gas' assumption to model realistic swarm behavior with agent-to-agent interactions given by (3). In thermodynamics, the value of any macroscopic property X can be written as the sum of an ideal term and a residual term, X = X^ideal + X^residual [28], where the residual term may be attributed to non-idealities such as particle interactions. The empirical equation of state, also known as the virial equation of state, generalizes the ideal gas law to real gases as

P V / (N k_B T) = 1 + Bρ + Cρ² + Dρ³ + · · · ,

where ρ = N/V denotes the density, the density-dependent terms on the right-hand side represent 'residual' terms, and the coefficients B, C, D, etc. are functions of temperature and depend on the fluid substance being modeled. Specifically, the second virial coefficient, B, arises from the interaction between a pair of molecules; the third virial coefficient, C, depends upon interactions within a cluster of three molecules; D involves a cluster of four molecules; and so on for the higher-order terms [28].
Taking inspiration from the virial equation of state for gases, we can generalize the ideal swarming equation of state to model realistic swarms by adding residual terms that correct for the deviations caused by agent interactions:

P_s V / (N k_s T_s) = 1 + a_1 ρ + a_2 ρ² + a_3 ρ³ + · · · , (13)

where a_1, a_2, etc. are functions of the swarm temperature T_s. In principle, an infinite number of residual terms could be included in (13), resulting in an infinite number of virial coefficients. However, the magnitude of the virial coefficients for higher-order terms is much smaller than for lower-order terms, since the likelihood of simultaneous higher-order interactions among several agents drops significantly. Consequently, we restrict the analysis to four residual terms. The exact number of higher-order virial terms that may be neglected cannot, in general, be specified for arbitrary swarms, just as it cannot be for arbitrary gases.
The virial coefficients are identified by performing Monte Carlo simulations with p different agent velocities and q different agent densities (ρ = N/V). The swarm pressure P_s given by (10) and the swarm temperature T_s are recorded for each simulation and are used to calculate the cumulative residual term ψ = P_s V / (N T_s k_s) − 1. As expected, the agent non-idealities result in different magnitudes of ψ for each simulation run. Using a swarm density matrix Γ built from the q density values, and re-writing (13) in matrix form as Γ A = Ψ^T, we can obtain the virial coefficient matrix A in the linear least-squares sense as A = (Γ^T Γ)^{-1} Γ^T Ψ^T, where Γ ∈ R^{q×m} is the density matrix, Ψ ∈ R^{p×q} is the matrix of residual values, and A ∈ R^{m×p} is the coefficient matrix; each row of A is associated with one virial coefficient and specifies how that coefficient varies as a function of the swarm temperature T_s. In general, the accuracy of the equation of state increases as more virial coefficients m are included in the analysis. In practice, however, this approach is limited by both the quality and the quantity of the simulation samples. In the presented work and Monte Carlo simulations, four residual terms were considered (m = 4).
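A sketch of the least-squares identification of the virial coefficients, assuming ψ has already been computed for every (temperature, density) pair; the matrix orientation follows the dimensions quoted above (Γ ∈ R^{q×m}, Ψ ∈ R^{p×q}, A ∈ R^{m×p}), and the variable names are illustrative.

```python
import numpy as np

def fit_virial_coefficients(rho, psi, m=4):
    """Fit psi(T, rho) ~ a_1(T)*rho + ... + a_m(T)*rho**m by linear least squares.

    rho: array of q densities; psi: (p, q) array, one row per swarm temperature.
    Returns A with shape (m, p): row k gives virial coefficient a_k as a function of T_s.
    """
    Gamma = np.column_stack([rho**k for k in range(1, m + 1)])   # q x m density matrix
    # Solve Gamma @ A = Psi^T in the least-squares sense (one temperature per column).
    A, *_ = np.linalg.lstsq(Gamma, psi.T, rcond=None)
    return A
```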
Discussion
In this section we discuss the tripartite relationship of macroscopic properties and the potential of using residuals to design specific task-oriented swarms.
Tripartite Relation of the Macroscopic-scale Properties
The previous sections introduced the macroscopic properties of swarm pressure and swarm temperature. Along with swarm density, these three properties may be used to describe the macroscopic state of a swarm, as for fluid systems. As shown in Fig. 6, as swarm density and swarm temperature (which is proportional to agent velocity) increase, the swarm pressure increases as well. Specifically, within a fixed simulation area (i.e. constant 'volume'), a higher swarm density results in more 'collisions' (or interactions) with the boundary constraints. Thus, these swarm behaviors mimic our knowledge of the thermodynamics of real gases. Additionally, knowledge of any two of the three macroscopic-scale variables enables us to identify the third using the low-dimensional manifold h(P_s, T_s, ρ) = 0 shown in Fig. 6. Fig. 7 shows the cumulative residual term as a function of swarm temperature and swarm density. The residual term indicates the deviation of the swarm from ideal-gas-like behavior, which also corresponds to the deviation from non-interacting agent behavior. Knowledge of the residual term may be helpful depending on the context or task assignment of the swarm. For example, if the swarm is tasked with patrolling or surveillance, it may be desirable to spread out the agents so that they cover large areas without significant interactions or redundancy of effort. In this scenario, swarm engineers may benefit from selecting macroscopic parameters such that the swarm behavior mimics an ideal gas, i.e. has the smallest residual term. On the other hand, if the swarm has to perform collaborative tasks, such as building structures or mapping, then significant agent interactions may be required. Consequently, swarm engineers would benefit from selecting macroscopic parameters that result in a large cumulative residual term, ensuring that agents interact repeatedly in their environment.

Figure 7: Unit-less cumulative residual term as a function of swarm density and temperature. As shown, for any given temperature T_s, the minimum residual term can be identified at a specific density in the corresponding ρ-ψ plane. At this density, agents have the fewest interactions with each other and mimic ideal-gas behavior most closely.
Concluding remarks and future works
Swarm engineering is currently making the transition from a research-centric endeavor to industrial applications, and its scalability will depend on being able to control large numbers of agents with relatively few control parameters. The presented thermodynamics-inspired macroscopic variables (swarm pressure, swarm temperature, and swarm density) offer a potential set of control parameters for a swarm governed by attractive-repulsive effects. The results also indicate that an empirical thermodynamics-inspired equation of state yields a tripartite relationship between these macroscopic-scale properties, and that this relationship can be used to determine any one macroscopic variable given the other two.
For unknown large-scale swarms, valuable information (e.g. the number of agents, the absolute agent velocity, and the operational coverage) is encoded in these macroscopic-scale properties (i.e. the macrostate) of the system, providing a quantitative representation of the collective behavior of the swarm. Future work will leverage this information (along with the existing thermodynamics knowledge base on phase transitions) to predict qualitative changes in swarm dynamics. Future work will also extend this approach to three-dimensional swarms, which creates the possibility of studying macroscopic swarm dynamics in the context of equilibrium thermodynamic processes, such as isobaric, isentropic, and isothermal processes, with subsequent extension to non-equilibrium thermodynamic processes.
The presented work also does not fully address the relationship between the local interaction function and the macroscopic-scale properties, the so-called micro-macro link. In future work, we intend to classify different collective behaviors of the swarm from limited macroscopic-scale data. It is worth noting that, for an unknown swarm, these macroscopic-scale properties are much easier to measure than the microscopic-scale states of all agents. In fact, similar issues were faced by the early thermodynamics researchers trying to analyze fluid behavior in the 19th century. Our future work will seek to leverage the significant advancements in thermodynamics over the past two centuries and re-purpose them to study large-scale swarms.
"year": 2023,
"sha1": "e77523a1840b93c1a710bff6d306f152566d32a2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "03709024b66a5a72b9c1dcfa794920c67ad10a31",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Metagenomic evidence for taxonomic dysbiosis and functional imbalance in the gastrointestinal tracts of children with cystic fibrosis
Cystic fibrosis (CF) results in inflammation, malabsorption of fats and other nutrients, and obstruction in the gastrointestinal (GI) tract, yet the mechanisms linking these disease manifestations to microbiome composition remain largely unexplored. Here we used metagenomic analysis to systematically characterize fecal microbiomes of children with and without CF, demonstrating marked CF-associated taxonomic dysbiosis and functional imbalance. We further showed that these taxonomic and functional shifts were especially pronounced in young children with CF and diminished with age. Importantly, the resulting dysbiotic microbiomes had significantly altered capacities for lipid metabolism, including decreased capacity for overall fatty acid biosynthesis and increased capacity for degrading anti-inflammatory short-chain fatty acids. Notably, these functional differences correlated with fecal measures of fat malabsorption and inflammation. Combined, these results suggest that enteric fat abundance selects for pro-inflammatory GI microbiota in young children with CF, offering novel strategies for improving the health of children with CF-associated fat malabsorption.
These results support a model of CF GI dysbiosis and dysfunction in which the malabsorption of dietary fat selects for a pro-inflammatory enteral microbiome.
Methods
Subjects and samples. The samples and source subjects for this analysis were collected as part of a prior study comparing the fecal microbiota of children aged < 3 years with and without CF. This study was approved by the Seattle Children's Hospital Institutional Review Board, all procedures were carried out in accordance with the approved guidelines, and informed consent was obtained for all subjects. Subject inclusion/exclusion criteria, as well as demographic information, have been previously described 3 . The current analysis included not only the specimens described in our prior publication 3 , but also additional specimens collected from the same subjects after the earlier analysis had begun. The final sample set (after removing low-coverage samples as described below) comprised 104 fecal samples from 14 children with CF (sampled between the ages of 15 days and 5 years) and 12 children without CF (sampled between the ages of 55 days and 3.5 years), with 2-5 samples per subject collected over an approximately one-year period. In several analyses described here, we additionally binned samples by age group (e.g., first, second, or third year of life) to further control for potential age-related differences in the microbiome. Detailed information regarding the included specimens and source patients is provided in Supplementary Table S1.
Fecal fat and calprotectin analyses. Fecal fat content was measured by the acid steatocrit method, performed as previously described for each sample in the collection 3 . Calprotectin, a product of neutrophils, was used as a fecal measure of inflammation and was quantified using an FDA-approved enzyme-linked immunosorbent assay as previously described 3 .

Metagenomic sequencing. Sample processing and DNA extraction were performed as previously described 3 . Briefly, sequencing data for this study were generated by Illumina HiSeq-2000 sequencing using the Nextera platform. The Human Microbiome Project (HMP) protocol was used for processing reads 4,5 . Specifically, BMTagger was used to remove human reads. Duplicates were removed using the documented HMP protocol. Runs for which a pair failed were not duplicate-filtered. Reads were quality-trimmed using HMP scripts, modified to work with single-end runs. Reads shorter than 60 bases after quality trimming were removed. Four samples that had fewer than 10 million reads after filtering human reads were also removed from the analysis. This process resulted in a total of 104 samples with an average of 58 million reads per sample.
Taxonomic profiling and analysis. The taxonomic composition of each sample was defined using metagenomic phylogenetic analysis (MetaPhlAn 6 ; version 1.7.3). To examine variation in taxonomic profiles across samples, a principal component analysis (PCA) was performed. Differentially abundant taxa in CF versus non-CF samples were identified using the Wilcoxon rank-sum test with false discovery rate (FDR) <0.1. To examine the role of E. coli in CF vs. non-CF samples, we performed several analyses using both the original taxonomic profiles and taxonomic profiles in which E. coli was excluded and the abundances of the remaining species were renormalized within each sample. In addition, we confirmed that our results held when excluding samples that were taken from children who had been administered antibiotics in the prior 60 days, and again when excluding samples from children who were breastfed at sampling.
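As a minimal illustration of the per-taxon comparison described above, the sketch below applies a Wilcoxon rank-sum test to each taxon and a Benjamini-Hochberg correction at the stated FDR threshold. It assumes the MetaPhlAn relative abundances have been loaded into a pandas DataFrame with samples as rows and taxa as columns; the data layout and names are illustrative rather than the authors' actual pipeline.

```python
import pandas as pd
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

def differential_taxa(abund, is_cf, fdr=0.1):
    """Per-taxon Wilcoxon rank-sum test (CF vs. non-CF) with BH-FDR filtering.

    abund: DataFrame (samples x taxa) of relative abundances.
    is_cf: boolean Series aligned with abund.index.
    """
    pvals = pd.Series({t: ranksums(abund.loc[is_cf, t], abund.loc[~is_cf, t]).pvalue
                       for t in abund.columns})
    reject, qvals, _, _ = multipletests(pvals.values, alpha=fdr, method="fdr_bh")
    return pvals.index[reject].tolist(), pd.Series(qvals, index=pvals.index)
```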
Functional annotation of metagenomic reads.
To determine the presence and relative abundance of genes in each metagenomic sample, reads were mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG). Specifically, each sequencing read was aligned to a peptide database containing the peptide sequences from all annotated KEGG organisms (KEGG 7 ; v. 67.0, July 15 th , 2013 weekly release) using mBLASTx with standard parameters and accepting all matches with an E-value < 1 (in accordance with HMP protocol 4,5 ). Each read was then annotated according to the KEGG Orthology groups (KOs) associated with the identified alignments using the 'top gene' approach that was previously described and carefully validated 8 . Notably, while significantly fewer KOs were identified in CF versus non-CF samples on average (rank-sum p < 0.03), when adjusted for the number of reads per sample (which were on average lower in CF samples), this difference became non-significant.
To streamline the analysis, we analyzed the data at the level of KEGG functional pathways and modules by summing the relative abundances of all KOs associated with each pathway or module. Pathways and modules were further filtered to verify that downstream analysis considers only bacterial pathways/modules. Specifically, a pathway (module) was included in our analysis only if at least 1% of bacterial genomes in KEGG contained at least 1 KO from that pathway (module), and if these bacterial genomes contained at least 5% (20%) of the KOs in the pathway (module) on average. Using this criterion resulted in a list of 146 and 409 bacterial pathways and modules, respectively.
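The pathway-level summarization described above reduces to a simple aggregation, assuming a mapping from each KEGG pathway (or module) identifier to its member KOs is available; the mapping object and column names below are hypothetical.

```python
import pandas as pd

def pathway_abundance(ko_abund, pathway_to_kos):
    """Sum KO relative abundances into pathway-level abundances for each sample.

    ko_abund: DataFrame (samples x KOs); pathway_to_kos: dict mapping a pathway
    or module ID to the list of KO IDs it contains.
    """
    cols = {}
    for pathway, kos in pathway_to_kos.items():
        present = [k for k in kos if k in ko_abund.columns]   # KOs observed in the data
        cols[pathway] = ko_abund[present].sum(axis=1)
    return pd.DataFrame(cols)
```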
Comparative statistical analysis of functional profiles. For each pathway/module, CF samples were compared to non-CF samples using the Wilcoxon rank-sum test including a multiple comparisons correction using a 5% false discovery rate (FDR) threshold for both pathways and modules. Considering the relatively small number of samples available, several analyses were done by comparing all CF samples to all non-CF samples, ignoring age and subject identity. In additional analyses, samples were further binned by age group, to better control for subject age. For each pathway/module, the median number of non-zero KOs (i.e., KOs identified) across all samples was calculated, and only pathways and modules with more than 20 or 5 non-zero KOs, respectively, were considered. PCA was used to explore variation in functional profiles across samples. To further examine the patterns observed in this PCA, we additionally applied MUSiCC, a novel marker genes-based normalization scheme for accurate profiling of gene abundances in the microbiome 9 . In addition, we confirmed that our results held when excluding samples that were taken from children who had been administered antibiotics prior to sampling, both within 30 and 60 days, and when excluding samples from children who were breastfed at the time of sampling. We further confirmed that our main findings held when restricting our dataset to include only a single sample from each subject (from the first year of life), to verify that our results were not biased by the presence of multiple samples per individual.
BWA-based analysis of the impact of E. coli on fatty acid biosynthesis pathway.
To verify that the observed depletion of the fatty acid biosynthesis pathway is not solely the result of increased E. coli abundance in CF samples, we re-mapped the reads from all samples to a set of genome clusters using BWA 10 , following the method described in Greenblum et al. 11 . We analyzed the relative abundance of the fatty acid biosynthesis pathway using this alternative mapping method, confirming that it reproduced results obtained with the original BLASTx-based mapping (specifically, the depletion of this pathway in CF samples; P = 2.9e-05, Wilcoxon rank-sum test). We then removed all reads that mapped to the E. coli genome cluster and applied the same analysis, confirming that the fatty acid biosynthesis pathway was still significantly depleted in CF samples (P = 0.039).
Creation of butyrate and propionate catabolism modules.
To create functional modules representing butyrate catabolism and non-catabolism, a list of 8 enzymes from the butyrate metabolism pathway found to be associated with butyrate catabolism 12,13 was compiled based on a detailed literature survey. The 14 KOs corresponding to these 8 enzymes were defined as the "butyrate catabolism" module, while the other 50 KOs representing enzymes in the butyrate metabolism pathway were defined as the "butyrate non-catabolism" module. Removing bcd (K00248) or both bcd and atoA-atoD (K01034, K01035), for which literature support was weaker, from the "butyrate catabolism" module did not qualitatively change the results reported below. We similarly partitioned the propionate metabolism pathway into propionate catabolism (19 KOs) and propionate non-catabolism (43 KOs) modules 14,15 .
Results
Taxonomic analysis of pediatric CF fecal samples reveals taxonomic dysbiosis that diminishes with age. In a previous cross-sectional microbiota analysis, we showed that E. coli was markedly more abundant in fecal samples from young children with CF than in those without CF 3 . This prior study focused on this striking E. coli dysbiosis, but did not characterize in detail how these differences related to the age of the source subjects, how these differences impacted the rest of the microbiota, and how community-wide dysbiosis links to E. coli abundance. To address these questions, here we obtained Illumina shotgun sequence reads from an expanded sample set, including a total of 104 fecal specimens collected over a period averaging approximately 12 months from 26 children (14 with CF and 12 without CF), comprising 52 CF and 52 non-CF samples, and characterized their microbiota using metagenomic phylogenetic analysis (MetaPhlAn 6 ). We found that CF samples had a high relative abundance of Proteobacteria (including but not limited to E. coli) and Actinobacteria and low abundance of Firmicutes, Bacteroidetes and Verrucomicrobia compared to samples from children without CF ( Fig. 1a; see also Supplementary Fig. S1). For this sample set, CF fecal microbiota also exhibited significantly lower α -diversity (Shannon index) than did non-CF microbiota (p < 0.0063, two-sided t-test). Importantly, however, this difference was no longer significant after excluding E. coli (p = 0.094; Methods), suggesting that the expansion of E. coli in CF fecal communities is not accompanied by a significant difference in the microbial diversity of the rest of the microbiota.
Going beyond phylum-level, we further characterized the taxa that differed significantly in relative abundance between CF and non-CF samples (Supplementary Table S2; Methods). We found that although Firmicutes was relatively depleted as a phylum in CF samples (P < 0.01), one order of Firmicutes, Lactobacillales, was significantly enriched in CF samples (P < 0.001), including the known pathogens E. faecalis and E. faecium, which are known to frequently exhibit antibiotic resistance 16,17 . In addition, the Firmicutes genus Veillonella was significantly enriched in CF samples (P < 10 −5 ), including the species V. parvula, which has been identified in the lungs of children with CF 18 . In contrast, the Firmicutes order Clostridiales (which includes many taxa that contribute to GI immune homeostasis and development 19,20 ) was significantly depleted in CF samples (P < 10 −4 ); one of the only two Clostridiales species that were enriched in CF samples was C. difficile, well-known for its pathogenic potential after infancy 21 . Importantly, since environmental factors such as antibiotic exposure and breastfeeding can impact an infant's microbiome [22][23][24][25] , we confirmed that analyses yielded similar results when excluding all samples from children who were breastfed at the time of collection, or samples taken after antibiotic treatment up to 60 days, as well as when controlling for E. coli abundance (see Supplementary Tables S3-S5).
To further characterize how the phylogenetic differences observed above change with age, we performed a principal components analysis (PCA) of the obtained taxonomic profiles. As shown in the resulting PCA plot (Fig. 1b), microbiota from control samples were largely distinguished from those of CF samples by the first component, owing mostly to differences in E. coli abundance (evidenced by the PCA loadings). Notably, however, CF samples from younger children were more clearly separated from younger control samples, while samples from older children with CF seemed to converge towards samples from older controls. The second principal component, predominantly governed by the relative abundance of Bifidobacterium spp, further separated younger children from older children in both groups. Next, to specifically examine the role of E. coli in CF and non-CF microbiota composition over time, we performed a similar PCA analysis after excluding the abundance of E. coli from the microbiota of each sample ( Fig. S2; see Methods). The resulting CF and non-CF samples were still largely separated on the PCA plot, mostly owing to the abundances of the Actinobacterium Bifidobacterium bifidum (which is generally higher in CF samples) and the Firmicute Eubacterium rectale (lower in CF samples). Indeed, a statistical analysis of the abundances of the various species in CF versus non-CF samples while controlling for E. coli abundance confirmed that Eubacterium rectale (as well as several other species) had significantly lower relative abundance in CF samples (Supplementary Table S3). Again, the magnitude of the differences between CF and non-CF samples tended to diminish with source subject age ( Fig. 1 and Supplementary Fig. S2), indicating that the taxonomic dysbiosis was most marked in infancy and waned over time.
Functional metagenomic analysis shows differences in metabolic capacity between CF and non-CF pediatric fecal microbiota. To investigate potential functional differences between CF and non-CF fecal microbiota, shotgun metagenomic reads were mapped to the KEGG database to estimate the abundance of each KEGG orthology group (KO) in each sample (Methods). In total, we identified 13,840 KOs across the entire sample set, with an average of 5,725 KOs per sample. We then summed the abundances of all KOs associated with each microbial pathway (or module) to obtain a comprehensive functional profile of each sample (Methods).
A PCA of the resulting pathway-level abundance profiles demonstrated a clear distinction between samples from young children with CF (≤1 year) and those without CF (Fig. 2). Importantly, however, those distinctions were diminished in samples from older children, mirroring the pattern observed in the taxonomic profile (Fig. 1b). In addition, as observed for taxonomy, functional composition among samples from children with CF tended to differ between younger and older subjects much more than did those from children without CF. Using a novel normalization method that aims to correct potential biases that stem from using relative, rather than absolute, abundances (Methods) produced similar patterns and further highlighted the differences in functional capacity between CF and non-CF fecal microbiota (Fig. S3). These findings suggest that the GI microbiome in young children with CF have functional capacities that differ markedly from those without CF, but that this effect diminishes with age.
Pediatric CF fecal microbiomes have altered capacities for fatty acid metabolism. We next used statistical analysis to identify significant differences in the relative abundance of specific bacterial functional pathways or modules that potentially underlie the separation observed in the PCA results above (Figs 2 and S3; Methods). We identified 17 pathways and 25 modules that were enriched in CF samples, and 36 pathways and 65 modules that were depleted in CF samples, relative to non-CF samples (p < 0.05, corrected for multiple comparisons; see Methods and Supplementary Tables S6-S9). Inspection of these functional differences revealed that multiple pathways and modules for fatty acid metabolism were differentially abundant in CF. Significantly, the KEGG fatty acid degradation pathway was enriched in CF, whereas the fatty acid biosynthesis pathway, as well as two fatty acid biosynthesis modules, were depleted in CF (Fig. S4 and Table 1). This decreased capacity of the CF fecal microbiota for fatty acid synthesis but an increased ability to metabolize fats overall might be expected if fatty acid availability was an important selective force for microbiota in the CF lumen. Examining the relative abundance of these pathways within subjects as a function of age again demonstrated that these differences in metabolic capacity between the CF and non-CF samples generally diminished with time (Fig. S4).
Because the CF fecal dysbiosis we observed previously 3 was characterized by a marked relative enrichment for the Proteobacterium E. coli, particularly among the samples taken at earlier ages, we additionally set out to examine the contribution of E. coli to the functional differences in fatty acid metabolism reported above. To this end, we considered only the 63 samples that had a relative abundance of E. coli of < 5% (since a substantial number of healthy samples, 7 out of 52, had at least 5% E. coli) and again used comparative analysis to identify differentially abundant pathways. We found that, despite the decreased sample size of this analysis, both the enrichment for genes encoding fatty acid degradation and the depletion of those encoding fatty acid biosynthesis in CF samples remained significant (p < 0.02 and p < 0.002, respectively, FDR <5%), indicating that the marked shifts in the abundance of fatty acid degradation and biosynthetic genes in the CF metagenomes could not be attributed solely to the differential abundance of E. coli, but rather involved the wider microbiota. In addition, we confirmed that the depletion of the fatty acid biosynthesis pathway in CF samples is not solely driven by E. coli by using an alternative sequence-based alignment analysis and removing short reads originating from E. coli genomes (see Methods). As for our taxonomic analyses described above, we further confirmed that our findings are not affected by excluding samples collected after antibiotic treatment or during breastfeeding (Supplementary Table S10).
Pediatric CF fecal microbiomes have increased capacities for breakdown of the anti-inflammatory small-chain fatty acids butyrate and propionate, which correlate with fecal measures of inflammation.
In addition to the above general shifts in fatty acid metabolism, we identified CF-associated enrichment specifically in the metabolism of butyrate and propionate -two short-chain fatty acids (SCFAs) produced and metabolized by GI microbiota and important for intestinal health 26 . In defining these pathways, however, KEGG does not distinguish between synthetic and degrading (or catabolic) processes. Considering the difference observed above between fatty acid biosynthesis and catabolism in relation to CF, we conducted a literature survey to manually partition the genes in each of these two SCFA pathways into two modules, one representing genes known to be associated with catabolism and the other representing genes encoding other (i.e., non-catabolic) enzymatic functions (Methods; see Supplementary Tables S11-S12 for full lists). In contrast to the trend observed above for fatty-acid metabolism, both the butyrate catabolism module and the butyrate non-catabolism module were enriched in CF samples; however, the CF enrichment level of the catabolism module was markedly more pronounced (p < 10 −6 vs. p < 10 −3 for catabolism and non-catabolism, respectively; Supplementary Table S13). Moreover, comparing the ratio between the average abundance of each of these modules in CF vs. non-CF samples, we detected a markedly more pronounced increase in the abundance of the catabolic module in CF vs. that of the non-catabolic module (1.95-fold vs. 1.07-fold for catabolism and non-catabolism, respectively). For propionate, only the catabolism module was enriched in CF, and again its relative abundance in CF vs. non-CF samples was much higher than that of the non-catabolic module (p < 10 −3 , 1.47-fold, vs. p = 0.56, 1.02-fold, for catabolism and non-catabolism, respectively; Supplementary Table S13). Plotting the relative abundance of these SCFA modules in CF and non-CF samples over time highlighted the more pronounced CF-associated enrichment of the catabolic modules (compared to the non-catabolic modules), again suggesting that these differences in SCFA metabolism diminish with age (Fig. 3).
The above results indicate that pediatric CF fecal microbiota have altered capacities for metabolism of fatty acids in general, and SCFAs in particular. Importantly, both butyrate and propionate are produced by the GI microbiota during fermentation of non-digestible starches and other carbohydrates 26 . In turn, both SCFAs (particularly butyrate) play important roles in GI epithelial health, including enterocyte nourishment and development, as well as ameliorating intestinal inflammation, reinforcing the epithelial defense barrier, and regulating intestinal motility, all of which are dysfunctional in humans and/or animals with CF mutations 2 . The observed enrichment of genes involved in catabolism of butyrate and propionate in the pediatric CF microbiota, which is likely to result in increased breakdown of these SCFAs, would be predicted to increase GI inflammation. In support of this prediction, we showed previously that measures of both fecal fat and inflammation in CF were highly correlated with the magnitude of CF-associated E. coli dysbiosis 3 . To directly explore the link between SCFA metabolism, fat content, and inflammation, we calculated the correlation between the overall abundance of genes for metabolism of butyrate and propionate, fecal fat content, and fecal calprotectin. We found a significant positive correlation between the fecal abundance of both the butyrate and propionate catabolism modules and fecal fat content (r = 0.61, p < 10 −4 and r = 0.47, p < 10 −4 , respectively; Supplementary Table S14). We similarly found a significant positive correlation between the fecal abundance of these two modules and calprotectin (r = 0.5, p < 10 −4 and r = 0.45, p < 10 −4 , respectively). Notably, the two corresponding non-catabolism modules were not significantly correlated with calprotectin (Supplementary Table S14). Combined, these findings could indicate that GI luminal fat selects for microbiota that, in turn, are pro-inflammatory, as schematized in the model in Supplementary Fig. S5.
Discussion
Nutrient malabsorption, intestinal dysfunction, and malnutrition are among the most important and troubling early manifestations of CF. The malabsorption of fats in CF is largely due to inadequate secretion of the enzyme pancreatic lipase into the intestinal lumen, with contributions from other mechanisms 27 , resulting not only in fatty stools, but also to loss of nutritionally important dietary fat and fat-soluble vitamins. Our results suggest a model wherein excess dietary fat within the CF GI lumen also plays an indirect role in the intestinal inflammation that characterizes childhood CF GI disease by selecting for microbiota that preferentially degrade the SCFAs butyrate and propionate, important molecules for enteric health (Supplementary Fig. S5).
SCFAs are known to have multiple positive effects in the GI tract. For example, butyrate has both growth-promoting and anti-inflammatory effects on enteric epithelia 26,28 , and it was shown to ameliorate intestinal inflammation in animal models of colitis by promoting the differentiation of homeostatic regulatory T cells 20 . Many of its effects in the GI tract apparently are conveyed by inhibiting the activation of both NF-κ B signaling and histone deacetylation 29 . Propionate, by contrast, is used as a substrate for gluconeogenesis and regulates cholesterol synthesis in the liver, potentially impacting nutritional status, but with less of a defined effect on inflammation 30,31 . SCFAs are produced by carbohydrate fermentation in the large intestine, largely by Firmicute bacteria of the order Clostridiales, including those in the genera Eubacterium, Faecalibacterium, Ruminococcus, and Roseburia 31,32 , many of which are depleted in human inflammatory bowel diseases 33 , and all of which were less abundant in the current study among CF microbiota (Supplementary Table S2). Moreover, SCFAs are important sources of energy salvage in people with malabsorption due to pancreatic insufficiency, a key manifestation of CF GI dysfunction 34 . While the enrichment for genes involved in butyrate and propionate catabolism relative to biosynthesis among CF metagenomes likely reflects an altered ratio of SCFA-producing versus SCFA-consuming taxa, it is challenging to determine exactly which species are responsible for the relative enrichment of catabolic genes in CF. Butyrate degradation, for example, is known to be associated with methanogenic archaea in the human colon 35 . An altered abundance of such archaea would have been reflected in our metagenomic (and thus gene content), but not taxonomic, analyses, as the taxonomic reference database had relatively little representation of archaea. Sulfate-reducing bacteria, including species of the genus Desulfovibrio, can also oxidize butyrate, and Desulfovibrio species are present in human feces 36 . Sulfate-reducing bacteria in the feces of diverse human populations have been shown to be capable of fermenting butyrate and propionate ex vivo 37 . Many of these sulfate-reducing bacteria have been shown to oxidize butyrate and other SCFAs using sulfate or nitrate as electron acceptors 12,38,39 ; both sulfate and nitrate are present in the human GI tract 38,[40][41][42] . The GI tracts of children with CF also contain elevated levels of the potential electron acceptor nitric oxide 43 . Therefore, the CF GI luminal environment would be predicted to be favorable for microbial butyrate catabolism, lending further support for our model. Furthermore, a previous metaproteomic study of CF fecal samples found evidence for a relative depletion of butyrate-producing bacteria, in support of our findings, but the species responsible were not identified 44 . The laboratory isolation and/or cultivation of most of these species is technically very challenging, and for some impossible, rendering further study of these concepts difficult.
Similarly, because butyrate and propionate are volatile acids, measurement of their abundances must be performed either on freshly collected fecal samples or on those that have been appropriately processed and stored in airtight containers 45 , neither of which was the case for our samples. In addition, 98% of SCFAs in the colon are absorbed rather than excreted 34 , and inflammatory and malabsorptive GI conditions are often associated with decreased colonic butyrate uptake and utilization 46 . Accordingly, a study comparing GI luminal SCFA measurements in children with vs. without CF would be required to test our model. Nevertheless, there is strong supportive evidence from animal models. For example, mice fed a high-fat diet had lower colonic abundances of butyrate-producing microbes, including Roseburia, and higher abundances of Escherichia and Desulfovibrio, than did mice on a normal diet, resulting in significant reductions in fecal butyrate and compromised GI host defenses that normalized with oral butyrate administration 47 .
While this study focused on bacteria, the GI tract microbiota clearly also includes both viruses and fungi, each of which could conceivably contribute to GI microbial community metabolism 48,49 . For example, the fungus Aspergillus nidulans has been shown to express a transporter for SCFAs (albeit with low affinity for either butyrate or propionate 50 ). Therefore, future work will be required to define the contribution of non-bacterial taxa on community metabolism.
While people with CF frequently receive antibiotics to treat their respiratory disease 51 , and antibiotic treatment can at least transiently deplete butyrate-producing microbiota in the GI tract 52 , we showed previously that pediatric CF fecal dysbiosis was independent of recent antibiotic exposure within the prior 30 days 3 . In this study, we again confirmed that our findings were not likely impacted by antibiotic exposure by excluding all samples that were collected within 30 days of antibiotic treatment (15 samples; Supplementary Table S1), or even those collected within 60 days (20 samples), and repeating our analysis (see Supplementary Table S10). Nevertheless, antibiotics would be likely to contribute to functional depletion of butyrate production capacity by the microbiome. We also confirmed that restricting our analysis to a single sample from each individual did not markedly impact our findings (Supplementary Table S10). Finally, since breastfeeding is known to impact the composition of the GI microbiota 24 , we additionally confirmed that restricting our analysis to samples from non-breastfed infants did not significantly affect our results (Supplementary Table S10).
CF is caused by dysfunction of the epithelial transmembrane ion channel, the CF transmembrane regulator (CFTR); interestingly, butyrate has been shown to increase the expression of CFTR on the epithelial apical surface 53,54 . Therefore, should GI luminal butyrate abundance be decreased in children with CF, treatments that address this imbalance (such as therapies that modify the GI microbiota or replete luminal butyrate concentrations) could improve CF GI function and nutritional outcomes, and subsequent long-term health, through multiple mechanisms.
In conclusion, we found that the fecal microbiomes from children with CF exhibited taxonomic and functional differences from those of children without CF. Computational analysis of these microbiomes indicates that the pediatric CF GI microbiota are selected, at least in part, by the high abundance of unabsorbed luminal fatty acids, and that these CF microbiota are predicted to yield lower amounts of health-promoting SCFAs in the GI lumen (Supplementary Fig. S5). Future research will be required to verify the predicted depletion of butyrate and propionate in the GI tracts of children with CF, and to determine whether treatments to manipulate their microbiota lead to improved outcomes.
"year": 2016,
"sha1": "7bb8a05d1b1ab550d38a32871387e8e0a9a03b9b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep22493.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bb8a05d1b1ab550d38a32871387e8e0a9a03b9b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Prognostic role of preoperative platelet, fibrinogen, and D‐dimer levels in patients with non‐small cell lung cancer: A multicenter prospective study
Background The relationships between coagulation factors and non‐small cell lung cancer (NSCLC) prognosis have been intensively studied. However, no previous study has investigated the combined effects of preoperative platelet (PLT), fibrinogen (FIB), and D‐dimer (D‐D) levels on the prognosis of NSCLC. Methods A multicenter prospective study was conducted over seven hospitals. A total of 395 patients diagnosed with operable NSCLC for the first time were included and followed‐up until disease progression or the end of the study. Baseline demographic and clinicopathological information, and preoperative coagulation test results were collected for each patient. Univariate and multilevel survival analyses were conducted using Cox regression and shared frailty models. Results Multilevel analyses revealed that there was a marginally significant association between elevated PLT level (> 215 × 109/L) and unfavorable progression‐free survival (PFS) (hazard ratio 2.42, P = 0.05), whereas preoperative FIB and D‐D were not significant prognostic factors for PFS (P = 0.31 and 0.30, respectively). Compared to patients with one elevation of the three coagulation factors, patients with at least two elevations of the three factors had a significantly higher risk of cancer progression (hazard ratio 4.62, P = 0.02). Conclusion The number of elevated preoperative coagulation factors may have a significant effect on PFS and could be used to predict the prognosis of NSCLC patients after surgery. Future studies are warranted to further investigate the interactions between these three coagulation factors.
Introduction
Lung cancer is the most common malignant tumor in China. In 2015, 733 300 people were newly diagnosed with lung cancer, corresponding to an incidence rate of 53.4 per 100 000 people. 1 Lung cancer is also the leading cause of cancer death in China. According to the Global Burden of Disease, over 13 million Disability-Adjusted Life Years (DALYs) were lost as a result of lung cancer in China in 2016, accounting for 36.0% of global DALYs from lung cancer. 2 Of the two major types of lung cancer, non-small cell lung cancer (NSCLC) is the most common, constituting approximately 85% of all lung cancer cases. 3 Surgery is the best treatment option for many patients with NSCLC, but the high incidence of local and distant relapse results in poor five-year survival rates. 4 In terms of predicting NSCLC prognosis, cancer stage at diagnosis, performance status, gender, and weight loss are the most widely accepted predictive factors. 3 Other prognostic determinants include age, histologic subtype, and biologic markers, such as p53 gene mutation and K-ras oncogene activation. 3,5,6 In addition to these conventional predictive factors, there is growing research interest in investigating the relationships between blood coagulation factors and NSCLC prognosis. Evidence from biological research has shown that tumor cells and the hemostatic system are highly interconnected. Tumor cells can activate systemic coagulation through multiple pathways and induce hemostatic and fibrinolytic abnormalities, which in turn contribute to cancer angiogenesis and metastasis. 7 Previous research has shown that elevated platelet (PLT) count, fibrinogen (FIB) level, and D-dimer (D-D) level are all associated with poor prognosis in NSCLC patients, [8][9][10][11][12][13] but the results have been inconsistent. 14,15 The majority of these studies were retrospective single-center cohorts with relatively small sample sizes. Furthermore, no previous study has investigated the combined effects of D-D, FIB, and PLT levels on NSCLC prognosis. Therefore, we conducted a multicenter prospective cohort study to investigate both the individual and combined effects of preoperative PLT, FIB, and D-D levels on progression-free survival (PFS) in operable NSCLC patients.
Study design and patients
This study was conducted between April 2016 and December 2017 in seven Grade-A tertiary hospitals located in Jiangsu province, China. Basic information of the seven hospitals is summarized in the Supplementary Appendix. Patients admitted to one of the seven hospitals that met the following criteria were included in the study: (i) diagnosed with NSCLC for the first time; (ii) without distant metastasis (I-III tumor node metastasis [TNM] stage); (iii) underwent surgery (sleeve lobectomy, segmentectomy resection, wedge resection, or pneumonectomy); and (iv) agreed to participate in the study and signed informed consent. The exclusion criteria were: (i) a diagnosis of other primary cancer, severe cardiovascular or respiratory disease, severe rheumatic disease, or severe hematological disease; (ii) severe postoperative complications within 30 days (including death); or a (iii) history of anticoagulant or antiplatelet drug use within two weeks before surgery.
The World Health Organization Classification of Lung Tumors (4th edition) was used for the histological classification of lung cancer, 16 while lung cancer stages were determined in accordance with the International Association for the Study of Lung Cancer staging system. 17 Blood samples were collected at three time points to measure PLT, FIB, and D-D levels: one day after admission (preoperative), 72 hours after surgery, and one day before discharge from hospital. However, because of missing data (> 20%), only the preoperative coagulation test results were used for survival analyses. Other demographic and clinical information collected included: age, gender, anatomic location of the tumor, tumor size, lymphovascular invasion (LVI), visceral pleural invasion (VPI), and type of surgery. The Ethics Committee of Jiangsu Province Hospital approved this study.
Treatment and follow-up
Adjuvant chemotherapy, radiotherapy, or chemoradiotherapy was prescribed to patients according to the Chinese guidelines on the diagnosis and treatment of primary lung cancer (2015 version). 18 Patient follow-up was conducted at three-month intervals until December 2017, disease progression, death, or loss to follow-up. The median follow-up duration was 13.2 (range: 3.0-18.5) months. At each follow-up visit, patients underwent chest computed tomography (CT), abdominal ultrasound scan, and other examinations when necessary. If a patient did not show up for a scheduled follow-up visit, a study nurse contacted the patient or his/her relatives.
PFS was chosen as the study endpoint and was defined as the interval from surgery to local or distant relapse, whichever occurred first. Survival time was considered as censored if the patients died, were lost to follow-up, or were progression-free at the end of the study.
Coagulation assays
Patients' venous blood samples were obtained by trained nurses and immediately sent to each hospital's clinical laboratory. The laboratories in the seven hospitals used the same methods to measure blood PLT count (fluorescent nucleic acid stain method), plasma FIB (clotting method), and D-D concentrations (immunochemical method). The clinical laboratories of the seven hospitals have all passed external quality assessment conducted by Health Commission of Jiangsu Province, indicating that their results are comparable.
Statistical analysis
Patients with missing clinicopathological and demographic data were not included in the analysis. The Markov chain Monte Carlo (MCMC) method was used when preoperative PLT, FIB, or D-D values (42 in total) were missing. 19 The median values of preoperative D-D, FIB, and PLT were used as cutoff values to dichotomize them. Baseline characteristics were described using medians and quartiles for continuous variables and percentages for categorical variables. The Kaplan-Meier method was used to construct survival curves. 20 Univariate survival analyses were conducted using the Cox proportional hazards regression model. 21 Statistically significant potential confounders were included in the subsequent multivariate survival analyses as covariates. Two models were used in the multivariate survival analyses. In the first single-level Cox regression model, we adjusted variables that were found to be statistically significant in univariate analyses. Considering the treatment level differences across hospitals, we used multilevel survival analysis in the second model. In the second multilevel model, both patient-level covariates and hospital-level variations were adjusted using the shared frailty model, which incorporates a random intercept into the Cox proportional hazards regression model. 22 A P value < 0.05 was considered statistically significant. All statistical analyses were conducted using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).
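For illustration, the single-level portion of this analysis can be sketched with the Python lifelines package (the original analyses were run in SAS 9.4, and the shared frailty random-intercept model is not reproduced here); the column names are placeholders.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def single_level_survival(df):
    """Kaplan-Meier curve plus a multivariable Cox proportional hazards model.

    df columns (illustrative): 'pfs_months', 'progressed', and dichotomized
    covariates such as 'plt_high', 'fib_high', 'dd_high', 'tnm_stage', 'tumor_size'.
    """
    km = KaplanMeierFitter()
    km.fit(df["pfs_months"], event_observed=df["progressed"], label="all patients")

    cph = CoxPHFitter()
    cph.fit(df[["pfs_months", "progressed", "plt_high", "fib_high",
                "dd_high", "tnm_stage", "tumor_size"]],
            duration_col="pfs_months", event_col="progressed")
    return km, cph
```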
Baseline demographic and clinicopathological characteristics
We recruited 457 patients, 410 of which met the inclusion criteria and were included in the study. We further excluded 2 patients with unknown progression dates, 12 patients with missing clinicopathological data, and 1 patient without blood coagulation test results from the analyses. The baseline characteristics of the remaining 395 patients are summarized in Table 1 and the Supplementary Appendix. Among the 395 patients, 225 (56.96%) were men and 170 (43.04%) were women, at a median age of 64 years. The majority of the tumors were adenocarcinoma (83.29%), followed by squamous cell carcinoma (13.42%), and other histological types (3.29%). There were 341 (86.33%) patients at stage I-II and 54 (13.67%) at stage III. The medians of preoperative PLT, FIB, and D-D values were 215 × 10 9 /L, 2.75 g/L, and 0.20 mg/L, respectively. Except for age, gender, and histological type, all other clinicopathological patient characteristics were statistically different between the seven hospitals (Supplementary Appendix). Rates of disease progression were also significantly different between the seven hospitals (P < 0.001).
Survival analyses
During the follow-up period, 24 patients experienced NSCLC recurrence. The one-year PFS rates were 92.84% and 97.27% for patients with values of preoperative PLT > median and preoperative PLT ≤ median, 92.38% and 97.95% for patients with values of preoperative FIB > median and preoperative FIB ≤ median, and 93.20% and 96.92% for patients with values of preoperative D-D > median and preoperative D-D ≤ median, respectively. The results from univariate Cox regression indicated that elevated preoperative PLT count and FIB level were both significantly associated with poor outcomes (P = 0.03 and P < 0.01, respectively), whereas preoperative D-D level was not associated with PFS (P = 0.06) (Fig 1). Other factors that were found to be statistically significant in the univariate analyses include gender (P = 0.02), anatomic location (P < 0.001), LVI (P = 0.01), VPI (P = 0.02), TNM stage (P < 0.001), and tumor size (P < 0.001).
As shown in Tables 2 and 3, the results of the single-level and multilevel models were consistent. The random effects in the multilevel model were statistically significant (P < 0.01), indicating that the multilevel model was suitable for fitting our data. After adjusting for the potential confounders, preoperative D-D was not significantly associated with patient outcome (hazard ratio [HR] 1.61, 95% confidence interval [CI] 0.66-3.93; P = 0.30). Preoperative FIB was also unrelated to PFS in the multilevel survival analysis (HR 1.79, 95% CI 0.59-5.43; P = 0.31). The preoperative PLT association with PFS was marginally significant in the multilevel model (HR 2.42, 95% CI 0.98-5.95; P = 0.05).
In order to investigate the combined effects of preoperative D-D, FIB, and PLT levels on PFS, we further assigned each patient a score (0-3) according to the number of elevated preoperative coagulation factors: 0, patients with no elevation in the three preoperative coagulation factors; 1, patients with one elevation in the three preoperative coagulation factors; 2, patients with two elevations in any of the three preoperative coagulation factors; and 3, patients with all three preoperative coagulation factors elevated. The results of multilevel analyses revealed that the risk of relapse was 76% higher for every one-point increase in the score (HR 1.76, 95% CI 1.05-2.94; P = 0.03) (Table 2). Specifically, compared to patients with scores of 0 or 1, the risk of relapse was 4.6 times as high for patients with scores of 2 or 3 (HR 4.62, 95% CI 1.29-16.5; P = 0.02).

Except for TNM stage, which was significant in both multivariate models, the other potential confounders, including gender, anatomic location, LVI, VPI, and tumor size, were not significantly associated with outcome in the multivariate models. (Only the variables that were statistically significant in univariate analyses were included in the multivariate and multilevel analyses.)
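As a small worked example of the scoring scheme described above, the following sketch counts how many of the three preoperative factors exceed their cohort medians for each patient and dichotomizes the score at two or more elevations; the column names are illustrative.

```python
import pandas as pd

def coagulation_score(df):
    """Add a 0-3 score counting preoperative PLT, FIB, and D-dimer values above the cohort median.

    Expects illustrative columns 'plt', 'fib', 'dd' holding preoperative values.
    """
    score = sum((df[c] > df[c].median()).astype(int) for c in ["plt", "fib", "dd"])
    return df.assign(coag_score=score,
                     coag_high=(score >= 2).astype(int))   # >= 2 elevations vs. 0-1
```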
Discussion
The relationship between cancer and blood coagulation was first reported in the 19th century. Since then, much attention has been devoted to this research field. Experimental studies have found that PLT contributes to cancer progression through both thrombin-dependent and thrombin-independent mechanisms. 23 In thrombin-dependent mechanisms, thrombin-activated PLT can release a variety of growth factors, such as vascular endothelial growth factor, 24 platelet-derived growth factor, 25 and transforming growth factor-β, 26 and therefore promote tumor cell angiogenesis and proliferation. In thrombin-independent mechanisms, tumor cells can directly induce PLT activation and aggregation and then facilitate the formation of tumor cell-platelet thrombi, which is believed to have the effect of preventing intravascular tumor cell elimination by natural killer (NK) cells. 27 Some studies have shown that FIB can also protect tumor cells from NK cell-mediated cytotoxicity by aggregating around tumor cells and forming dense fibrin layers. 28,29 In fact, FIB and PLT can interact with each other to protect tumor cells from NK cytotoxicity. 30 FIB can enhance the adhesion of PLT to tumor cells, and PLT in turn can release thrombin and facilitate the aggregation of FIB. 30 FIB can also serve as a scaffold that binds both vascular endothelial growth factor and fibroblast growth factor-2 and augments their proliferative and angiogenic effects. 31 As a fibrin degradation product, D-D is a sensitive indicator of coagulation and fibrinolysis activation. 32 In addition to its routine use as a predictor of venous thromboembolism in cancer patients, a growing number of studies have reported that a higher D-D level is associated with tumor metastasis; however, the exact mechanisms are still unknown. 33,34 Some researchers have speculated that D-D is a global surrogate marker that indicates tumor aggressiveness. 35 In the current study, the results of univariate analysis indicate that an elevated preoperative PLT count is significantly associated with a poor outcome. After adjusting for confounders, this relationship was marginally significant, which may suggest an independent prognostic role of PLT in NSCLC. In a study conducted by Kim et al., preoperative thrombocytosis (PLT > 400 × 10 9 /L) was associated with a higher risk of recurrence among NSCLC patients. 11 Li et al. revealed that advanced NSCLC patients with elevated PLT counts (> 200 × 10 9 /L) had significantly poorer PFS compared to patients with PLT counts ≤ 200 × 10 9 /L. 12 In contrast, in studies conducted by Cakar et al. and Canova et al., thrombocytosis was not a significant prognostic factor of PFS. 14,15 However, in the study conducted by Canova et al., the PLT count was measured after chemotherapy rather than before treatment. In the study conducted by Cakar et al., both NSCLC and SCLC patients were included. These differences may explain the non-significant results of the two studies.
In the current study, we also found that patients with elevated preoperative FIB levels had a significantly higher risk of disease progression. However, this association was not confirmed in the multilevel analysis. In contrast to our result, Sheng et al. and Jiang et al. both reported that NSCLC patients with preoperative hyperfibrinogenemia (≥ 4.0 g/L) had an increased risk of disease progression compared to patients without hyperfibrinogenemia. 8,10 A recent study also revealed that preoperative hyperfibrinogenemia is significantly associated with disease progression among NSCLC patients. 13 The inconsistency between our study and previous studies can at least partly be attributed to the short follow-up time in our study: only 24 patients had disease progression and over 90% of the patients were censored, so the statistical power of our study was low.
Jiang et al. reported that preoperative D-D positivity (> 0.55 mg/L) is a significant and independent predictor for unfavorable disease-free survival in NSCLC patients. 8 A recent study conducted by Kaoru et al. also revealed that an elevated preoperative D-D level (>1.0 mg/L) is significantly associated with poor recurrence-free survival among stage I NSCLC patients. 9 In our study, however, neither univariate nor multilevel analyses confirmed the prognostic significance of elevated preoperative D-D. We believe this can be explained by the low statistical power of our study. Moreover, other more potent prognostic factors may have overshadowed its effect in the multilevel analyses.
To the best of our knowledge, this study is the first to investigate the combined effects of preoperative PLT, FIB, and D-D levels on NSCLC prognosis. We found that the risk of cancer progression significantly increased with every one-point increase in the number of elevated coagulation factors. Compared to patients with no more than one elevated coagulation factor, patients with at least two elevated coagulation factors had a significantly higher risk of cancer progression. Considering both the individual and the combined effects of preoperative PLT, FIB, and D-D levels on NSCLC prognosis, we speculate that there may be interactions among the three coagulation factors, or between any two of them. More studies with large sample sizes and longer follow-up are warranted to further investigate these potential interactions among NSCLC patients; the underlying mechanisms are also largely unknown and require further study. Based on these results, we propose the combined use of preoperative PLT, FIB, and D-D levels to predict the prognosis of operable NSCLC patients. As PLT, FIB, and D-D levels are routinely measured before surgery in NSCLC patients, clinicians may readily consider using the three coagulation factors in combination in clinical practice.
The strengths of our study include a larger sample size than some previous studies, a multicenter and prospective design, and the adoption of multilevel survival analysis. Nevertheless, some limitations need to be highlighted. First, as previously mentioned, the follow-up duration was short for an analysis of PFS and the sample size was insufficient, which decreased our statistical power to detect differences between groups; we will consider extending the follow-up and including more cases in future work. Secondly, because of missing data, only the preoperative PLT, FIB, and D-D levels were used in the survival analyses; the effects of postoperative levels were therefore not investigated. Thirdly, we did not collect information on the adjuvant therapies administered and could not adjust for them in the analyses, which may have biased the results. Finally, the coagulation assays were conducted in different laboratories, so systematic differences may exist; nevertheless, we believe these differences were minor, as all laboratories had passed external quality assessment and used the same assay methods.
In the current multicenter prospective study of operable NSCLC patients, we found that an elevated preoperative PLT level is significantly associated with poorer PFS. The prognostic significance of preoperative FIB and D-D levels was not confirmed, which may partly reflect the short follow-up duration of our study. More importantly, we found that the number of elevated preoperative coagulation factors may be a significant indicator of progression, and we hypothesize that there are potential interactions among these three coagulation factors that warrant further study. Researchers and clinicians may consider using preoperative PLT, FIB, and D-D levels in combination when predicting the prognosis of NSCLC patients after surgery. | 2019-01-22T22:25:34.463Z | 2019-01-04T00:00:00.000 | {
"year": 2019,
"sha1": "ffa3e0ed45f7b224084e5dae4b7f7e4e777cdf88",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.12956",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffa3e0ed45f7b224084e5dae4b7f7e4e777cdf88",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3429309 | pes2o/s2orc | v3-fos-license | DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed"DeepLab"system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
INTRODUCTION
Deep Convolutional Neural Networks (DCNNs) [1] have pushed the performance of computer vision systems to soaring heights on a broad array of high-level problems, including image classification [2], [3], [4], [5], [6] and object detection [7], [8], [9], [10], [11], [12], where DCNNs trained in an end-to-end manner have delivered strikingly better results than systems relying on hand-crafted features. Essential to this success is the built-in invariance of DCNNs to local image transformations, which allows them to learn increasingly abstract data representations [13]. This invariance is clearly desirable for classification tasks, but can hamper dense prediction tasks such as semantic segmentation, where abstraction of spatial information is undesired.
In particular we consider three challenges in the application of DCNNs to semantic image segmentation: (1) reduced feature resolution, (2) existence of objects at multiple scales, and (3) reduced localization accuracy due to DCNN invariance. Next, we discuss these challenges and our approach to overcome them in our proposed DeepLab system.
The first challenge is caused by the repeated combination of max-pooling and downsampling ('striding') performed at consecutive layers of DCNNs originally designed for image classification [2], [4], [5]. This results in feature maps with significantly reduced spatial resolution when the DCNN is employed in a fully convolutional fashion [14]. In order to overcome this hurdle and efficiently produce denser feature maps, we remove the downsampling operator from the last few max pooling layers of DCNNs and instead upsample the filters in subsequent convolutional layers, resulting in feature maps computed at a higher sampling rate. Filter upsampling amounts to inserting holes ('trous' in French) between nonzero filter taps. This technique has a long history in signal processing, originally developed for the efficient computation of the undecimated wavelet transform in a scheme also known as "algorithme à trous" [15]. We use the term atrous convolution as a shorthand for convolution with upsampled filters. Various flavors of this idea have been used before in the context of DCNNs by [3], [6], [16]. In practice, we recover full resolution feature maps by a combination of atrous convolution, which computes feature maps more densely, followed by simple bilinear interpolation of the feature responses to the original image size. This scheme offers a simple yet powerful alternative to using deconvolutional layers [13], [14] in dense prediction tasks. Compared to regular convolution with larger filters, atrous convolution allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.

The second challenge is caused by the existence of objects at multiple scales. A standard way to deal with this is to present to the DCNN rescaled versions of the same image and then aggregate the feature or score maps [6], [17], [18]. We show that this approach indeed increases the performance of our system, but comes at the cost of computing feature responses at all DCNN layers for multiple scaled versions of the input image. Instead, motivated by spatial pyramid pooling [19], [20], we propose a computationally efficient scheme of resampling a given feature layer at multiple rates prior to convolution. This amounts to probing the original image with multiple filters that have complementary effective fields of view, thus capturing objects as well as useful image context at multiple scales. Rather than actually resampling features, we efficiently implement this mapping using multiple parallel atrous convolutional layers with different sampling rates; we call the proposed technique "atrous spatial pyramid pooling" (ASPP).
The third challenge relates to the fact that an objectcentric classifier requires invariance to spatial transformations, inherently limiting the spatial accuracy of a DCNN. One way to mitigate this problem is to use skip-layers to extract "hyper-column" features from multiple network layers when computing the final segmentation result [14], [21]. Our work explores an alternative approach which we show to be highly effective. In particular, we boost our model's ability to capture fine details by employing a fullyconnected Conditional Random Field (CRF) [22]. CRFs have been broadly used in semantic segmentation to combine class scores computed by multi-way classifiers with the lowlevel information captured by the local interactions of pixels and edges [23], [24] or superpixels [25]. Even though works of increased sophistication have been proposed to model the hierarchical dependency [26], [27], [28] and/or highorder dependencies of segments [29], [30], [31], [32], [33], we use the fully connected pairwise CRF proposed by [22] for its efficient computation, and ability to capture fine edge details while also catering for long range dependencies. That model was shown in [22] to improve the performance of a boosting-based pixel-level classifier. In this work, we demonstrate that it leads to state-of-the-art results when coupled with a DCNN-based pixel-level classifier.
A high-level illustration of the proposed DeepLab model is shown in Fig. 1. A deep convolutional neural network (VGG-16 [4] or ResNet-101 [11] in this work) trained in the task of image classification is re-purposed to the task of semantic segmentation by (1) transforming all the fully connected layers to convolutional layers (i.e., fully convolutional network [14]) and (2) increasing feature resolution through atrous convolutional layers, allowing us to compute feature responses every 8 pixels instead of every 32 pixels in the original network. We then employ bi-linear interpolation to upsample by a factor of 8 the score map to reach the original image resolution, yielding the input to a fullyconnected CRF [22] that refines the segmentation results.
From a practical standpoint, the three main advantages of our DeepLab system are: (1) Speed: by virtue of atrous convolution, our dense DCNN operates at 8 FPS on an NVidia Titan X GPU, while Mean Field Inference for the fully-connected CRF requires 0.5 secs on a CPU. (2) Accuracy: we obtain state-of-art results on several challenging datasets, including the PASCAL VOC 2012 semantic segmentation benchmark [34], PASCAL-Context [35], PASCAL-Person-Part [36], and Cityscapes [37]. (3) Simplicity: our system is composed of a cascade of two very well-established modules, DCNNs and CRFs. The updated DeepLab system we present in this paper features several improvements compared to its first version reported in our original conference publication [38]. Our new version can better segment objects at multiple scales, via either multi-scale input processing [17], [39], [40] or the proposed ASPP. We have built a residual net variant of DeepLab by adapting the state-of-art ResNet [11] image classification DCNN, achieving better semantic segmentation performance compared to our original model based on VGG-16 [4]. Finally, we present a more comprehensive experimental evaluation of multiple model variants and report state-of-art results not only on the PASCAL VOC 2012 benchmark but also on other challenging tasks. We have implemented the proposed methods by extending the Caffe framework [41]. We share our code and models at a companion web site http://liangchiehchen.com/projects/ DeepLab.html.
RELATED WORK
Most of the successful semantic segmentation systems developed in the previous decade relied on hand-crafted features combined with flat classifiers, such as Boosting [24], [42], Random Forests [43], or Support Vector Machines [44]. Substantial improvements have been achieved by incorporating richer information from context [45] and structured prediction techniques [22], [26], [27], [46], but the performance of these systems has always been compromised by the limited expressive power of the features. Over the past few years the breakthroughs of Deep Learning in image classification were quickly transferred to the semantic segmentation task. Since this task involves both segmentation and classification, a central question is how to combine the two tasks.
The first family of DCNN-based systems for semantic segmentation typically employs a cascade of bottomup image segmentation, followed by DCNN-based region classification. For instance the bounding box proposals and masked regions delivered by [47], [48] are used in [7] and [49] as inputs to a DCNN to incorporate shape information into the classification process. Similarly, the authors of [50] rely on a superpixel representation. Even though these approaches can benefit from the sharp boundaries delivered by a good segmentation, they also cannot recover from any of its errors.
The second family of works relies on using convolutionally computed DCNN features for dense image labeling, and couples them with segmentations that are obtained independently. Among the first have been [39] who apply DCNNs at multiple image resolutions and then employ a segmentation tree to smooth the prediction results. More recently, [21] propose to use skip layers and concatenate the computed intermediate feature maps within the DCNNs for pixel classification. Further, [51] propose to pool the intermediate feature maps by region proposals. These works still employ segmentation algorithms that are decoupled from the DCNN classifier's results, thus risking commitment to premature decisions.
The third family of works uses DCNNs to directly provide dense category-level pixel labels, which makes it possible to even discard segmentation altogether. The segmentation-free approaches of [14], [52] directly apply DCNNs to the whole image in a fully convolutional fashion, transforming the last fully connected layers of the DCNN into convolutional layers. In order to deal with the spatial localization issues outlined in the introduction, [14] upsample and concatenate the scores from intermediate feature maps, while [52] refine the prediction result from coarse to fine by propagating the coarse results to another DCNN. Our work builds on these works, and as described in the introduction extends them by exerting control on the feature resolution, introducing multi-scale pooling techniques and integrating the densely connected CRF of [22] on top of the DCNN. We show that this leads to significantly better segmentation results, especially along object boundaries. The combination of DCNN and CRF is of course not new but previous works only tried locally connected CRF models. Specifically, [53] use CRFs as a proposal mechanism for a DCNN-based reranking system, while [39] treat superpixels as nodes for a local pairwise CRF and use graph-cuts for discrete inference. As such their models were limited by errors in superpixel computations or ignored long-range dependencies. Our approach instead treats every pixel as a CRF node receiving unary potentials by the DCNN. Crucially, the Gaussian CRF potentials in the fully connected CRF model of [22] that we adopt can capture long-range dependencies and at the same time the model is amenable to fast mean field inference. We note that mean field inference had been extensively studied for traditional image segmentation tasks [54], [55], [56], but these older models were typically limited to shortrange connections. In independent work, [57] use a very similar densely connected CRF model to refine the results of DCNN for the problem of material classification. However, the DCNN module of [57] was only trained by sparse point supervision instead of dense supervision at every pixel.
Since the first version of this work was made publicly available [38], the area of semantic segmentation has progressed drastically. Multiple groups have made important advances, significantly raising the bar on the PASCAL VOC 2012 semantic segmentation benchmark, as reflected in the high level of activity in the benchmark's leaderboard [17], [40], [58], [59], [60], [61], [62], [63]. Interestingly, most top-performing methods have adopted one or both of the key ingredients of our DeepLab system: atrous convolution for efficient dense feature extraction and refinement of the raw DCNN scores by means of a fully connected CRF. We outline below some of the most important and interesting advances.
End-to-end training for structured prediction has more recently been explored in several related works. While we employ the CRF as a post-processing method, [40], [59], [62], [64], [65] have successfully pursued joint learning of the DCNN and CRF. In particular, [59], [65] unroll the CRF mean-field inference steps to convert the whole system into an end-to-end trainable feed-forward network, while [62] approximates one iteration of the dense CRF mean field inference [22] by convolutional layers with learnable filters. Another fruitful direction pursued by [40], [66] is to learn the pairwise terms of a CRF via a DCNN, significantly improving performance at the cost of heavier computation. In a different direction, [63] replace the bilateral filtering module used in mean field inference with a faster domain transform module [67], improving the speed and lowering the memory requirements of the overall system, while [18], [68] combine semantic segmentation with edge detection.
Weaker supervision has been pursued in a number of papers, relaxing the assumption that pixel-level semantic annotations are available for the whole training set [58], [69], [70], [71], achieving significantly better results than weakly-supervised pre-DCNN systems such as [72]. In another line of research, [49], [73] pursue instance segmentation, jointly tackling object detection and semantic segmentation.
What we call here atrous convolution was originally developed for the efficient computation of the undecimated wavelet transform in the "algorithmeà trous" scheme of [15]. We refer the interested reader to [74] for early references from the wavelet literature. Atrous convolution is also intimately related to the "noble identities" in multi-rate signal processing, which builds on the same interplay of input signal and filter sampling rates [75]. Atrous convolution is a term we first used in [6]. The same operation was later called dilated convolution by [76], a term they coined motivated by the fact that the operation corresponds to regular convolution with upsampled (or dilated in the terminology of [15]) filters. Various authors have used the same operation before for denser feature extraction in DCNNs [3], [6], [16]. Beyond mere resolution enhancement, atrous convolution allows us to enlarge the field of view of filters to incorporate larger context, which we have shown in [38] to be beneficial. This approach has been pursued further by [76], who employ a series of atrous convolutional layers with increasing rates to aggregate multiscale context. The atrous spatial pyramid pooling scheme proposed here to capture multiscale objects and context also employs multiple atrous convolutional layers with different sampling rates, which we however lay out in parallel instead of in serial. Interestingly, the atrous convolution technique has also been adopted for a broader set of tasks, such as object detection [12], [77], instancelevel segmentation [78], visual question answering [79], and optical flow [80].
We also show that, as expected, integrating into DeepLab more advanced image classification DCNNs such as the residual net of [11] leads to better results. This has also been observed independently by [81].
Atrous Convolution for Dense Feature Extraction and Field-of-View Enlargement
The use of DCNNs for semantic segmentation, or other dense prediction tasks, has been shown to be simply and successfully addressed by deploying DCNNs in a fully convolutional fashion [3], [14]. However, the repeated combination of max-pooling and striding at consecutive layers of these networks reduces significantly the spatial resolution of the resulting feature maps, typically by a factor of 32 across each direction in recent DCNNs. A partial remedy is to use 'deconvolutional' layers as in [14], which however requires additional memory and time.
We advocate instead the use of atrous convolution, originally developed for the efficient computation of the undecimated wavelet transform in the "algorithme à trous" scheme of [15] and used before in the DCNN context by [3], [6], [16]. This algorithm allows us to compute the responses of any layer at any desirable resolution. It can be applied post-hoc, once a network has been trained, but can also be seamlessly integrated with training.
Considering one-dimensional signals first, the output y[i] of atrous convolution of a 1-D input signal x[i] with a filter w[k] of length K is defined as

y[i] = Σ_{k=1}^{K} x[i + r·k] · w[k].

The rate parameter r corresponds to the stride with which we sample the input signal. Standard convolution is a special case for rate r = 1. See Fig. 2 for illustration. (Following standard practice in the DCNN literature, we use non-mirrored filters in this definition.) We illustrate the algorithm's operation in 2-D through a simple example in Fig. 3: Given an image, we assume that we first have a downsampling operation that reduces the resolution by a factor of 2, and then perform a convolution with a kernel (here, the vertical Gaussian derivative). If one implants the resulting feature map in the original image coordinates, we realize that we have obtained responses at only 1/4 of the image positions. Instead, we can compute responses at all image positions if we convolve the full resolution image with a filter 'with holes', in which we upsample the original filter by a factor of 2 and introduce zeros between filter values. Although the effective filter size increases, we only need to take into account the non-zero filter values; hence both the number of filter parameters and the number of operations per position stay constant. The resulting scheme allows us to easily and explicitly control the spatial resolution of neural network feature responses.
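To make the 1-D definition concrete, here is a minimal NumPy sketch of the operation (using 0-based filter indices k = 0, ..., K-1 rather than 1, ..., K); it is illustrative only and not the authors' implementation.

```python
import numpy as np

def atrous_conv1d(x, w, rate=1):
    """1-D atrous (dilated) convolution, following the definition above:
    y[i] = sum_k x[i + rate*k] * w[k], computed only where the dilated
    filter fits entirely inside the signal ('valid' output)."""
    K = len(w)
    span = rate * (K - 1)            # effective filter extent minus one
    out_len = len(x) - span
    y = np.empty(out_len)
    for i in range(out_len):
        y[i] = sum(x[i + rate * k] * w[k] for k in range(K))
    return y

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])
print(atrous_conv1d(x, w, rate=1))   # standard convolution (rate 1)
print(atrous_conv1d(x, w, rate=2))   # same filter, input sampled with stride 2
```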
In the context of DCNNs one can use atrous convolution in a chain of layers, effectively allowing us to compute the final DCNN network responses at an arbitrarily high resolution. For example, in order to double the spatial density of computed feature responses in the VGG-16 or ResNet-101 networks, we find the last pooling or convolutional layer that decreases resolution ('pool5' or 'conv5_1' respectively), set its stride to 1 to avoid signal decimation, and replace all subsequent convolutional layers with atrous convolutional layers having rate r = 2. Pushing this approach all the way through the network could allow us to compute feature responses at the original image resolution, but this ends up being too costly. We have adopted instead a hybrid approach that strikes a good efficiency/accuracy trade-off, using atrous convolution to increase by a factor of 4 the density of computed feature maps, followed by fast bilinear interpolation by an additional factor of 8 to recover feature maps at the original image resolution. Bilinear interpolation is sufficient in this setting because the class score maps (corresponding to log-probabilities) are quite smooth, as illustrated in Fig. 5. Unlike the deconvolutional approach adopted by [14], the proposed approach converts image classification networks into dense feature extractors without requiring learning any extra parameters, leading to faster DCNN training in practice.
Atrous convolution also allows us to arbitrarily enlarge the field-of-view of filters at any DCNN layer. State-of-the-art DCNNs typically employ spatially small convolution kernels (typically 3×3) in order to keep both computation and number of parameters contained. Atrous convolution with rate r introduces r − 1 zeros between consecutive filter values, effectively enlarging the kernel size of a k×k filter to k_e = k + (k − 1)(r − 1) without increasing the number of parameters or the amount of computation. It thus offers an efficient mechanism to control the field-of-view and finds the best trade-off between accurate localization (small field-of-view) and context assimilation (large field-of-view). We have successfully experimented with this technique: our DeepLab-LargeFOV model variant [38] employs atrous convolution with rate r = 12 in the VGG-16 'fc6' layer with significant performance gains, as detailed in Section 4.
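The effective kernel size formula is easy to check numerically; the small helper below is only an illustration of the arithmetic, not part of the DeepLab code.

```python
def effective_kernel_size(k, rate):
    """Spatial extent of a k x k filter after inserting rate-1 zeros between taps."""
    return k + (k - 1) * (rate - 1)

# A 3x3 kernel with rate 12, as in the DeepLab-LargeFOV 'fc6' layer,
# covers a 25x25 region while keeping only 9 non-zero taps.
print(effective_kernel_size(3, 12))   # -> 25
print(effective_kernel_size(7, 4))    # -> 25 as well, but with 49 parameters
```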
Turning to implementation aspects, there are two efficient ways to perform atrous convolution. The first is to implicitly upsample the filters by inserting holes (zeros), or equivalently sparsely sample the input feature maps [15]. We implemented this in our earlier work [6], [38], followed by [76], within the Caffe framework [41] by adding to the im2col function (it extracts vectorized patches from multichannel feature maps) the option to sparsely sample the underlying feature maps. The second method, originally proposed by [82] and used in [3], [16] is to subsample the input feature map by a factor equal to the atrous convolution rate r, deinterlacing it to produce r 2 reduced resolution maps, one for each of the r×r possible shifts. This is followed by applying standard convolution to these intermediate feature maps and reinterlacing them to the original image resolution. By reducing atrous convolution into regular convolution, it allows us to use off-the-shelf highly optimized convolution routines. We have implemented the second approach into the TensorFlow framework [83].
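The filter-upsampling view and the built-in dilation supported by modern frameworks produce identical results; the short PyTorch check below illustrates this equivalence and is not the Caffe or TensorFlow code used by the authors.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
rate = 2
x = torch.randn(1, 1, 16, 16)    # N, C, H, W
w = torch.randn(1, 1, 3, 3)      # a dense 3x3 filter

# Route 1: explicitly insert rate-1 zeros between filter taps ("holes").
k_eff = 3 + (3 - 1) * (rate - 1)
w_holes = torch.zeros(1, 1, k_eff, k_eff)
w_holes[:, :, ::rate, ::rate] = w

# Route 2: let the convolution routine sample the input sparsely (dilation).
y_holes = F.conv2d(x, w_holes)
y_dilated = F.conv2d(x, w, dilation=rate)

print(torch.allclose(y_holes, y_dilated))   # True: the two views are equivalent
```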
Multiscale Image Representations using Atrous Spatial Pyramid Pooling
DCNNs have shown a remarkable ability to implicitly represent scale, simply by being trained on datasets that contain objects of varying size. Still, explicitly accounting for object scale can improve the DCNN's ability to successfully handle both large and small objects [6].
We have experimented with two approaches to handling scale variability in semantic segmentation. The first approach amounts to standard multiscale processing [17], [18]. We extract DCNN score maps from multiple (three in our experiments) rescaled versions of the original image using parallel DCNN branches that share the same parameters. To produce the final result, we bilinearly interpolate the feature maps from the parallel DCNN branches to the original image resolution and fuse them, by taking at each position the maximum response across the different scales. We do this both during training and testing. Multiscale processing significantly improves performance, but at the cost of computing feature responses at all DCNN layers for multiple scales of input.
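A schematic of this first, multiscale-input approach (shared-weight branches on rescaled inputs, bilinear upsampling back to the original resolution, and per-position max fusion across scales) might look as follows in PyTorch; `backbone` stands in for any fully convolutional score-map network and is an assumption, not the authors' model.

```python
import torch
import torch.nn.functional as F

def multiscale_scores(backbone, image, scales=(0.5, 0.75, 1.0)):
    """Run a shared-weight score-map network on rescaled copies of the image,
    upsample each result to the original resolution, and max-fuse per position."""
    _, _, H, W = image.shape
    fused = None
    for s in scales:
        resized = F.interpolate(image, scale_factor=s, mode='bilinear',
                                align_corners=False)
        scores = backbone(resized)                       # (N, classes, h, w)
        scores = F.interpolate(scores, size=(H, W), mode='bilinear',
                               align_corners=False)
        fused = scores if fused is None else torch.max(fused, scores)
    return fused

# Example with a toy single-layer "backbone":
toy = torch.nn.Conv2d(3, 21, kernel_size=3, padding=1)
out = multiscale_scores(toy, torch.randn(1, 3, 65, 65))
print(out.shape)   # torch.Size([1, 21, 65, 65])
```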
The second approach is inspired by the success of the R-CNN spatial pyramid pooling method of [20], which showed that regions of an arbitrary scale can be accurately and efficiently classified by resampling convolutional features extracted at a single scale. We have implemented a variant of their scheme which uses multiple parallel atrous convolutional layers with different sampling rates. The features extracted for each sampling rate are further processed in separate branches and fused to generate the final result. The proposed "atrous spatial pyramid pooling" (DeepLab-ASPP) approach generalizes our DeepLab-LargeFOV variant and is illustrated in Fig. 4.
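A minimal sketch of an ASPP head in PyTorch is given below. The branch structure (a 3×3 atrous convolution followed by two 1×1 convolutions per branch, fused by summation) follows the description above, but the input width, channel sizes, and the rates (6, 12, 18, 24) are plausible assumptions for illustration and may differ from the released models.

```python
import torch
import torch.nn as nn

class ASPPBranch(nn.Sequential):
    """One fc6-fc7-fc8 style branch: 3x3 atrous conv followed by two 1x1 convs."""
    def __init__(self, in_ch, mid_ch, num_classes, rate):
        super().__init__(
            nn.Conv2d(in_ch, mid_ch, 3, padding=rate, dilation=rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, num_classes, 1),
        )

class ASPP(nn.Module):
    """Parallel atrous branches with different sampling rates, fused by summation."""
    def __init__(self, in_ch=512, mid_ch=1024, num_classes=21,
                 rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList(
            [ASPPBranch(in_ch, mid_ch, num_classes, r) for r in rates])

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)

aspp = ASPP()
scores = aspp(torch.randn(1, 512, 33, 33))
print(scores.shape)   # torch.Size([1, 21, 33, 33])
```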
Structured Prediction with Fully-Connected Conditional Random Fields for Accurate Boundary Recovery
A trade-off between localization accuracy and classification performance seems to be inherent in DCNNs: deeper models with multiple max-pooling layers have proven most successful in classification tasks; however, the increased invariance and the large receptive fields of top-level nodes can only yield smooth responses. As illustrated in Fig. 5, score maps can predict the presence and rough position of objects but cannot really delineate their borders. (Fig. 5 shows the score maps, i.e. the input to the softmax function, and the belief maps, i.e. the softmax output, for an aeroplane example after each mean field iteration; the output of the last DCNN layer is used as input to the mean field inference.) Previous work has pursued two directions to address this localization challenge. The first approach is to harness information from multiple layers in the convolutional network in order to better estimate the object boundaries [14], [21], [52]. The second is to employ a super-pixel representation, essentially delegating the localization task to a low-level segmentation method [50].
We pursue an alternative direction based on coupling the recognition capacity of DCNNs and the fine-grained localization accuracy of fully connected CRFs and show that it is remarkably successful in addressing the localization challenge, producing accurate semantic segmentation results and recovering object boundaries at a level of detail that is well beyond the reach of existing methods. This direction has been extended by several follow-up papers [17], [40], [58], [59], [60], [61], [62], [63], [65], since the first version of our work was published [38].
Traditionally, conditional random fields (CRFs) have been employed to smooth noisy segmentation maps [23], [31]. Typically these models couple neighboring nodes, favoring same-label assignments to spatially proximal pixels. Qualitatively, the primary function of these short-range CRFs is to clean up the spurious predictions of weak classifiers built on top of local hand-engineered features.
Compared to these weaker classifiers, modern DCNN architectures such as the one we use in this work produce score maps and semantic label predictions which are qualitatively different. As illustrated in Fig. 5, the score maps are typically quite smooth and produce homogeneous classification results. In this regime, using short-range CRFs can be detrimental, as our goal should be to recover detailed local structure rather than further smooth it. Using contrast-sensitive potentials [23] in conjunction with local-range CRFs can potentially improve localization but still miss thin structures and typically requires solving an expensive discrete optimization problem.
To overcome these limitations of short-range CRFs, we integrate into our system the fully connected CRF model of [22]. The model employs the energy function

E(x) = Σ_i θ_i(x_i) + Σ_{i,j} θ_ij(x_i, x_j),

where x is the label assignment for the pixels. We use as unary potential θ_i(x_i) = −log P(x_i), where P(x_i) is the label assignment probability at pixel i as computed by the DCNN.
The pairwise potential has a form that allows for efficient inference while using a fully-connected graph, i.e. when connecting all pairs of image pixels i, j. In particular, as in [22], we use the expression

θ_ij(x_i, x_j) = μ(x_i, x_j) [ w_1 exp( −||p_i − p_j||² / (2σ_α²) − ||I_i − I_j||² / (2σ_β²) ) + w_2 exp( −||p_i − p_j||² / (2σ_γ²) ) ],

where μ(x_i, x_j) = 1 if x_i ≠ x_j, and zero otherwise, which, as in the Potts model, means that only nodes with distinct labels are penalized. The remaining expression uses two Gaussian kernels in different feature spaces; the first, 'bilateral' kernel depends on both pixel positions (denoted as p) and RGB color (denoted as I), and the second kernel only depends on pixel positions. The hyperparameters σ_α, σ_β and σ_γ control the scale of the Gaussian kernels. The first kernel forces pixels with similar color and position to have similar labels, while the second kernel only considers spatial proximity when enforcing smoothness. Crucially, this model is amenable to efficient approximate probabilistic inference [22]. The message passing updates under a fully decomposable mean field approximation b(x) = Π_i b_i(x_i) can be expressed as Gaussian convolutions in bilateral space. High-dimensional filtering algorithms [84] significantly speed up this computation, resulting in an algorithm that is very fast in practice, requiring less than 0.5 sec on average for Pascal VOC images using the publicly available implementation of [22].
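To make the pairwise term concrete, the following NumPy sketch evaluates the potential between a single pair of pixels; it only illustrates the formula (the actual model evaluates it densely over all pixel pairs via high-dimensional filtering), and the parameter values are arbitrary placeholders rather than the cross-validated settings.

```python
import numpy as np

def pairwise_potential(p_i, p_j, I_i, I_j, x_i, x_j,
                       w1=4.0, w2=3.0, sigma_alpha=50.0,
                       sigma_beta=5.0, sigma_gamma=3.0):
    """Potts-gated sum of a bilateral (position + colour) kernel
    and a purely spatial smoothness kernel, as in the expression above."""
    if x_i == x_j:                   # mu(x_i, x_j) = 0 for equal labels
        return 0.0
    d_pos = np.sum((p_i - p_j) ** 2)
    d_col = np.sum((I_i - I_j) ** 2)
    bilateral = w1 * np.exp(-d_pos / (2 * sigma_alpha ** 2)
                            - d_col / (2 * sigma_beta ** 2))
    spatial = w2 * np.exp(-d_pos / (2 * sigma_gamma ** 2))
    return bilateral + spatial

p_i, p_j = np.array([10.0, 10.0]), np.array([12.0, 10.0])
I_i, I_j = np.array([200.0, 30.0, 30.0]), np.array([198.0, 32.0, 29.0])
# Nearby pixels with similar colour but different labels incur a large penalty.
print(pairwise_potential(p_i, p_j, I_i, I_j, x_i=0, x_j=1))
```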
EXPERIMENTAL RESULTS
We finetune the model weights of the Imagenet-pretrained VGG-16 or ResNet-101 networks to adapt them to the semantic segmentation task in a straightforward fashion, following the procedure of [14]. We replace the 1000-way Imagenet classifier in the last layer with a classifier having as many targets as the number of semantic classes of our task (including the background, if applicable). Our loss function is the sum of cross-entropy terms for each spatial position in the CNN output map (subsampled by 8 compared to the original image). All positions and labels are equally weighted in the overall loss function (except for unlabeled pixels which are ignored). Our targets are the ground truth labels (subsampled by 8). We optimize the objective function with respect to the weights at all network layers by the standard SGD procedure of [2]. We decouple the DCNN and CRF training stages, assuming the DCNN unary terms are fixed when setting the CRF parameters.
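The loss described above (a sum of per-position cross-entropy terms on the 8×-subsampled score map, with unlabeled pixels ignored) corresponds to the following PyTorch-style sketch; it is a paraphrase of the recipe, not the original Caffe training code, and the ignore-label value is an assumed convention.

```python
import torch
import torch.nn.functional as F

IGNORE = 255   # label value standing in for unlabeled pixels (an assumption)

def segmentation_loss(score_map, target_full):
    """score_map: (N, C, H/8, W/8) raw scores; target_full: (N, H, W) integer labels."""
    n, c, h, w = score_map.shape
    # Subsample the ground truth by 8 to match the score-map resolution.
    target = target_full[:, ::8, ::8][:, :h, :w]
    return F.cross_entropy(score_map, target, ignore_index=IGNORE)

scores = torch.randn(2, 21, 8, 8, requires_grad=True)
labels = torch.randint(0, 21, (2, 64, 64))
labels[:, :4, :] = IGNORE            # pretend a band of pixels is unlabeled
loss = segmentation_loss(scores, labels)
loss.backward()
print(float(loss))
```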
We evaluate the proposed models on four challenging datasets: PASCAL VOC 2012, PASCAL-Context, PASCAL-Person-Part, and Cityscapes. We first report the main results of our conference version [38] on PASCAL VOC 2012, and move forward to latest results on all datasets.
PASCAL VOC 2012
Dataset The PASCAL VOC 2012 segmentation benchmark [34] involves 20 foreground object classes and one background class. The original dataset contains 1, 464 (train), 1, 449 (val), and 1, 456 (test) pixel-level labeled images for training, validation, and testing, respectively. The dataset is augmented by the extra annotations provided by [85], resulting in 10, 582 (trainaug) training images. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.
Results from our conference version
We employ the VGG-16 network pre-trained on Imagenet, adapted for semantic segmentation as described in Section 3.1. We use a mini-batch of 20 images and initial learning rate of 0.001 (0.01 for the final classifier layer), multiplying the learning rate by 0.1 every 2000 iterations. We use momentum of 0.9 and weight decay of 0.0005.
After the DCNN has been fine-tuned on trainaug, we cross-validate the CRF parameters along the lines of [22]. We use default values of w 2 = 3 and σ γ = 3 and we search for the best values of w 1 , σ α , and σ β by cross-validation on 100 images from val. We employ a coarse-to-fine search scheme. The initial search range of the parameters are w 1 ∈ [3 : 6], σ α ∈ [30 : 10 : 100] and σ β ∈ [3 : 6] (MATLAB notation), and then we refine the search step sizes around the first round's best values. We employ 10 mean field iterations.
Field of View and CRF: In Tab. 1, we report experiments with DeepLab model variants that use different field-ofview sizes, obtained by adjusting the kernel size and atrous sampling rate r in the 'fc6' layer, as described in Sec. 3.1. We start with a direct adaptation of VGG-16 net, using the original 7 × 7 kernel size and r = 4 (since we use no stride for the last two max-pooling layers). This model yields performance of 67.64% after CRF, but is relatively slow (1.44 images per second during training). We have improved model speed to 2.9 images per second by reducing the kernel size to 4 × 4. We have experimented with two such network variants with smaller (r = 4) and larger (r = 8) FOV sizes; the latter one performs better. Finally, we employ kernel size 3×3 and even larger atrous sampling rate (r = 12), also making the network thinner by retaining a random subset of 1,024 out of the 4,096 filters in layers 'fc6' and 'fc7'. The resulting model, DeepLab-CRF-LargeFOV, matches the performance of the direct VGG-16 adaptation (7 × 7 kernel size, r = 4). At the same time, DeepLab-LargeFOV is 3.36 times faster and has significantly fewer parameters (20.5M instead of 134.3M).
The CRF substantially boosts performance of all model variants, offering a 3-5% absolute increase in mean IOU.
Test set evaluation: We have evaluated our DeepLab-CRF-LargeFOV model on the PASCAL VOC 2012 official test set. It achieves 70.3% mean IOU performance. (Tab. 2 reports val set performance as the learning hyperparameters vary; employing the "poly" learning policy is more effective than "step" when training DeepLab-LargeFOV.)
Improvements after conference version of this work
After the conference version of this work [38], we have pursued three main improvements of our model, which we discuss below: (1) a different learning policy during training, (2) atrous spatial pyramid pooling, and (3) employment of deeper networks and multi-scale processing. Learning rate policy: We have explored different learning rate policies when training DeepLab-LargeFOV. Similar to [86], we also found that employing a "poly" learning rate policy (the learning rate is multiplied by (1 − iter/max_iter)^power) is more effective than a "step" learning rate (reduce the learning rate at a fixed step size). As shown in Tab. 2, employing "poly" (with power = 0.9) and using the same batch size and same training iterations yields 1.17% better performance than employing the "step" policy. Fixing the batch size and increasing the training iterations to 10K improves the performance to 64.90% (1.48% gain); however, the total training time increases due to more training iterations. We then reduced the batch size to 10 and found that comparable performance is still maintained (64.90% vs. 64.71%). In the end, we employ batch size = 10 and 20K iterations in order to maintain a training time similar to the previous "step" policy. Surprisingly, this gives us a performance of 65.88% (3.63% improvement over "step") on val, and 67.7% on test, compared to 65.1% of the original "step" setting for DeepLab-LargeFOV before CRF. We employ the "poly" learning rate policy for all experiments reported in the rest of the paper.
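The "poly" schedule is simple to reproduce; the helper below shows the multiplier applied to the base learning rate at each iteration, using the power value of 0.9 mentioned in the text (the base rate and iteration count here are placeholders).

```python
def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """'poly' learning-rate policy: base_lr * (1 - iter/max_iter) ** power."""
    return base_lr * (1.0 - iteration / max_iter) ** power

base_lr, max_iter = 1e-3, 20000
for it in (0, 5000, 10000, 19999):
    print(it, round(poly_lr(base_lr, it, max_iter), 6))
# The rate decays smoothly towards zero, instead of the abrupt drops of a "step" policy.
```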
Atrous Spatial Pyramid Pooling: We have experimented with the proposed Atrous Spatial Pyramid Pooling (ASPP) scheme, described in Sec. 3.1. As shown in Fig. 7, ASPP for VGG-16 employs several parallel fc6-fc7-fc8 branches. They all use 3×3 kernels but different atrous rates r in the 'fc6' layer in order to capture objects of different size. In Tab. 3, we report results with several settings: (1) our baseline LargeFOV model, which has a single branch with r = 12; (2) ASPP-S, with four parallel branches employing smaller atrous rates; and (3) ASPP-L, with four branches employing larger rates. For each variant we report results before and after CRF. As shown in the table, ASPP-S yields a 1.22% improvement over the baseline LargeFOV before CRF. However, after CRF both LargeFOV and ASPP-S perform similarly. On the other hand, ASPP-L yields consistent improvements over the baseline LargeFOV both before and after CRF. We evaluate on test the proposed ASPP-L + CRF model, attaining 72.6%. We visualize the effect of the different schemes in Fig. 8. Deeper Networks and Multiscale Processing: We have experimented with building DeepLab around the recently proposed residual net ResNet-101 [11] instead of VGG-16. Similar to what we did for VGG-16, we re-purpose ResNet-101 by atrous convolution, as described in Sec. 3.1. On top of that, we adopt several other features, following recent work of [17], [18], [39], [40], [58], [59], [62]: (1) Multi-scale inputs: We separately feed to the DCNN images at scale = {0.5, 0.75, 1}, fusing their score maps by taking the maximum response across scales for each position separately [17]. (2) Models pretrained on MS-COCO [87]. (3) Data augmentation by randomly scaling the input images (from 0.5 to 1.5) during training. In Tab. 4, we evaluate how each of these factors, along with LargeFOV and atrous spatial pyramid pooling (ASPP), affects val set performance. Adopting ResNet-101 instead of VGG-16 significantly improves DeepLab performance (e.g., our simplest ResNet-101 based model attains 68.72%, compared to 65.76% of our DeepLab-LargeFOV VGG-16 based variant, both before CRF). Multiscale fusion [17] brings an extra 2.55% improvement, while pretraining the model on MS-COCO gives another 2.01% gain. Data augmentation during training is effective (about 1.6% improvement). Employing LargeFOV (adding an atrous convolutional layer on top of ResNet, with 3×3 kernel and rate = 12) is beneficial (about 0.6% improvement). A further 0.8% improvement is achieved by atrous spatial pyramid pooling (ASPP). Post-processing our best model with the dense CRF yields a performance of 77.69%.
Qualitative results: We provide qualitative visual comparisons of DeepLab's results (our best model variant) before and after CRF in Fig. 6. The visualization results obtained by DeepLab before CRF already yields excellent segmentation results, while employing the CRF further improves the performance by removing false positives and refining object boundaries.
Test set results: We have submitted the result of our final best model to the official server, obtaining test set performance of 79.7%, as shown in Tab. 5. The model substantially outperforms previous DeepLab variants (e.g., DeepLab-LargeFOV with VGG-16 net) and is currently the top performing method on the PASCAL VOC 2012 segmentation leaderboard.
PASCAL-Context
Dataset: The PASCAL-Context dataset [35] provides detailed semantic labels for the whole scene, including both object (e.g., person) and stuff (e.g., sky) categories. Following [35], the proposed models are evaluated on the most frequent 59 classes along with one background category. Evaluation: Tab. 6 reports our results on this dataset, obtained by re-purposing ResNet-101 [11] for semantic segmentation. Qualitative results: We visualize the segmentation results of our best model with and without CRF as post-processing in Fig. 11. DeepLab before CRF can already predict most of the objects/stuff with high accuracy. Employing the CRF, our model is able to further remove isolated false positives and improve the prediction along object/stuff boundaries.
PASCAL-Person-Part
Dataset: We further perform experiments on semantic part segmentation [98], [99], using the extra PASCAL VOC 2010 annotations by [36]. We focus on the person part for the dataset, which contains more training data and large variation in object scale and human pose. Specifically, the dataset contains detailed part annotations for every person, e.g. eyes, nose. We merge the annotations to be Head, Torso, Upper/Lower Arms and Upper/Lower Legs, resulting in six person part classes and one background class. We only use those images containing persons for training (1716 images) and validation (1817 images).
Evaluation: The human part segmentation results on PASCAL-Person-Part is reported in Tab. 7. [17] has already conducted experiments on this dataset with re-purposed VGG-16 net for DeepLab, attaining 56.39% (with multi-scale inputs). Therefore, in this part, we mainly focus on the effect of repurposing ResNet-101 for DeepLab. With ResNet-101, DeepLab alone yields 58.9%, significantly outperforming DeepLab-LargeFOV (VGG-16 net) and DeepLab-Attention (VGG-16 net) by about 7% and 2.5%, respectively. Incorporating multi-scale inputs and fusion by max-pooling further improves performance to 63.1%. Additionally pretraining the model on MS-COCO yields another 1.3% improvement. However, we do not observe any improvement when adopting either LargeFOV or ASPP on this dataset. Employing the dense CRF to post process our final output substantially outperforms the concurrent work [97] by 4.78%.
Qualitative results: We visualize the results in Fig. 12.
Cityscapes
Dataset: Cityscapes [37] is a recently released large-scale dataset, which contains high quality pixel-level annotations of 5000 images collected in street scenes from 50 different cities. Following the evaluation protocol [37], 19 semantic labels (belonging to 7 super categories: ground, construction, object, nature, sky, human, and vehicle) are used for evaluation (the void label is not considered for evaluation). The training, validation, and test sets contain 2975, 500, and 1525 images respectively. Test set results of pre-release: We participated in benchmarking the pre-release of the Cityscapes dataset; as shown in the top of Tab. 8, our model attained third place. Val set results: After the initial release, we further explored the validation set in Tab. 9. The images of Cityscapes have resolution 2048×1024, making it a challenging problem to train deeper networks with limited GPU memory. During benchmarking of the pre-release of the dataset, we downsampled the images by 2. However, we have found that it is beneficial to process the images at their original resolution. With the same training protocol, using images at the original resolution brings significant improvements of 1.9% and 1.8% before and after CRF, respectively. In order to perform inference on this dataset with high resolution images, we split each image into overlapped regions, similar to [37]. We have also replaced the VGG-16 net with ResNet-101. We do not exploit multi-scale inputs due to the limited GPU memory at hand. Instead, we only explore (1) deeper networks (i.e., ResNet-101), (2) data augmentation, (3) LargeFOV or ASPP, and (4) CRF as post-processing on this dataset. We first find that employing ResNet-101 alone is better than using VGG-16. Employing LargeFOV brings a 2.6% improvement and using ASPP further improves results by 1.2%. Adopting data augmentation and CRF as post-processing brings another 0.6% and 0.4%, respectively.
CONCLUSION
Our proposed "DeepLab" system re-purposes networks trained on image classification to the task of semantic segmentation by applying 'atrous convolution' with upsampled filters for dense feature extraction. We further extend it to atrous spatial pyramid pooling, which encodes objects as well as image context at multiple scales. To produce semantically accurate predictions and detailed segmentation maps along object boundaries, we also combine ideas from deep convolutional neural networks and fully-connected conditional random fields. Our experimental results show that the proposed method significantly advances the state-of-the-art on several challenging datasets, including the PASCAL VOC 2012 semantic image segmentation benchmark, PASCAL-Context, PASCAL-Person-Part, and Cityscapes. | 2016-06-02T21:52:21.000Z | 2016-06-02T00:00:00.000 | {
"year": 2016,
"sha1": "cab372bc3824780cce20d9dd1c22d4df39ed081a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1606.00915",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cab372bc3824780cce20d9dd1c22d4df39ed081a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
93195179 | pes2o/s2orc | v3-fos-license | The DNA-Damage Response to Ionizing Radiation in Human Lymphocytes
The human genome is constantly subjected to DNA damage derived from endogenous and exogenous sources. Normal cellular metabolism can give rise to DNA damage through free radical production and replication errors, whereas environmental agents, such as ultraviolet (UV) and ionizing radiation (IR), induce specific types of lesions. DNA damage can ultimately lead to genomic instability and carcinogenesis if not properly addressed; thus an elaborate network of proteins has evolved in cells to maintain genome integrity through a pathway termed the DNA-damage response (DDR). DDR allows DNA damage detection, signal propagation and transduction to a multitude of effector proteins, which promote cell survival and activate cell cycle arrest to allow DNA repair. When cells are unable to properly repair DNA, apoptosis or senescence pathways may be triggered, thus eliminating the possibility of passing on damaged or unrepaired genetic material to the progeny. The ultimate goal of the DDR is to protect the integrity of genetic information and its faithful transmission, either to DNA by replication or to mRNA by transcription. Therefore, dysregulation of the DDR pathway can contribute to carcinogenesis and developmental defects. Ionizing radiation represents a mutagenic agent to which the human population is exposed for environmental, occupational or accidental reasons. The biological effects of IR depend on the quality and the dose of radiation and on the cell type. Linear energy transfer (LET) represents the energy lost per unit distance as an ionizing particle travels through a material, and it is used to quantify the effects of IR on biological specimens. High-LET radiations (i.e. alpha-particles, neutrons, protons) are densely ionizing, since they lose their energy over a small distance, causing dense ionization along their track with highly localized multiple DNA damage. Low-LET radiations, such as X- and γ-rays, are sparsely ionizing, since they produce ionizations sparsely along their track and, hence, almost homogeneously within a cell. The biological effects of high-LET radiations are in general much higher than those of low-LET radiations with the same energy, because high-LET radiation deposits most of its energy within the volume of one cell and the damage to DNA is therefore larger (Anderson et al., 2002; Brenner & Ward, 1992; Prise et al., 2001). Radiation is potentially harmful to humans, because the ionization it produces can significantly alter the structure of molecules within a living cell. The exposure to ionizing radiation elicits a complex cell response to overcome the dangerous effects of the DNA-radiation interaction, such as reactive oxygen species (ROS) production, base oxidation and DNA break formation (i.e. single- and double-strand breaks).
Introduction
The human genome is constantly subjected to DNA damage derived from endogenous and exogenous sources. Normal cellular metabolism can give rise to DNA damage through free radical production and replication errors, whereas environmental agents, such as ultraviolet (UV) and ionizing radiation (IR), induce specific types of lesions. DNA damage can ultimately lead to genomic instability and carcinogenesis if not properly addressed; thus an elaborate network of proteins has evolved in cells to maintain genome integrity through a pathway termed the DNA-damage response (DDR). DDR allows DNA damage detection, signal propagation and transduction to a multitude of effector proteins, which promote cell survival and activate cell cycle arrest to allow DNA repair. When cells are unable to properly repair DNA, apoptosis or senescence pathways may be triggered, thus eliminating the possibility of passing on damaged or unrepaired genetic material to the progeny. The ultimate goal of the DDR is to protect the integrity of genetic information and its faithful transmission, either to DNA by replication or to mRNA by transcription. Therefore, dysregulation of the DDR pathway can contribute to carcinogenesis and developmental defects. Ionizing radiation represents a mutagenic agent to which the human population is exposed for environmental, occupational or accidental reasons. The biological effects of IR depend on the quality and the dose of radiation and on the cell type. Linear energy transfer (LET) represents the energy lost per unit distance as an ionizing particle travels through a material, and it is used to quantify the effects of IR on biological specimens. High-LET radiations (i.e. alpha-particles, neutrons, protons) are densely ionizing, since they lose their energy over a small distance, causing dense ionization along their track with highly localized multiple DNA damage. Low-LET radiations, such as X- and γ-rays, are sparsely ionizing, since they produce ionizations sparsely along their track and, hence, almost homogeneously within a cell. The biological effects of high-LET radiations are in general much higher than those of low-LET radiations with the same energy, because high-LET radiation deposits most of its energy within the volume of one cell and the damage to DNA is therefore larger (Anderson et al., 2002; Brenner & Ward, 1992; Prise et al., 2001). Radiation is potentially harmful to humans, because the ionization it produces can significantly alter the structure of molecules within a living cell. The exposure to ionizing radiation elicits a complex cell response to overcome the dangerous effects of the DNA-radiation interaction, such as reactive oxygen species (ROS) production, base oxidation and DNA break formation (i.e. single- and double-strand breaks).

The E3 ubiquitin ligase RNF8 promotes the maturation of DSB-associated chromatin (Huen et al., 2007; Mailand et al., 2007; Kolas et al., 2007; Wang et al., 2007). Through its direct interaction with MDC1, RNF8 is recruited to DSB sites along with the other factors in the initial wave of protein accumulation at IRIF (Mailand et al., 2007). Here, RNF8 initiates a complex and tightly regulated ubiquitylation cascade of histones H2A and H2AX at the DSB-flanking chromatin, which causes chromatin restructuring (through incompletely understood mechanisms) associated with the generation of binding sites for protein complexes that accumulate downstream of these early factors (Huen et al., 2007; Mailand et al., 2007). The covalent attachment of small ubiquitin-like modifier (SUMO) proteins to specific lysine residues of target proteins, a process termed sumoylation, is a recently discovered protein modification that plays an important role in regulating many diverse cellular processes. Sumoylation is a signalling mechanism which, analogous to and in parallel with ubiquitination, plays an important role in chromatin remodelling at DSB sites. Sumoylation is catalyzed by SUMO-specific E1, E2 and E3 enzymes and is reversed by a family of Sentrin/SUMO-specific proteases, SENPs. The SUMO E3 ligases PIAS1 and PIAS4 are required for recruitment of the proteins BRCA1 and 53BP1 to IRIF, respectively, and both SUMO1 and SUMO2/3 accumulate at IRIF (Galanty et al., 2009; Morris et al., 2009). Moreover, replication protein A (RPA70) sumoylation facilitates recruitment of RAD51 to the DNA damage foci to initiate DNA repair through homologous recombination (Dou et al., 2010).
Surviving fraction, HPRT mutant frequency and molecular characterization of mutations in irradiated human lymphocytes
To contribute to the understanding of the DDR pathway following radiation-induced damage, we studied the effects of IR on human peripheral blood lymphocytes (PBL) irradiated in vitro with different doses of γ-rays and low-energy protons (0.88 MeV; LET: 28 keV/μm). Irradiated PBL were assayed for cell viability, for mutant frequency at the hypoxanthine-guanine phosphoribosyl transferase (HPRT) gene, and for molecular characterization of mutations. The HPRT gene, which in humans covers 44 kb and encodes a non-essential protein, allows a wide variety of mutations, from point mutation to total gene deletion, to be detected by using the HPRT mutation assay. Deletion of DNA segments is the predominant form of radiation damage in cells that survive irradiation, and the mechanisms for producing deletion mutations appear to be very complex and dependent on target cell, gene studied, dose, dose-rate and radiation quality (Schwartz et al., 2000). Large deletions are thought to derive from two DNA double strand breaks close enough to interact with each other. Thus, deletion frequency should be dependent on radiation dose and dose-rate. All PBL samples, irradiated either with γ-rays or protons, showed a dose-dependent decrease in cell survival and an increase in HPRT mutant frequency. In Table 1 we report the surviving fraction (SF, % ± S.E.) and HPRT mutant frequency (MF) in human PBL irradiated with different doses of γ-rays and low-energy protons. Molecular analyses of HPRT mutants were carried out in clones derived from PBL exposed to γ-rays (1-4 Gy) and to low-energy protons (0.5-2 Gy), and in non-irradiated clones of the same donors. Among the mutant clones obtained from γ-irradiated PBL, point mutations were the only kind of mutation in clones irradiated with 1 Gy, whereas deletions were the prevalent mutations among clones irradiated with 4 Gy. In contrast, no partial or total deletions of the HPRT gene were detected in mutant clones isolated after proton irradiation. Figure 1 shows the percentages of mutation types calculated over the total number of mutations derived from human PBL irradiated with both radiation qualities. The difference in the mutational spectra between γ-rays and protons probably depends on the nature of the IR. Complex gene rearrangements and deletions are assumed to be a specific signature of exposure to high-LET radiation in mammalian cells. Nevertheless, the absence of this kind of mutation in PBL irradiated with protons could be due to their lower survival in comparison with γ-irradiated PBL, as a consequence of the induction of lesions that are more cytotoxic than mutagenic.
Double strand break repair in irradiated human lymphocytes
To evaluate the repair of DSBs in PBL irradiated with γ-rays or low-energy protons, we analyzed γ-H2AX kinetics through foci formation and disappearance. The presence of nuclear foci was monitored by in situ immunofluorescence at different time points after IR.
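The text does not describe how foci were scored beyond in situ immunofluorescence. One common automated approach, shown here only as a sketch and not as the authors' protocol, is Laplacian-of-Gaussian blob detection on the γ-H2AX channel of each segmented nucleus; the parameters and the "positive cell" threshold below are assumptions.

```python
# Sketch of automated gamma-H2AX focus counting in single-nucleus images.
# Assumes one grayscale image per nucleus; all parameters are illustrative.
import numpy as np
from skimage.feature import blob_log

def count_foci(nucleus_img: np.ndarray) -> int:
    """Count bright focal spots in one nucleus with Laplacian-of-Gaussian."""
    img = nucleus_img.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # scale to [0, 1]
    blobs = blob_log(img, min_sigma=1, max_sigma=4, threshold=0.1)
    return len(blobs)                                           # one blob ~ one focus

# Hypothetical usage over cropped nuclei loaded e.g. with skimage.io.imread:
# foci_per_nucleus = [count_foci(n) for n in nuclei]
# positive_fraction = np.mean([f >= 4 for f in foci_per_nucleus])  # assumed cut-off
```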
Figure 2 shows the different γ-H2AX foci patterns at 2 h after IR with high- and low-LET radiation, reflecting the sparsely and densely ionizing nature of the two radiation qualities. In irradiated PBL the kinetics of DSB repair differed according to the quality of radiation. In particular, the fraction of foci-positive cells was higher in γ-irradiated than in proton-irradiated lymphocytes at all times, except at 24 h after IR. Early after irradiation (30 min and 2 h), γ-H2AX foci were present in 80% and 43% of PBL irradiated with γ-rays and protons, respectively (Fig. 3A). This difference is mainly due to the quality of radiation: while sparsely ionizing radiation such as γ-rays loses its energy in all directions, thus hitting all nuclei, densely ionizing radiation such as protons hits only the fraction of cells lying along its track. The preferential production of complex aberrations is related to the unique energy deposition patterns produced by densely ionizing radiation, causing highly localized multiple DNA damage. At 6 h after IR the percentage of foci-positive cells decreased, revealing the capacity for DSB repair in both kinds of irradiated lymphocytes, although the repair kinetics was faster in γ-irradiated PBL. At 24 h after IR the percentage of γ-H2AX foci-positive cells tended to reach the value of non-irradiated PBL, in both γ-irradiated and proton-irradiated PBL.
The mean number of γ-H2AX foci per nucleus was higher in PBL irradiated with γ-rays than with protons, at all times after IR (Fig. 3B). In our experiments, most PBL displayed 10-20 or more γ-H2AX foci/nucleus 30 min after irradiation, giving a maximum yield of 4 foci/Gy, a number similar to that reported for human PBL irradiated with X-rays (about 10 foci/Gy) (Sak et al., 2007; Scherthan et al., 2008), but much lower than that determined in human fibroblasts (32.2 foci/Gy) (Hamada et al., 2006). It has been reported that the number of γ-H2AX foci is consistent with the theoretically calculated number of DSB/Gy for sparsely ionizing radiation (i.e. about 40) (Ward, 1991), if one DSB is contained per focus. The lower number of foci detected in peripheral lymphocytes could depend on the large amount of heterochromatin of resting cells, from which γ-H2AX foci are mostly excluded (Cowell et al., 2007), as well as on the small nuclear volume, where overlapping foci are difficult to detect separately. Thus, in accordance with the observations of Scherthan et al. (2008), we hypothesize that γ-H2AX foci detected very early after irradiation contained more than one DSB; later on, the number of foci decreased and probably each focus contained only one DSB. Furthermore, we found a size increase of γ-H2AX foci in cells irradiated with protons, as compared with gamma irradiation, probably as a consequence of the DSB clusters induced by high-LET radiation. Our results are in accordance with those in melanoma cells exposed to low- and high-LET radiation (Ibañez et al., 2009).
Cellular effects of ionizing radiation in human lymphocytes cultured under microgravity conditions
The cellular response to ionizing radiation depends not only on genetic and physiological features of the biological system, but also on the environmental conditions occurring during DNA repair. Space missions expose humans to an exogenous environment not encountered within our biosphere, in particular the simultaneous presence of radiation and a condition of weightlessness called microgravity (10⁻⁴-10⁻⁶ g). One of the important aspects of risk estimation during space flights is whether the effects of radiation on astronauts are influenced by microgravity. The combination of microgravity and ionizing radiation has been demonstrated to have a synergistic action on human cells, both in vivo and in vitro. The effects of the space environment experienced by astronauts include loss of calcium and minerals from bone, decreased skeletal muscle mass and depressed immune function (Longnecker et al., 2004). Ex vivo astronaut studies, in-flight cell cultures, and ground models of microgravity have consistently demonstrated inhibition of lymphocyte proliferation and suppressed or altered cytokine secretion (Lewis et al., 1998; Grimm et al., 2002). The biological effects of reduced gravity described in human cell cultures include apoptosis induction, cytoskeletal alteration, cell growth inhibition and an increased frequency of chromosome aberrations (Lewis et al., 1998; Grimm et al., 2002; Cubano et al., 2000; Sytkowski et al., 2001; Mosesso et al., 2001; Durante et al., 2003). Gene expression analyses of human cells grown in microgravity during space flights or in modeled microgravity (MMG) on Earth report changes among genes involved in apoptosis induction, cell adhesion, cytoskeletal features and cell differentiation, even if large differences in culture conditions, cell types and methods to simulate microgravity were adopted in those experiments (Hammond et al., 2000; Lewis et al., 2001; Torigoe et al., 2001; Infanger et al., 2007). While the genotoxic effects of ionizing radiation have been intensely studied, the consequence of reduced gravity combined with radiation is still unclear. Therefore, it is of special importance to verify whether the DDR is affected by the combined effects of IR and microgravity, in view of the prolonged permanence of man in future space missions. To analyze the possibility that a reduced gravitational force impairs the DDR pathway, increasing the risk of exposure to the conditions occurring during spaceflight, we studied the DDR to ionizing radiation in human PBL incubated in MMG and in parallel static conditions. Microgravity was simulated by culturing PBL in the Rotating Wall Vessel bioreactor (Synthecon, Cellon; Fig. 4) placed inside a humidified incubator, vertically rotating at 23 rpm. The Rotating Wall Vessel was developed at the NASA Johnson Space Center (Houston, TX) to simulate, as accurately as possible, culture conditions predicted to occur during experiments in space. In the rotating system, gravity is balanced by equal and opposite mechanical forces (centrifugal, Coriolis and shear components), and the gravitational vector is reduced to about 10⁻² g. In these conditions, single cells are nearly always in suspension, rotating quasi-stationary with the fluid, in a low-shear culture environment (Unsworth, 1998; Maccarone et al., 2003). Ground-based (1 g) PBL cultures, both irradiated and non-irradiated, were kept at the same cell density in flasks inside a humidified incubator for 24 h.
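The residual acceleration of roughly 10⁻² g quoted above is consistent with the rotation speed of the vessel. The back-of-the-envelope check below assumes a radial position of the cells of about 2 cm, a figure that is not given in the text and is purely illustrative.

```python
# Rough check of the residual (centrifugal) acceleration in a Rotating Wall
# Vessel spinning at 23 rpm. The assumed radius is NOT stated in the chapter.
import math

RPM = 23.0
omega = 2.0 * math.pi * RPM / 60.0        # angular velocity, rad/s (~2.41)
radius_m = 0.02                            # assumed distance of cells from the axis
a_centrifugal = omega ** 2 * radius_m      # m/s^2
g = 9.81
print(f"omega = {omega:.2f} rad/s, a = {a_centrifugal:.3f} m/s^2 "
      f"= {a_centrifugal / g:.3f} g")      # ~0.01 g, i.e. of order 1e-2 g
```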
The DNA-damage response of human peripheral lymphocytes cultured in microgravity after γ-irradiation
The DNA-damage response was investigated in human PBL irradiated in vitro with different doses of gamma rays and incubated for 24 h in 1 g or in modeled microgravity (MMG). While cell survival was only slightly affected by MMG, the HPRT mutant frequency significantly increased in PBL incubated in MMG after irradiation compared with those maintained in 1 g. Given the increase of HPRT mutants in MMG, we investigated whether the reduced gravity affected the progression of the rejoining of double strand breaks (DSBs) in human PBL irradiated with γ-rays and incubated in MMG or in 1g. The kinetics of γ-H2AX foci was monitored during the repair incubation, showing that DSB rejoining was slower in MMG than in 1g at 6 and 24 h after irradiation. In addition, the mean number of γ-H2AX foci per nucleus was significantly higher in MMG than in 1g at the same time-points (Fig. 5). To verify whether the disappearance of γ-H2AX foci correlated with the rejoining of double strand breaks, we subjected irradiated lymphocytes to a non-radioactive PFGE assay (Gradzka et al., 2005). The fraction of DNA released (FR) from the plug after PFGE was considered a measure of the DSB level. The kinetics of DSB removal in lymphocytes irradiated and incubated in 1g exhibits a typical fast initial component and a decreasing rate at longer repair intervals, in accordance with data from other authors (Stenerlow et al., 2000; Gradzka et al., 2005). Both methods we used to quantify DNA fragmentation reported a lower rate of DSB rejoining in lymphocytes incubated in MMG compared to those in 1g, in agreement with the kinetics of γ-H2AX foci. Our results provide evidence that MMG incubation during DNA repair delayed the rate of radiation-induced DSB rejoining and, as a consequence, increased the genotoxic effects of ionizing radiation.
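The "fast initial component and decreasing rate at longer repair intervals" described above is often summarised by fitting a two-component exponential to the fraction of damage remaining over repair time. The sketch below applies such a fit to invented data points, not to the authors' measurements.

```python
# Two-component (fast + slow) exponential fit of DSB rejoining kinetics, as
# commonly applied to PFGE fraction-released data. All data are invented.
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a_fast, k_fast, k_slow, residual):
    """Fraction of initial damage remaining at repair time t (hours)."""
    a_slow = 1.0 - a_fast - residual
    return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t) + residual

t_hours   = np.array([0.0, 0.5, 2.0, 6.0, 24.0])
fr_remain = np.array([1.00, 0.55, 0.35, 0.20, 0.08])   # normalised, illustrative

popt, _ = curve_fit(biexponential, t_hours, fr_remain,
                    p0=[0.6, 2.0, 0.1, 0.05],
                    bounds=([0, 0, 0, 0], [1, 10, 1, 0.5]))
a_fast, k_fast, k_slow, residual = popt
print(f"fast half-time ~ {np.log(2) / k_fast:.2f} h, "
      f"slow half-time ~ {np.log(2) / k_slow:.1f} h")
```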
We then assessed whether MMG incubation affected IR-induced apoptosis. Human lymphocytes, irradiated and non-irradiated, were scored for the presence of fragmented nuclei and apoptotic bodies. The apoptotic index (A.I.) increased with time after irradiation and at 24 h it was significantly higher in PBL incubated in MMG compared to those in 1g (19.3% vs. 13.7%, respectively; P < 0.001). Since DSBs can be induced not only by radiation but also by DNA fragmentation during early apoptosis, we measured caspase-3 activation at the same time-points by the cleavage of the peptide substrate DEVD-AFC. Caspase-3 activation was only slightly higher in PBL maintained in MMG than in 1g, in contrast to the high persistence of foci-positive cells (P < 0.01) and of foci number/nucleus (P < 0.001), suggesting that the level of H2AX phosphorylation was principally correlated with delayed DSB resolution rather than with apoptosis induction.
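Comparisons of proportions such as the MMG vs. 1g apoptotic indices above (19.3% vs. 13.7%) can be made with a 2x2 test; the figure legends in this chapter cite the G test. The sketch below applies it to invented cell counts chosen only to match the quoted percentages.

```python
# G test (log-likelihood ratio chi-square) on a 2x2 table of apoptotic vs.
# non-apoptotic cells. The counts are invented; the real scored cell numbers
# per condition are not given in the text.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [193, 1000 - 193],   # MMG: 19.3 % apoptotic out of an assumed 1000 cells
    [137, 1000 - 137],   # 1 g: 13.7 % apoptotic out of an assumed 1000 cells
])
g_stat, p_value, dof, _ = chi2_contingency(table, lambda_="log-likelihood")
print(f"G = {g_stat:.2f}, dof = {dof}, p = {p_value:.4f}")
```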
We then tested the possibility that MMG incubation affects the DNA damage response by altering the recruitment of the signaling proteins 53BP1, NBS1-p343 and ATM-p1981, which co-localize with γ-H2AX foci at DSB sites (Fig. 6A). After irradiation, ∼90% of cells became foci-positive for the three proteins in both gravity conditions (not shown). In contrast to γ-H2AX, the fraction of foci-positive cells remained high up to 24 h after irradiation in 1g, and no differences between the two culture conditions were detected. The number of foci/nucleus significantly decreased during post-irradiation incubation, from 14-16 foci/nucleus at 30 min to 4-5 foci/nucleus at 24 h (Fig. 6B), without differences between samples in 1g and MMG. The discrepancies with the kinetics of γ-H2AX foci suggest that these proteins could represent the remaining scaffold structure used for DSB repair that persists after repair has been completed (Markova et al., 2007; van Veelen et al., 2005).
The DNA-damage response of human tumoral lymphocytes cultured in microgravity after γ-irradiation
We analyzed the DNA damage response to radiation also in human tumoral lymphocytes (TK6 cells, lymphoblastoid B cells) irradiated with γ-rays (1, 2, 4 Gy) and incubated in 1g or in MMG during the repair time. In irradiated TK6 cells, we observed a higher survival in MMG than in 1g, and the difference was significant at 4 Gy. In addition, in cells maintained in MMG rather than in 1g after γ-irradiation, a higher frequency of HPRT mutants was observed at all irradiation doses, particularly at 4 Gy (Figure 7A). Remarkably, at this dose, mutant frequency may often be underestimated, since cells with many and severe mutations are unable to repair DNA damage and die. Instead, in TK6 cells cultured in MMG after irradiation, mutant frequency increased with dose up to 4 Gy (Figure 7A). The frequency of micronucleated cells was measured in both gravity conditions after irradiation. At the end of post-irradiation incubation (24 h time-point), the percentage of micronuclei (MN) was significantly higher in both non-irradiated and irradiated cells incubated in MMG compared with 1g (Fig. 7B). Eighteen hours later (42 h from irradiation), the percentage of MN in cultures incubated in MMG was higher than in 1g only at the 2 Gy γ-ray dose. At the 48 h time-point, MN frequencies observed in 1g or MMG were comparable. As expected, MN significantly increased after irradiation in both gravity conditions with respect to non-irradiated cells; a significant difference was still observed at 48 h after irradiation at both 1 and 2 Gy. The significant increase of micronucleated cells in MMG suggested that MMG itself was able to induce chromosome damage. The effect of MMG incubation on the cell cycle alterations induced by γ-ray exposure was assessed by flow cytometry analysis. Figure 8 shows the cell cycle distribution of TK6 cells at various time-points from irradiation and incubation in MMG or 1g, by representative DNA histograms. γ-ray irradiation induced an increase in G2/M-phase cells and a reduction in S-phase cells, both in TK6 maintained in 1g and in MMG after irradiation. At the end of MMG or 1g incubation (24 h time-point), the percentages of cells in G1-phase were higher in cultures irradiated with 2-4 Gy and incubated in MMG compared with cells maintained in 1g. Moreover, the G2/M block after irradiation was less evident in MMG than in the 1g condition. Radiation-induced apoptosis was also affected in TK6 cells by MMG incubation. Induction of apoptosis was significantly lower in irradiated TK6 cells incubated in MMG compared with cells irradiated with the same dose and incubated in 1g. The differences were more pronounced in cells analyzed at long post-incubation times (72 h time-point). The observed decrease of the apoptotic response in MMG-incubated cultures could allow severely damaged cells, which in the 1g condition should be eliminated by selection, to survive, with negative consequences for genomic integrity. Alterations in the cell response to ionizing radiation due to MMG incubation during the DNA repair period may be caused by the reduced activity of some proteins which play a crucial role in damage signaling. Previous data have shown that absence or reduction of gravity can alter gene expression (Walther et al., 1998; Hughes-Fulford, 2001; Kita et al., 2000), which in turn may explain the results reported here. It remains to be determined if one upstream or several downstream genes belonging to the pathway of the radiation response are involved in the effects induced by MMG incubation.
Gene expression changes in human lymphocytes cultured in microgravity during the DNA-damage response to radiation
Gene expression changes represent an early bio-indicator of radiation exposure. Given the increase of HPRT mutants observed in human lymphocytes incubated in modeled microgravity, we investigated whether this gravity condition can alter the transcription of 14 genes representative of the main DNA repair pathways: four genes (Ku70, Ku80, DNA-ligase IV, XRCC4) are involved in non-homologous end joining (NHEJ), three genes (BRCA1, BRCA2, RAD51) in homologous recombination (HR), four genes (XRCC1, PCNA, GADD45A, p21Cip1/Waf1) in base excision repair (BER) and two genes (DDB2, XPC) in nucleotide excision repair (NER). DNA-ligase I, involved in both the BER and NER repair pathways, was also analyzed.
Analyses were carried out in three pools of three donors each, by quantitative real-time PCR.
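The quantification model behind the expression ratios (R values) shown in Fig. 9 is not detailed here. A widely used scheme for relative quantification by real-time PCR is the 2^-ΔΔCt method, sketched below with invented Ct values; it is shown only as one plausible way such R values can be obtained.

```python
# Relative expression ratio by the 2^-ddCt method, shown as an illustration:
# the chapter does not state which quantification model was actually used.
def relative_expression(ct_gene_treated: float, ct_ref_treated: float,
                        ct_gene_control: float, ct_ref_control: float) -> float:
    d_ct_treated = ct_gene_treated - ct_ref_treated   # normalise to reference gene
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)                             # fold change vs. control

# Hypothetical Ct values for a BER gene in irradiated vs. non-irradiated PBL:
r = relative_expression(ct_gene_treated=24.1, ct_ref_treated=18.0,
                        ct_gene_control=25.6, ct_ref_control=18.2)
print(f"R = {r:.2f}")   # R > 1 indicates up-regulation after irradiation
```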
Results show that almost all BER and NER genes were up-regulated in irradiated PBL, whereas the expression of HR and NHEJ genes was only slightly or not at all affected by radiation (Fig. 9). Incubation in modeled microgravity after irradiation did not significantly change the expression of genes involved in DNA repair, suggesting that transcriptional impairment was not responsible for the increase of mutant frequency observed in irradiated cells incubated in microgravity in comparison to the static 1 g condition. These findings are in agreement with previous studies on gene expression of non-irradiated space-flown and RWV-cultured cells, showing that DNA repair genes were unaffected by low gravity, whereas intracellular signaling, growth regulatory, cytoskeletal and tumor suppressor genes were altered (Lewis et al., 2001; Hammond et al., 2000; Pardo et al., 2005). Recently, a new class of important gene modulators has been discovered: microRNAs. They are a large family of small non-coding RNAs of 18-24 nucleotides that negatively regulate gene expression levels by binding to microRNA-binding elements in the 3' untranslated region (3'UTR) of target mRNAs, thereby triggering decreased protein translation mainly through mRNA degradation (Guo et al., 2010). A single miRNA may have broad effects on gene expression networks, such as regulating cell lineage specificity, cellular functions or stress response. Considering the complexity of the DNA-damage response (DDR), which is addressed to maintaining genome integrity through cell cycle arrest, DNA repair and/or apoptosis, it is expected that miRNAs have an important role in this cellular process. Whilst miRNA-mediated DDR has been studied after UV radiation and hypoxic stress (Pothof et al., 2009; Crosby et al., 2009), that of radiation combined with microgravity has not been studied yet and should give important information for risk assessment in the space environment. MicroRNA profiling was carried out by using the platform "Human miRNA Microarray kit (V2)" (Agilent), according to the Agilent miRNA protocol. For the mRNA expression profile we used the "Whole Human Genome Oligo Microarray" (Agilent), consisting of ~41,000 (60-mer) oligonucleotide probes, which span conserved exons across the transcripts of the targeted full-length genes. Identification of differentially expressed genes and miRNAs was performed with the one- and two-class Significance Analysis of Microarray (SAM) program (Tusher et al., 2001) with default settings. Figure 10A shows a dendrogram of some miRNAs differentially expressed following ionizing radiation in human PBL. MiRNA expression profiling was carried out at 4 h and 24 h after irradiation with 0.2 Gy and 2 Gy and incubation in 1g or MMG, and compared to that of non-irradiated PBL maintained in parallel conditions. Results showed that in both gravity conditions the miRNA expression profile was dose-specific, as indicated by the low percentage of miRNAs responsive to both doses; moreover, the effects of the higher dose predominated at the late time point. Interestingly, MMG tended to decrease the number of radio-responsive miRNAs with respect to the 1g condition, in particular at 24 h after irradiation (Figure 10B).
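A minimal version of the first step of this analysis, computing log2(irradiated/non-irradiated) ratios and flagging responsive miRNAs, is sketched below. The real study used the SAM procedure with default settings, which is statistically more involved than this toy fold-change filter, and all names and values here are invented.

```python
# Toy illustration of flagging "radio-responsive" miRNAs from expression
# ratios; a simplified stand-in for the SAM analysis cited in the text.
import numpy as np

mirnas     = ["miR-A", "miR-B", "miR-C", "miR-D"]       # hypothetical names
irradiated = np.array([120.0,  40.0, 300.0,  80.0])      # e.g. 2 Gy, 24 h
control    = np.array([ 60.0,  45.0, 310.0, 200.0])      # non-irradiated

log2_ratio = np.log2(irradiated / control)
responsive = np.abs(log2_ratio) >= 1.0                    # >= 2-fold change

for name, lr, hit in zip(mirnas, log2_ratio, responsive):
    direction = "up" if lr > 0 else "down"
    flag = "  <-- responsive" if hit else ""
    print(f"{name}: log2 ratio = {lr:+.2f} ({direction}){flag}")
```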
To predict the target genes of the differentially expressed miRNAs, we first performed a computational analysis using the PITA algorithm available online (Kertesz et al., 2007). However, all available software tools for target prediction are characterized by a large fraction of false positives; thus, to identify the most likely targets, we integrated mRNA and miRNA expression data, obtained on the same lymphocyte samples, using the MAGIA (MiRNA And Genes Integrated Analysis) web tool (Sales et al., 2010). We used a non-parametric index (Spearman correlation coefficient), the most appropriate statistical coefficient for a small number of measures, to estimate the degree of anti-correlation (e.g. up-regulated miRNA and corresponding down-regulated mRNA target) between any putative pair of miRNA and mRNA (Xin et al., 2009; Wang and Li, 2009). The anti-correlated transcripts were then classified with the DAVID (Database for Annotation, Visualization and Integrated Discovery) web tool (Huang et al., 2009), to determine which Gene Ontology (GO) terms were significantly enriched in our set of genes. The GO analysis of anti-correlated genes showed that, in MMG-incubated PBL, the categories of response to stress, to DNA damage and to apoptosis were not enriched. The miRNA-mRNA anti-correlations of the DDR pathway were visualized by using the Cytoscape software package (Shannon et al., 2003; Cline et al., 2007) (Figure 11). The results showed that, in most cases, the same mRNA was targeted by different miRNA species according to the gravity condition. Future research will aim to validate several of the anti-correlations highlighted by our analyses as important in the DDR pathway. In particular, we will perform a functional assay to demonstrate the regulatory effect of a particular miRNA on its putative target mRNA. The luciferase assay represents the most efficient approach to evaluate the activity of a miRNA on its anti-correlated mRNA, through quantification of the luminescent signal derived from the luciferase reporter enzyme. Cells are co-transfected with a reporter vector containing the firefly luciferase gene together with the 3'UTR target sequence predicted for that miRNA, and with the miRNA precursor (pre-miRNA) or inhibitor (anti-miRNA), which respectively mimics and inhibits the endogenous miRNA. The binding of the pre-miRNA to the complementary target sequence will cause repression of luciferase gene expression, whereas the binding of the anti-miRNA to the endogenous miRNA will induce the expression of the luciferase gene. In addition to the luciferase assay, it would be interesting to study the role of selected miRNAs in the DDR pathway by a biological approach. Usually, several end points such as cell survival, DNA repair, cell cycle progression and apoptosis induction are analyzed in cells over- or under-expressing the miRNA of interest.
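The core of the integration step described above, scoring anti-correlation between a miRNA and a predicted target across the same samples with Spearman's coefficient, can be sketched as follows. The expression values and the pairing are invented; the study itself used the MAGIA web tool rather than a script like this.

```python
# Sketch of miRNA-mRNA anti-correlation scoring with Spearman's coefficient,
# the non-parametric index mentioned in the text. Values are invented.
from scipy.stats import spearmanr

# Matched measurements across the same lymphocyte samples/conditions:
mirna_expr  = [1.2, 1.8, 2.5, 3.1, 3.6, 4.0]   # a hypothetical up-regulated miRNA
target_mrna = [5.0, 4.2, 3.9, 3.1, 2.4, 2.0]   # a predicted target transcript

rho, p_value = spearmanr(mirna_expr, target_mrna)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# A strongly negative rho (here -1.0) is taken as supporting evidence that the
# predicted pair is functionally related (miRNA up, target mRNA down); such
# pairs would then be carried forward to GO enrichment and network views.
```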
The DNA-damage response of human lymphocytes to the indirect effects of ionizing radiation
In addition to the cellular effects arising as a direct response to ionizing radiation, in the last decade it has been suggested that extranuclear or extracellular targets can contribute to the genetic damage in non-irradiated (bystander) cells. The bystander effect (BE) is the biological response of non-irradiated cells induced by contact with irradiated cells. The contact with bystander factors may occur by direct cell-cell interaction or be mediated by the fluid surrounding the cells. It has been reported that the BE causes cell death, cell cycle arrest, apoptosis and changes in gene expression, and increases micronucleus induction, chromosomal aberrations, mutation frequency, and DNA damage in cells neighboring hit cells. In contrast to DNA damage induced by direct irradiation, bystander cell DNA damage is still poorly understood. Many data have shown that early events of the radiation-induced bystander effect are rapid calcium fluxes and generation of reactive oxygen species in bystander cells. Mitochondria seem to play a central role in bystander signaling: irradiated-cell conditioned media can cause changes of mitochondrial distribution, loss of mitochondrial membrane potential, increases in ROS, and an increase in apoptosis among the medium-receiving cells, which can be blocked by treatments with antioxidants (Chen et al., 2008). Experiments carried out in hepatoma cell lines provide evidence that the BE can be modulated by the p53 status of irradiated cells and that a p53-dependent release of cytochrome c from mitochondria may be involved in producing the BE (He et al., 2011). We investigated the mechanisms of the medium-mediated bystander response induced by low doses of γ-rays in human tumoural lymphocytes (TK6 cells), a cell line growing in suspension, in which gap-junction communications are not involved in transferring bystander signals and only medium-mediated molecules may be responsible for BE induction. Cell cultures were irradiated and the culture medium was discarded immediately after irradiation and replaced with fresh medium to eliminate the ROS originating during irradiation. Irradiated cells were incubated for 6 h in fresh medium which, at the end of the incubation time, is referred to as conditioned medium (CM) and was used to incubate non-irradiated TK6 cells for different times (2-48 h). In bystander cultures, cell mortality at the fixed incubation times ranged between 24 and 19%, values very similar to those of directly irradiated cells (28 and 20%). The mortality percentages for all incubation times were significantly higher with respect to those of the controls (0 Gy and 0 Gy CM). The survival fraction of directly 1 Gy-irradiated or CM-incubated cells was determined by the clonogenic assay. The data show that both irradiated and bystander TK6 cells had a lower cloning efficiency than their respective controls. Figure 12 reports the results on cell mortality and survival (given as the ratio of the cloning efficiency of treated vs. untreated control cells) in TK6 cells exposed directly to IR or to CM. Apoptosis induction was tested by the presence of fragmented nuclei and apoptotic bodies at 2, 24 and 48 h after 1 Gy irradiation or CM incubation. The apoptotic index (A.I.)
ranged between 7 and 9% in irradiated cells and between 6 and 7.5% in bystander cells, and was significantly higher than in the relative controls at all times (Figure 13). The induction of apoptosis was also analyzed by the activation of caspase-3, the principal effector caspase, assayed by the cleavage of the peptide substrate DEVD-AFC at 1, 2, 24 and 48 h after irradiation or CM incubation. In bystander cells caspase-3 activation increased from 1.4- to 2.7-fold during the 48 h of CM incubation, suggesting that bystander apoptosis increases after 48 h. Bystander apoptosis in TK6 cells was sensitive to the inhibitor of caspase-8, Z-IETD-fmk, added during CM treatment or post-irradiation incubation. The presence of the inhibitor significantly decreased the induction of apoptosis to the control level, but it did not significantly decrease the level of apoptosis in either irradiated or non-irradiated controls (Figure 14). These results suggest that caspase-8 activation is triggered by signaling molecules present in the conditioned medium. The addition of the ROS scavengers Cu-Zn superoxide dismutase and N-acetylcysteine to the conditioned medium allowed us to investigate the involvement of oxidative stress in inducing bystander apoptosis. ROS scavengers did not significantly decrease the apoptotic index in CM cultures; by treating non-irradiated TK6 cells with medium irradiated without cells (IM), we evaluated the contribution of the ROS produced by irradiation to inducing bystander apoptosis. IM incubation for 2 h increased the apoptotic index, which was totally inhibited by ROS scavengers and little affected by incubation with the caspase-8 inhibitor, whereas at 24 and 48 h no significant differences among samples incubated with IM were observed. DSBs induced by ionizing radiation can easily be detected by the extensive H2AX phosphorylation occurring near DNA lesions, forming foci that co-localize with several repair proteins (Fernandez-Capetillo et al., 2003). 85% of TK6 cells were γ-H2AX foci-positive at 2 h after irradiation with 1 Gy; this percentage then decreased to the level of non-irradiated cells 24 h later, fitting DNA repair kinetics. The incubation of cells with CM for 2 h significantly increased the percentage of γ-H2AX foci-positive cells (9-11%) but, when the CM was kept in contact with bystander cells for 24 h, the number of positive cells decreased to control levels, suggesting that DNA lesions induced at the beginning of CM incubation are repaired and no new damage accumulates later. Data from other human cells show that γ-H2AX foci induction in bystander cells persists in time, probably as a consequence of the formation of bystander factors that themselves generate ROS, leading to a self-sustaining system responsible for long-lasting effects (Yang, 2005; Sokolov, 2005; Kashino, 2004; Lyng, 2006). In irradiated TK6 cells both the 53BP1 and NBS1-p343 proteins co-localized with γ-H2AX foci, whereas in bystander cells co-localization was partial or absent (Figure 15). We suggest that the short-lived ROS released into the medium by irradiated cells are responsible for DNA lesions which, unlike double strand breaks, activate H2AX phosphorylation but do not require the 53BP1 and NBS1-p343 proteins to be repaired. It is possible that in our experiments the DNA damage induced by CM treatment consisted of a few DSBs, the repair of which requires the recruitment of the 53BP1 and NBS1-p343 proteins, and mainly of other types of DNA lesions, whose repair occurs without these proteins. Recent studies suggest that there are
important differences between the DNA damage response in directly irradiated cells and in non-targeted cells responding to bystander signals. The DNA damage in bystander cells seems to persist for a prolonged time (Burdak-Rothkamm et al., 2007), in contrast to DNA damage induced directly by irradiation, which is repaired completely within a few hours depending on the radiation dose. Studies carried out in p53 wild-type (TK6), p53 null (NH32) and p53 mutant (WTK1) lymphoblastoid cells using siRNA to knock down DNA-PKcs demonstrated the central role of non-homologous end-joining in processing bystander damage, in contrast to the role of homologous recombination, which seems to be essential only in inducing sister chromatid exchanges in bystander cells (Zhang et al., 2008). The ATM- and Rad3-related (ATR) protein kinase has a central role in DNA damage signaling in bystander cells, with ATM activation occurring downstream of ATR. DNA-PK is essential neither for inducing mortality in bystander cells nor for bystander γH2AX foci induction (Burdak-Rothkamm et al., 2007). These differences between the bystander and direct DNA damage responses offer new potential targets for repair inhibitors, with the aim of protecting bystander normal tissues during cancer radiotherapy.
Conclusions
The DNA-damage response pathway relies on the recruitment and modification of many different proteins that sense and signal the damage and activate transducer and effector proteins involved in cell cycle arrest, DNA repair and apoptosis. A correct DDR safeguards cells, whereas perturbations/defects in this pathway might contribute to the occurrence or to the acceleration of carcinogenesis. Our results have contributed to highlighting the response of human lymphocytes to DNA damage induced directly or indirectly by ionizing radiation.
In particular, novel aspects of low- and high-LET radiation effects on human lymphocytes have been described, such as double strand break repair kinetics, mutational effects, micronucleus induction, apoptosis induction, cell cycle alterations, and gene and microRNA expression changes. In addition, we have reported new findings about the response of human lymphocytes when ionizing radiation exposure occurred in microgravity, a condition which has been experimentally simulated by the Rotating Wall Vessel. The results clearly indicate that modeled microgravity affects the cell response to radiation, thus contributing to increasing the risk of radiation exposure during space missions. Considering that the expression levels of DNA repair genes were not significantly changed in the MMG condition, we suppose that the perturbations in the cell response to ionizing radiation are due to the altered activity of proteins playing an important role in the DDR pathway. Evidence is accumulating on the strict dependence between the efficiency of DNA repair and chromatin structural organization (Gontijo et al., 2003; Rübe et al., 2011). The elaborate higher-order organization of chromatin appears to be important in assembling the repair machinery, improving the accessibility of DNA lesions to repair complexes. Modifications of cell structure and perturbations of nuclear architecture induced by microgravity may affect the accessibility of chromatin to the DNA repair machinery. The preliminary results obtained from miRNA-mRNA profiling represent new insights into radio-responsiveness in MMG. They seem promising for clarifying the role of miRNAs in the DNA-damage response to radiation in microgravity, thus improving the scientific approach towards environmental exposure risk. The studies on the molecular mechanisms of the bystander effect could have great implications for evaluating the radiation risk of IR exposures, and also have the potential to reassess the radiation damage models currently used in radiotherapy. The radiation-induced bystander effect has been shown to occur in a number of experimental systems, both in vitro and in vivo, and it is supposed to be realized through several pathways of transmission of the stress signal: direct cellular contact, interaction through gap junctions, and through the culture medium of the irradiated cells. In our experimental system the conditioned medium was the main way by which the irradiated cells communicated their stressed condition to the non-irradiated cells. The ROS released by irradiated TK6 cells into the culture medium were short-lived, and probably other soluble molecules are necessary to maintain the high level of cell mortality in bystander cells. Recent studies investigating the nature of such molecules suggest that fragments of extracellular genomic DNA, probably released from the apoptotic irradiated cells into the culture medium, are able to induce the bystander effects (Ermakov et al., 2011). Such DNA fragments bind to the Toll-like receptor family, leading to a signaling mechanism whose outcome is the dynamic transformation of the cytoskeleton and alteration in the spatial localization of chromatin portions in the nucleus. Thus, in bystander cells, as in microgravity-incubated lymphocytes, modifications in the nuclear structural organization may affect the assembly of the DNA repair machinery.
Acknowledgment
We gratefully acknowledge Dr. Cristiano De Pittà and Dr. Chiara Romualdi of the Department of Biology, University of Padova, for miRNA and mRNA expression profiling and statistical assistance. We acknowledge Dr. Vito Barbieri of the Department of Oncological and Surgical Sciences of Padova's University for cell irradiation with γ-rays and Roberto Cherubini of the INFN, Laboratori Nazionali di Legnaro, Padova, for cell irradiation with low-energy protons.
Fig. 2. Visualization by in situ immunofluorescence of γ-H2AX foci in human PBL irradiated with γ-rays or low-energy protons. The pattern of γ-H2AX localization within the nucleus is strictly dependent on the quality of radiation. Low-LET radiation, such as γ-rays, hits the cells from all directions, and DSBs are sparsely distributed; on the contrary, high-LET radiation, such as protons, gives rise to clustered DNA damage along tracks.
Fig. 3. Kinetics of γ-H2AX foci in PBL irradiated with γ-rays and low-energy protons during the time after irradiation. A) Fraction of cells positive for γ-H2AX foci and B) mean number of γ-H2AX foci per nucleus.
Fig. 7. A) Mutant frequency at the HPRT locus of irradiated and non-irradiated TK6 cells incubated for 24 h in 1g or in modeled microgravity. B) Micronucleus frequencies (%) in irradiated and non-irradiated TK6 cells incubated in 1g or MMG for the first 24 h after irradiation and then cultured in 1g up to 48 h. *P<0.05; **P<0.01; ***P<0.001 (G test).
Fig. 9. Expression ratios in PBL of pools B-D incubated in 1 g and in modeled microgravity after X-irradiation. (A) R values of BER and NER genes in 1g; (A) R values of BER and NER genes in MMG; (B) R values of HR and NHEJ genes in 1g; (B) R values of HR and NHEJ genes in MMG.
Fig. 10. A) Dendrogram showing several miRNAs differentially expressed in human PBL at 4 and 24 h after irradiation with 0.2 Gy. The range of expression values is determined as the log2 ratio of the irradiated/non-irradiated sample. Down-regulated and up-regulated miRNAs correspond to green and red boxes, respectively. B) Fraction of radio-responsive miRNAs (%) in human PBL irradiated with 0.2 and 2 Gy and incubated for 4 and 24 h in 1g or in modeled microgravity (MMG).
Fig. 11. Example of visualization of inversely correlated miRNA-mRNA relationships in irradiated human PBL. Circles represent transcripts and triangles represent miRNAs, shown with the color corresponding to the expression value.
Fig. 15. Non-irradiated, irradiated and bystander TK6 cells were fixed and co-stained with anti-γ-H2AX (green) and anti-53BP1 and anti-NBS1-p343 (red), at 2 h from irradiation or CM incubation. The red and green images were merged and subjected to co-localization analysis. Arrows indicate γH2AX foci without co-localization of the 53BP1 and NBS1 proteins. Nuclei were counterstained with DAPI.
Table 1
. Surviving fraction (SF) and HPRT mutant frequency (± standard error, S.E.) in human PBL irradiated with γ-rays and low-energy protons. | 2017-09-14T05:17:15.609Z | 2011-10-26T00:00:00.000 | {
"year": 2011,
"sha1": "2f56005ab0e247f20476d91d80936eb8e03943d2",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/22707",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "29c53e5e274e10ed6249f443d9c4b02b74ca7e18",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
220526090 | pes2o/s2orc | v3-fos-license | Covid‐19 and UK Universities
Abstract Universities UK (UUK) has suggested that there may be very significant losses to higher education as a consequence of Covid‐19. However, losses are likely to be substantially lower than the potential losses estimated by UUK. But the magnitude of losses is very uncertain. The UUK’s proposal to restrict undergraduate enrolment per university to stop institutions poaching students is not in the interests of the most highly regarded universities, or that of students. Some rationalisation of the sector should be the price of further government support. Now is also the time to reconsider how university research is funded.
UNIVERSITIES UK (UUK) asked the government for £2.2 billion to help the sector deal with the impact of the Covid-19 pandemic. UUK estimated that the education sector annually generates more than £95 billion for the UK economy, employs the equivalent of 940,000 people, and earns £13.1 billion in export earnings. UUK further estimates that it will lose £790 million in revenues in 2019-20 and potentially £6.9 billion in 2020-21 if foreign students, who are critical to the financial health of the sector, fail to enrol. 1 UUK claims that without the support requested, some institutions will fail and others will be forced to reduce provision. The institutions most likely to fail are those with higher levels of external funding, lower levels of cash reserves, and a higher proportion of BAME students. That is, those institutions that were in a more fragile financial state before the pandemic. Competition for students will 'be destabilising, creating pressure to switch from their chosen institution'. Without help from the government 'access to higher education would be decreased, disadvantaged students would be worse off and less able to select a university that best suits their learning needs'. 2 UUK also argues that research and STEM programmes will be particularly hard hit because they are cross-subsidised by the higher fees paid by foreign students and the global position of UK higher education will suffer as a consequence. In exchange for additional government support, the universities have promised to cut costs, accept restructuring, and rein in predatory admissions policies. Although higher education (HE) provides social and cultural benefits to students and society, UUK has emphasised its economic benefit and so here we focus on economic arguments.
The Labour Party is sympathetic to UUK and has called on the government to guarantee that no university be allowed to go bankrupt, because this would cost jobs, reduce social mobility, and limit the training of key health sector staff. 3 Although the Department for Education supports the request from UUK, the Treasury does not. The Treasury has agreed only to short-term stabilisation by advancing the sector up to £2.6 billion in tuition fees and £100 million in research money. This is not new money: it is an advance. There will be no guaranteed bailout of institutions and further emergency help will be on a case by case basis, and only as a last resort. The Treasury will continue to monitor the financial situation of HE, but at this stage only short-term assistance has been offered, with no safety net. HE is just one sector among many requesting additional assistance.
UUK's statement raises many questions about British universities and the impact of Covid-19. Is HE important enough that it deserves special attention? Is the financial loss to HE likely to be as great as UUK implies? Which institutions are most severely affected? Will they fail? Should they be saved? Will students be unable to select a university that best suits their learning needs? Is the Treasury's measured support appropriate?
How important is HE?
Is HE important to individuals and society? The Institute for Fiscal Studies (IFS) has shown that 80 per cent of students gain financially from attending university (85 per cent of women and 75 per cent of men). 4 That is, their earnings are higher, on average, than if they had not gone to university. But one in five students would have been better off not attending university. For some subjects and universities, the figures are worse. For example, on average-and controlling for academic preparation and family background-for women the net discounted lifetime returns from studying creative arts is zero and for men it is negative. On average, attending university pays off financially for most students in many subjects and most institutions, but there is considerable variation around that average. The IFS also estimated the economic benefit to the nation. Although HE is expensive to the taxpayer, the average financial gain to the nation is £110,000 per male graduate and £30,000 per female graduate.
Thus, there is solid evidence that HE is a good investment for most students and the government, but there is also solid evidence of courses and institutions for which there is little or no economic return. And this was before Covid-19. Although programmes with no economic return may be restored, Covid-19 should not be an excuse for retaining these zombie courses and institutions. Restructuring of HE has been offered by UUK and the Treasury has indicated that it expects restructuring to take place. It is time to act on recommendations in the Augar Report on how one can justify the continued support of programmes or institutions where the graduate ends up financially worse off. 5 Disadvantaged and BAME students are not made better off by enrolling in courses that have no economic return, nor is social mobility furthered by such action. I have argued in this journal that higher education is not exactly like other businesses, but this does not necessarily mean that it is exempt from all policies that are applied to other businesses. 6
Will the financial loss be as great as UUK claims?
While universities will suffer a financial loss from Covid-19 in 2019-20 from a reduction in accommodation, catering, and conference income, it is unclear how large a loss they might suffer in 2020-21. Although UUK states that the universities could lose £6.2 billion in revenues if all foreign students stay away, that is highly unlikely. A study by London Economics for the University and College Union estimates a 16 per cent decline in domestic students costing £612 million, and a 47 per cent decline in EU students causing a loss of £350 million in 2020-21. 7 Not all institutions were estimated to be affected equally by Covid-19. Another estimate, from Times Higher Education, estimated a loss of around £3 billion if enrolment drops by 40 per cent, about £1.7 billion for a 20 per cent drop, and around £1 billion for a 10 per cent drop. 8 So, estimates to date suggest a potential fall in tuition fee income of between £2 billion and £3 billion.
Some universities are primarily teaching institutions, heavily reliant on student fees, and some are diversified research and teaching institutions. Some are highly selective and have an international reputation, some are not. The London Economics study emphasised the potential loss of teaching income not total income. If universities are to be compensated for the impact of Covid-19, should they be compensated on the basis of the percentage of student fee income lost or the percentage of total income lost? The ordering of recipients for support would be very different depending upon the criterion used.
The estimates of reduced student enrolment in the London Economics study are driven by lower enrolment of foreign students, but also part-time students and graduate students. London Economics did not find any association between graduate enrolment and declines in economic activity, but decided-given the likely depth of the recession-to assume there would be an impact on graduate enrolment. The estimated decline in foreign student enrolment is based on a finding that a 1 per cent decline in global GDP was associated with a 0.485 per cent decline in enrolment of international students in UK HE. This assumption is particularly important for more highly ranked universities which attract many international graduate students. It is critical to understand how sensitive the London Economics estimate is to the underlying assumptions. Unfortunately, we do not have this information.
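Since the sensitivity of the London Economics figure to its elasticity assumption is not published, a reader can only gauge it with a back-of-the-envelope calculation like the one below. The only input taken from the text is the 0.485 elasticity; the GDP-fall scenarios and the baseline fee income are illustrative.

```python
# Rough sensitivity check: projected international enrolment decline (and the
# associated fee-income loss) as a function of the assumed GDP elasticity.
baseline_international_fee_income_bn = 7.0   # GBP billions, hypothetical
gdp_fall_scenarios = [5.0, 10.0, 15.0]       # % fall in global GDP, hypothetical

for elasticity in (0.3, 0.485, 0.7):         # 0.485 is the value quoted in the text
    for gdp_fall in gdp_fall_scenarios:
        enrol_fall_pct = elasticity * gdp_fall
        lost_income_bn = baseline_international_fee_income_bn * enrol_fall_pct / 100
        print(f"elasticity {elasticity:.3f}, GDP -{gdp_fall:4.1f}%: "
              f"enrolment -{enrol_fall_pct:4.1f}%, fee income -£{lost_income_bn:.2f}bn")
```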
There are also other factors that could offset the predicted decline in enrolment and funding. Demographic change will help HE because the number of eighteen year-olds is forecast to increase from 600,000 in 2020 to 760,000 in 2030. For example, postgraduate work rights are a major attraction for international students and an extension of such rights, as advocated by former Minister for Universities Chris Skidmore, could help attract more students to the UK. Another factor that is likely to affect international enrolment is how well each receiving country is seen to handle the pandemic (the 'fear factor'). At present, the US and UK are lagging behind Canada, Australia, and New Zealand. Thus, there is enormous uncertainty about the financial impact of Covid-19. It is likely to be far less than that implied by UUK and there are reasons to believe that the London Economics estimate is too high. Given this significant uncertainty, it is no surprise that the Treasury was not forthcoming with additional support at this time. Future support is likely to be contingent on the sector's actual financial position and what structural reforms it agrees to make.
International student numbers have received by far the greatest attention, but it is also important to consider the enrolment of UK-domiciled students. Will it fall? Nick Hillman, head of the Higher Education Policy Institute, and David Willetts, former Minister for Universities, both argue that domestic enrolment will not fall. As Hillman noted, 'recessions tend to mean people want more education because the alternatives (underemployment or unemployment) are worse and having more skills can protect you against economic chill winds'. 9 This view is supported by research in the US and the UK. In the US an enrolment surge occurred during and immediately after the Great Recession, and a recent analysis published by the Brookings Institution found that a one percentage point increase in the US unemployment rate is associated with a 1.6 percentage point increase in university enrolment. 10 Damon Clark estimated 'youth unemployment effects that are positive, statistically significant, and large in magnitude - at least twice as large as previous estimates'. 11 Clark's study is important because it encompasses earlier recessions (those of 1980 to 1983 and that of the early 1990s), when employment fell by more than in the Great Recession, and it focusses on the effects of local labour markets for youth. When job prospects are poor in the local labour market, students enter or stay on in higher education. Thus, London Economics' assumption of a small increase in enrolment of UK-domiciled students is probably too pessimistic and the agreed caps on enrolment will not help young people when the labour market for them is likely to be very poor.
While Covid-19 has clearly shown the financial dependence of many UK universities on international-especially Chinesestudents, the British Council had already forecast declines in Chinese students studying in the UK because of increasing quality of Chinese institutions, slower growth in household incomes, and a declining number of university-age people. Care must be taken not to compensate for revenue losses that were to occur anyway. This could be an opportunity to adjust to possible future declines in Chinese students.
Who should be protected?
If the government decides to offer further support to HE but it is insufficient to cover all institutions, who should be first in line? Is it wise to protect lower ranked institutions at a cost to higher ranked institutions? The London Economics study uses the categories established by a study by Vicki Boliver, who asked whether it was possible to identify distinctive clusters of higher and lower status UK universities. She found that Oxford and Cambridge stand out among the 'old universities' and form an 'elite tier' based upon research activity, economic resources, academic selectivity, and social mix, but score much more modestly on teaching quality. 12 The Russell Group universities cluster with the majority of the old universities to form a middle status tier. A quarter of the new post-1992 universities form a distinct bottom tier and the rest a third tier. Expansion of higher education, restructuring, marketisation, internationalisation, and different educational policies across countries within the UK, have not altered the hierarchy of universities, because this hierarchy is deeply embedded in social structures and wider processes of social selection and social reproduction. And the same hierarchy is deeply embedded in the minds of foreign students. One could argue that any support offered to the HE sector because of Covid-19 should be concentrated on the higher status institutions, for that is where the international students are concentrated, where they cross-subsidise research, and it is these institutions upon which the UK's reputation for excellence is based. However, these institutions would generally suffer lower percentage falls in total income (although not necessarily in discretionary income). Institutions that primarily serve UK-domiciled students should have relatively small losses and, if enrolment increases, may actually be in a stronger financial position than they expected to be.
However, UUK wants to protect all institutions, at least in the short term. The Treasury is unlikely to agree. Saving institutions that were failing before Covid-19 is not in the national interest, nor in the interest of students. What would be in the interest of students is courses and institutions that suit their needs, where those needs often consist of getting a good job at the end of the course. Those courses and institutions do not all need to be pale imitations of the old universities. Some universities are moving towards serving regional markets and providing applied courses. This should be encouraged, as should asking the question: 'do all university courses need to be three years long?' And, 'are there students who would benefit from further education instead of university?' This may be seen as elitist or going back to the pre-1992 HE system, but it may be in the best interest of students and the nation.
Will enrolment caps help students?
Without enrolment caps, stronger institutions will attempt to replace foreign students by recruiting more domestic students. To do so, universities would enrol students who would have gone to a lower ranked institution in the absence of Covid-19. Such 'poaching' reduces the financial impact on more highly ranked universities, but it increases the losses of lower ranked institutions unless total enrolment increases. One can see that from an institutional perspective, capping enrolment is a cost to more highly ranked institutions, but a potential benefit to lower ranked institutions. But from a student's perspective, it takes away the possibility of attending a more highly ranked institution and, for some, the possibility of attending HE at all. Given that lifetime earnings are positively related to the quality of university attended, capping enrolment by institution provides support to lower ranked institutions at the expense of more highly ranked institutions and students.
The most disadvantaged students may benefit most from a drop in international students and may actually be able to attend a university that better suits their needs than was likely pre-Covid-19. If universities can admit more UK-domiciled students, about 1,000 high-attaining disadvantaged students could be placed at better universities than if the currently agreed enrolment caps are put in place. If universities are able to replace overseas students with UK-domiciled students, disadvantaged students may benefit by being admitted to more selective universities. Imposing enrolment caps, as proposed by UUK and agreed to by the Treasury, will reduce upward moves, particularly for the most disadvantaged students.
The HE sector in the UK is clearly of significant economic importance. On average, there are significant private and public returns from HE. UUK has predicted the likelihood of very significant losses to the sector in income from international students as a consequence of Covid-19. However, losses will probably be substantially lower than the potential losses suggested by UUK. But the magnitude of losses is very uncertain. The losses from a fall in international students are concentrated on the most highly ranked universities, but because many of them are diversified, the percentage impact on total income is much less than the percentage impact on student fee revenues. For this reason, there may be pressure on wealthier institutions, as there has been in the US, not to accept government money. The UUK's proposal is to restrict undergraduate enrolment per university to stop institutions poaching students from universities less selective than themselves. This proposal is not in the interest of the most highly regarded universities or that of students. Some rationalisation of the sector should be the price of any further government support and some suggestions for such rationalisation are contained in the Augar Report. Covid-19 has shown the value of university research to the health of the nation, but it has also shown the fragility of that research because it is cross-subsidised by tuition fees from international students. Now is the time to reconsider how university research is funded. The HE sector is of clear importance to the nation, but so too are many other sectors. Treasury support is likely to be selective and limited, so HE needs to outline why the sector is more deserving of additional government support than other sectors. And HE needs to be clear about how it will use any funds from the government and what it will give in return. | 2020-07-02T10:40:28.986Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "50000334d7c148e8767384961b3a3416e5b48050",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7361847",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "f29d59c90e38ee4e21ca0b635d0f584a8f961cde",
"s2fieldsofstudy": [
"Education",
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
221131976 | pes2o/s2orc | v3-fos-license | Making the Most of a Crisis: A Proposal for Network-Based Palliative Radiation Therapy to Reduce Travel Toxicity
A multipronged model is proposed to improve the delivery of palliative radiotherapy by increasing access to care and reducing travel burden for patients.
Introduction
During the Great Financial Crisis, former Obama Chief of Staff Rahm Emanuel famously stated "you never want a serious crisis to go to waste." 1 The coronavirus disease 2019 pandemic, in addition to upheaving societal norms, has pushed radiation oncologists to reconsider the utilization of more efficient treatment regimens. 2,3 Colleagues further defined a 3-tiered system to determine which patients receiving palliative radiation therapy (PRT) necessitated urgent versus delayed care. 4 Though contentious, 5 such frameworks are useful to constrained departments asking, "When to treat?" Yet, the question of "where to treat?" may actually be of more importance to PRT. As travel distance is a known barrier to RT, 6 the current pandemic provides additional impetus to improve patient-centered care by coordinating access to PRT closer to home or in less endemic regions.
Delays in care may lead to worse outcomes 7 and could be mitigated by establishing an accredited referral network of community practice physicians providing high-quality PRT. In doing so, patients whose PRT would be delayed at urban centers owing to resource constraints or exposure risks may receive expeditious treatment at local facilities with trusted providers. This network would not only minimize travel burden in a patient population with limited life expectancy, but may reduce costs, 8 lessen financial toxicity, 9 and improve quality of life. 10 We thus propose a multipronged restructuring of PRT delivery that considers travel and exposure burdens. This includes the establishment of a national network of PRT providers, implementation of travel burden assessment, and the allowance for PRT on research protocols at any facility (private practice or academic). Sources of support: No financial support was provided for the conduct of research or preparation of this manuscript.
Disclosures: The authors have no conflicts of interest to report.
research protocols at any facility (private practice or academic). The development of an established provider network would facilitate efficient referrals to local facilities offering PRT of comparable quality with less burden on our most vulnerable patients.
Referral Network
The network providers would adhere to established PRT principles, including minimizing travel burden (ie, same day set-up and treatment), offering low-complexity treatments (2-dimensional or 3-dimensional techniques), prescribing single/hypofractionated regimens when appropriate, and offering supportive therapies to maximize quality-of-life.
The initial network would be comprised of facilities accredited through the American Society for Radiation Oncology Accreditation Program for Excellence, the American College of Radiation Oncology, or the American College of Radiology, which evaluate practice consistency with evidence-based guidelines and consensus statements. As such practices are often community-based, patients currently traveling great distances to receive PRT with their academic provider may benefit from receiving similar care locally.
Optimal use of this network would be facilitated by routine implementation of travel burden assessment by academic/urban centers. Additional barriers can be removed if research protocols would allow for PRT to be delivered at any accredited facility, particularly for studies where the primary question is not radiation related.
Conclusions
We propose restructuring our PRT delivery model through the development of a robust network of accredited providers to improve access for patients and reduce travel burden. Although the coronavirus disease 2019 pandemic has spurred rapid practice changes surrounding patient prioritization and treatment decisions, the lessons from this global crisis can be a platform upon which sustainable changes can be implemented to improve access to, cost, and quality of PRT. | 2020-08-16T13:06:11.174Z | 2020-08-15T00:00:00.000 | {
"year": 2020,
"sha1": "5e18e10f7a1fdee5e25cb19001c5d3c7fd3d3e72",
"oa_license": "CCBYNCND",
"oa_url": "http://www.advancesradonc.org/article/S2452109420302049/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43a443e36c358c4cd6d98e598989a2b7b8c15427",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213515203 | pes2o/s2orc | v3-fos-license | Selection and application of aptamers with high-affinity and high-specificity against dinophysistoxin-1
Diarrhetic shellfish toxins (DSTs) are marine toxins distributed widely around the world and pose a major threat to human health. Dinophysistoxin-1 (DTX-1) is the most potently toxic of the DSTs. However, the current detection methods have ethical problems and technical defects, and further research is needed to develop a more suitable alternative for the regulatory monitoring system. In this work, we successfully obtained, for the first time, an aptamer that binds DTX-1 with high affinity and specificity. After optimization, a core sequence of the aptamer with an improved KD of 64 nM was obtained, and the binding mode of the core sequence with DTX-1 was explored. Based on this aptamer, we developed a biolayer interferometry (BLI) biosensor platform for DTX-1 detection. The aptasensor exhibited a broad detection range from 40 to 600 nM DTX-1 (linear range from 80 to 200 nM), with a detection limit as low as 614 pM. Moreover, the aptasensor showed good reproducibility and stability, indicating that this novel aptasensor has broad development prospects for the sensitive and rapid detection of DTX-1.
Introduction
With the first recognition of diarrhetic shellfish poisoning (DSP) in Japan, DSP has been considered a global public health problem. It is a gastrointestinal illness caused by the consumption of shellfish contaminated with diarrhetic shellfish toxins (DSTs). 1 Traditionally, DSP occurred mainly in a few coastal areas, such as Western Europe, Eastern Japan and Latin America. 2,3 However, recent studies indicate that DSP is emerging in more regions of the world. [4][5][6] The emergence of DSP also leads to downturns in the shellfish industry of unpredictable duration. 7 Not only does it have a serious economic impact, but it also poses a great threat to human health. The major toxins causing severe diarrhea are okadaic acid (OA) and its analogues, dinophysistoxin-1 (DTX-1) and dinophysistoxin-2 (DTX-2), which are produced by marine dinoflagellates. These toxins have been shown to be potent inhibitors of several phosphatases, especially PP1 and PP2A. 8,9 They are possibly carcinogenic, 10 with symptoms such as diarrhea, nausea, and vomiting. 11 They can also act on other protein phosphatases, including protein phosphatase 4 (PP4), protein phosphatase 5 (PP5) and protein phosphatase 2B (PP2B). 12,13 OA was studied thoroughly at an early stage. In recent years, DTX-1 was found to be more toxic, 14 especially with respect to oral toxicity. 15 DTX-1 can destroy the integrity of epithelial cells and plays an important role in apoptosis, 16 inducing the same cytotoxic effects at 1/5 the concentration of OA. 17 Moreover, the pro-inflammatory and carcinogenic effects of DTX-1 on macrophages are ten times greater than those of OA. 16 Studies have shown that the deposition rate of DTX-1 in the shellfish digestive gland is higher than that of OA. 18 However, DTX-1 cannot yet be detected effectively. Although DSP was detected by the mouse bioassay (MBA) in the early 1980s, this assay required 24-48 hours of observation of experimental animals. 19 It also lacks specificity and sensitivity. Since 1997, LC-MS has replaced MBA. 20 However, this method requires the Toxicity Equivalent Factor (TEF) of each compound to estimate the sample's total toxicity, in order to determine whether the regulatory limit has been exceeded, 19,21 which is time-consuming and expensive. Another method is the enzyme-linked immunosorbent assay (ELISA), but it suffers from severe cross-reactivity because the antibodies are mainly directed against inhibitors of PP. 22 In summary, there is still a lack of a sensitive, reliable, real-time and inexpensive monitoring system for detecting DTX-1. Biosensors are ideal detection alternatives, which can avoid most of the shortcomings of the other methods. The development of a stable, highly sensitive, inexpensive probe is an important basis for the preparation of biosensors.
Aptamers are functional single-stranded DNA (ssDNA) or RNA sequences that fold into complex tertiary structures and bind to specific molecular targets in a manner similar to that of antibodies. 23,24 Thus, aptamers are called "chemical antibodies". 25 The vast majority of aptamers are obtained from the Systematic Evolution of Ligands by Exponential Enrichment (SELEX) method in vitro. 26 Aptamers have a wide range of targets, including metal ions, compounds, proteins, cells, and even whole microorganisms, and can be easily modified with various functional tags to allow immobilization onto many surfaces. 27 As potential replacements for antibodies, aptamers have the advantages of high affinity, no batch-to-batch variation, chemical stability, low immunogenicity, and permissible modification. 28,29 These properties have led to significant developments in the fields of environmental screening, 30 therapeutics, 31 and drug delivery. 32 However, one great potential of aptamers is that these nucleic acid sequences can be at the heart of emerging devices such as sensors and actuators. 27 In recent years, a number of studies have shown that aptamers can be used as probes in combination with other techniques (e.g. surface-enhanced Raman spectroscopy, quartz crystal microbalance, electrochemical methods, colorimetry, fluorescent modification and biolayer interferometry, etc.) to produce highly efficient aptamer-based sensors. 23,33-38 Among them, biolayer interferometry (BLI) technology has a unique advantage as a new type of optical sensor: it is label-free, highly specific, real-time and inexpensive. 38,39 This makes the method a suitable choice for on-site rapid detection and analysis of the marine biotoxin DTX-1.
Materials and reagents
All nucleic acid sequences used in this work are listed in Table S1. † Bio Gel P-2 was procured from Bio-Rad (Hercules, USA). The BLI sensor chips were obtained from Forte Bio (Shanghai, China). DNA Urea-PAGE buffer, DNA PAGE buffer and binding buffer (pH 7.5, 50 mM Tris-HCl, 150 mM NaCl, 2 mM MgCl2) were procured from Tiandz (Beijing, China). All solutions were prepared using Milli-Q ultrapure water.
Preparation of DTX-1 magnetic beads
(1) Pre-treatment of magnetic beads: the magnetic beads were mixed by end-over-end rotation for 30 minutes. 600 µL of magnetic beads were taken and washed 4 times with 25 mM MES buffer (pH 5.0). Freshly prepared 500 µL EDC solution (50 mg mL−1) and 500 µL NHS solution (50 mg mL−1) were added to the magnetic beads and incubated for 30 min with rotation. (2) Incubation of magnetic beads with DTX-1: 150 µL DTX-1 solution (100 µg mL−1) was added to the treated magnetic beads and incubated at room temperature for 2 h. After the incubation, the beads were washed twice with 25 mM MES buffer and then washed 3 times with DTX-1 binding buffer (pH 7.5, 50 mM Tris-HCl, 150 mM NaCl, 2 mM MgCl2). (3) Preparation of negative magnetic beads: 200 µL of magnetic beads were pre-treated as in (1). They were then washed twice with 25 mM MES buffer and 3 times with DTX-1 screening buffer (pH 7.5, 50 mM Tris-HCl, 150 mM NaCl, 2 mM MgCl2). (4) Magnetic bead blocking: freshly prepared EDC solution, NHS solution and 0.5 M oxalic acid solution were mixed at a ratio of 1 : 1 : 1. 2500 µL of this EDC/NHS/oxalic acid mixture was added to the 600 µL of positive-selection magnetic beads; 750 µL of the mixture was added to the 200 µL of counter-selection magnetic beads. The two sets of magnetic beads were incubated for 30 min at room temperature, then each was washed 3 times with DTX-1 binding buffer. 600 µL and 200 µL of DTX-1 binding buffer were added, respectively, and the beads were stored at 4 °C until use.
In vitro selection of the DTX-1 aptamer
This selection uses magnetic-bead SELEX; the specific selection strategy is shown in Fig. 1. An appropriate amount of ssDNA (as given in the selection scheme) was subjected to denaturation and renaturation in a microcentrifuge tube (95 °C water bath for 10 min, ice bath for 5 min, room temperature for 5 min). The prepared magnetic beads were mixed with the ssDNA, the system was brought to 600 µL with binding buffer and incubated at room temperature. After the incubation, the beads were rinsed with binding buffer several times until no ssDNA could be detected in the supernatant. 80 µL of ddH2O was added to the magnetic beads carrying the specifically bound ssDNA, and the mixture was centrifuged at 95 °C for 20 min. The tube was placed on a magnetic rack for 4 min and the ssDNA in the supernatant was recovered; this was repeated 3 times. The recovered ssDNA was quantified using a Qubit® 2.0, and the recovery efficiency was calculated. The recovered library was amplified by PCR in a 50 µL reaction system, 40 tubes each time. The specific parameters were set as follows: 95 °C for 5 min, followed by 20 cycles of 95 °C for 30 s, 54 °C for 45 s and 72 °C for 30 s, and a final elongation step of 5 min at 72 °C. The dsDNA obtained by PCR was separated by urea-denaturing polyacrylamide gel electrophoresis; because the two primers differ in length, the two strands are easily separated during electrophoresis. The ssDNA was then recovered from the band excised from the polyacrylamide (PAGE) gel.
The preparation of ssDNA
The ssDNA obtained from the gel was purified and recovered. All procedures strictly followed the protocol of the gel recovery kit. The eluates from steps (6) and (7) were combined, and the ssDNA was then quantified and used in the next round of selection.
Cloning and sequencing of selected DNA
When the recovery rate no longer rose, the selection was considered to have reached a plateau and was stopped. The last round of ssDNA was taken for PCR amplification using normal primers. The amplified product was purified using a gel recovery kit and the dsDNA was sent to Sangon Biotech Co. Ltd (Shanghai, China) for cloning and sequencing.
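As a minimal illustration of the stopping criterion described above (a sketch added for clarity, not part of the authors' protocol), the per-round recovery efficiency can be tracked and the selection halted once it plateaus; the function names, threshold and recovery values below are hypothetical.

    # Hypothetical sketch of the SELEX stopping rule: stop once the ssDNA recovery
    # rate no longer rises appreciably between rounds.

    def recovery_efficiency(recovered_ng: float, input_ng: float) -> float:
        """Fraction of the input ssDNA recovered from the DTX-1 beads (Qubit readings)."""
        return recovered_ng / input_ng

    def selection_plateaued(recoveries: list, window: int = 3, tol: float = 0.02) -> bool:
        """True if recovery changed by less than `tol` over the last `window` rounds."""
        if len(recoveries) < window:
            return False
        recent = recoveries[-window:]
        return max(recent) - min(recent) < tol

    # Illustrative recovery fractions for 12 rounds (made-up numbers):
    recoveries = [0.01, 0.02, 0.05, 0.08, 0.12, 0.15, 0.18, 0.21, 0.24, 0.25, 0.25, 0.26]
    print(selection_plateaued(recoveries))  # True -> stop, then clone and sequence the final pool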
Determination of affinity and specificity of aptamer by BLI
The affinity and specificity of aptamer N59a were determined by BLI using an OctetRED 96 system (ForteBio, Shanghai). The operating principle and analysis procedure of BLI have been described previously. 40 The assay procedure covered five steps: baseline (2 min), loading (2 min), washing (3 min), association (5 min), and dissociation (5 min). The association solution (i.e. baseline solution) and dissociation solution (i.e. baseline solution) were added, respectively, into the corresponding wells of a 96-well microtiter plate. Considering the impact of non-specific binding and buffer-induced interferometry spectrum shifts, the response data obtained from the reaction surface were normalized by subtracting the signal simultaneously acquired from the control surface using the Octet Data Analysis Software CFR Part 11 Version 6.x. The affinity parameter K D was obtained in this way; a 1 : 1 binding model with mass-transfer fitting was used to obtain the kinetic data.
Results and discussion
In vitro selection of DTX-1 aptamer
The process of MB-SELEX includes forward selection and reverse selection (Fig. 1A). The detailed process strictly follows Fig. 1B (Table S2 †). In order to improve the efficiency of SELEX, we adjusted the incubation time and the number of elution steps during the process, gradually increasing the selection pressure. 41 From the sixth round, we introduced the negative magnetic beads to eliminate non-specific adsorption of ssDNA by the magnetic beads. After 12 rounds of selection, the recovery rate of ssDNA reached a stable level, indicating that the selection end point had been reached. 24,26 The selection cycles were therefore stopped (Fig. 1C).
The final-round pool was sent for sequencing; 80 clones were picked at random and 80 sequences were obtained. We classified these sequences into 10 families (A-J) based on multiple sequence alignments (Fig. S3A †). Then, we chose the sequences with the highest homology or the minimum free energy, using the mfold Web Server (http://unafold.rna.albany.edu/?q=mfold/DNA-Folding-Form) and the guide trees (Fig. S3B †), for further binding affinity studies with DTX-1, as shown in Table 1. Of the ten selected sequences, seven showed binding affinities for DTX-1 ranging from 0.17-1890 µM, 42 while the other three showed no binding.
Optimization of aptamer N59
Empirically, the sequence with the highest affinity is chosen for further research. 23 The results indicated that N59 exhibited the highest affinity to DTX-1, 0.17 µM, so N59 was chosen. First, we truncated the immobilized primer regions of N59 and retained the random sequence (N59a) to assess the influence of the primer regions on the aptamer. Considering that the primer sequences might also participate in the binding of the aptamer to the target, we predicted the secondary structure of N59 using the mfold Web Server, as shown in Fig. S4a. † At the present stage, special secondary structures such as stem-loops, bulges, pseudoknots and three-way helices are thought to be mainly responsible for binding affinity. 43 Based on this, we further removed stem-loop A or B of N59, guided by the secondary structure, to obtain variants N59b and N59c.
Upon examination, we found that when the immobilized primer regions were truncated, the affinity of aptamer N59a was significantly improved. 44 This was not the case for N59b or N59c, whose affinities did not differ significantly from that of N59 (Table S3 †). This result indicated that the primer regions neither took part in the binding of the aptamer to the target nor hindered the binding. However, stem-loops A and B did not seem to have a crucial influence on binding either. 39 Secondary structure analysis was also performed on N59a with the mfold Web Server, and the result is shown in Fig. S4b. † Based on this result, we truncated stem-loop C and obtained aptamer N59a1, which lost its binding ability to DTX-1. We believe that stem-loop C in N59a might form a unique spatial structure that is the key part of the aptamer binding to DTX-1, while the remaining bases improve the affinity between DTX-1 and the aptamer. 45
Identification of affinity and specificity of aptamer N59a
We used GTX, STX, NOD-R, PTX and OA (1 µM) as non-specific targets to analyse the association and dissociation of DTX-1 (1 µM) with the aptamer (Fig. 2). GTX and STX are alkaloids. 46,47 NOD-R is a peptide toxoid. 48 PTX and DTX-1 are polyethers. 49 OA is the homologue of DTX-1 and also belongs to the DSTs. 50 In addition, a random sequence acted as an aptamer control, and blank samples containing running buffer were used as a reference.
The results revealed that DTX-1 interacted with N59a with a K ON (1/Ms) value of 5.39 × 10^7, a K DIS (1/s) value of 3.45 × 10^0, and a K D (M) value of 6.40 × 10^-8. Furthermore, the random sequence showed no binding to DTX-1, indicating that DTX-1 bound specifically to N59a. At the same time, GTX, STX, NOD-R and PTX did not cause any response. However, OA interacted with N59a with a K D (M) value of 3.05 × 10^-5, a binding affinity much lower than that between DTX-1 and N59a. This is because the structures of OA and DTX-1 are the same except for one methyl group. 50 On the other hand, we did not introduce OA as a counter-target during the selection. Therefore, there must be some cross-reaction between DTX-1 and OA, but such a large difference in affinity also indicates that the association of the N59a aptamer with DTX-1 is specific.
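As a quick consistency check (added for illustration, using only the constants quoted above), the reported rate constants reproduce the quoted K D for a 1 : 1 binding model and give the fold-selectivity over OA:

    # Consistency check of the reported BLI kinetic constants (1:1 binding model).
    k_on = 5.39e7    # association rate constant, 1/(M s)
    k_dis = 3.45     # dissociation rate constant, 1/s

    K_D = k_dis / k_on
    print(f"K_D = {K_D:.2e} M")                              # ~6.40e-08 M, i.e. ~64 nM

    K_D_OA = 3.05e-5                                          # reported K_D of N59a for okadaic acid
    print(f"selectivity over OA = {K_D_OA / K_D:.0f}-fold")   # ~477-fold weaker binding of OA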
Research on the core structure of aptamer N59a
Examining the aptamers described above from the selection, we noticed that all of them contained four pairs of cytosine bases, indicating that N59a could form a unique spatial structure based on this unusual arrangement. 51 G-quadruplex scoring of N59af (the reverse complement of N59) revealed that N59af had the potential to form several kinds of G-quadruplex structures, and that N59a itself could, in theory, form an i-motif fold. 52,53 To verify this, we disrupted the arrangement of the bases: N59a was truncated to obtain N59a2, N59a3, N59a4, N59a5, N59a6 and N59a7 (Table S4 †). Among them, N59a5, N59a6 and N59a7 had completely lost the potential to form the i-motif fold. It turned out that, of these six aptamers, only N59a5, N59a6 and N59a7 lacked the ability to bind DTX-1, in line with our inference. Therefore, we believe that the four pairs of cytosine bases in the sequence may be the basic skeleton for formation of the i-motif fold, which is also the key part of the aptamer for binding DTX-1. At the same time, the affinity of N59a was much higher than that of N59c, indicating that the flanking bases help induce binding of the aptamer to DTX-1.
Analysis of the conformation of N59a
We investigated the conformational change of aptamer N59a in selection buffer (pH 7.5) by circular dichroism (APL Chirascan). As shown in Fig. 3A, the CD spectrum of N59a without DTX-1 is similar to the characteristic CD spectrum of the i-motif structure under neutral conditions, 54 which may be because N59a forms an i-motif structure in this buffer. After the addition of DTX-1, the peak value of the CD spectrum changed significantly in the statistical sense. The reason for this change is that the presence of DTX-1 induces a structural change in N59a, which also demonstrates that N59a can bind DTX-1 specifically. We propose a speculative model of the N59a-DTX-1 complex based on the results of molecular calculation and simulation (Fig. 3B). 55
BLI aptasensor for DTX-1 detection
In view of the current deficiencies in DTX-1 detection, we established an optical BLI sensor whose key element is the aptamer N59a. BLI is an emerging sensor platform with great potential. 56 It monitors, in real time, the change in the surface properties of the sensor when the immobilized ligand binds the target molecule, using biolayer interferometry. 57 There have already been reports of BLI sensors used for small-molecule detection. 38,50 We performed a series of measurements with the newly developed BLI sensor in the concentration range of 40-600 nM to evaluate the feasibility and stability of the sensor (Fig. 4A). As the concentration of DTX-1 increases, the optical thickness and mass density of the biolayer surface gradually increase; the wavelength shift becomes more pronounced, eventually leading to an increase in response (Fig. 4A and B). The experiment was repeated several times for each sample. A calibration curve was obtained from the BLI response at 300 s as a function of DTX-1 concentration, fitting the curve to a sigmoidal four-parameter logistic equation: y = (R max - R min )/[1 + (x/EC 50 )^b] + R min . Here, R max and R min are the maximal and minimal responses, EC 50 is the DTX-1 concentration leading to 50% of the maximum response, and b is the correction factor. After repeated trials, we obtained the equation y = (0.3842 - 0.00728)/[1 + (x/89.66)^(-1.564)] + 0.00728, with a correlation coefficient R 2 of 0.9934 (Fig. 3B). Repeatability and specificity are essential for the aptasensor. In order to test the repeatability of the aptasensor, we performed repeated measurements on a 100 nM DTX-1 solution and collected the responses. The coefficient of variation (CV) was 2.59%, indicating that the sensor had good repeatability. We designed cross-reactivity experiments using 1 µM GTX, STX, NOD-R, PTX and OA. The results are shown in Fig. 3D: DTX-1 caused a response of 0.411 nm, OA caused a response of 0.16 nm, and the other toxin samples caused responses of less than 0.01 nm. Because the aptamer has some affinity for OA, OA inevitably causes a response. Meanwhile, the response to a toxin mixture (DTX-1, GTX, STX, NOD-R, PTX and OA, each at 1 µM) was 0.413 nm, indicating that the response was caused only by DTX-1. Even in the presence of OA, DTX-1 competitively inhibits the association of the aptamer with OA, so the sensor can specifically detect DTX-1 even in an environment containing different kinds of toxins. The marine environment is complex and variable, so it is crucial to detect DTX-1 accurately in a variety of toxin mixtures. 38
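For readers who wish to reproduce this kind of calibration fit, a minimal sketch using SciPy is given below; the concentration-response pairs are synthetic values generated from the published fit parameters purely for illustration and are not the authors' raw data.

    # Hypothetical sketch: fitting the four-parameter logistic calibration curve
    # y = (R_max - R_min)/(1 + (x/EC50)**b) + R_min with SciPy.
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, r_max, r_min, ec50, b):
        return (r_max - r_min) / (1.0 + (x / ec50) ** b) + r_min

    conc = np.array([40, 60, 80, 100, 150, 200, 300, 400, 600], dtype=float)  # nM DTX-1
    # Synthetic responses built from the reported parameters (0.3842, 0.00728, 89.66, -1.564):
    rng = np.random.default_rng(0)
    resp = four_pl(conc, 0.3842, 0.00728, 89.66, -1.564) + rng.normal(0.0, 0.003, conc.size)

    popt, _ = curve_fit(four_pl, conc, resp, p0=[0.4, 0.0, 90.0, -1.5])
    print("R_max, R_min, EC50, b =", np.round(popt, 4))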
Detection of DTX-1 in seawater
To assess the practical value of the newly developed BLI aptasensor, we investigated the feasibility of detecting different concentrations of DTX-1 in seawater samples. Seawater samples (pH 7.8) spiked with different concentrations of DTX-1 (80 nM, 100 nM, 200 nM) were measured. As shown in Table 2, recovery percentages of 100.5 to 106.9% were obtained. The response values were also in line with expectations, indicating that seawater does not significantly interfere with detection by the aptasensor.
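The spike-and-recovery calculation summarized in Table 2 is straightforward to script; the measured values below are hypothetical placeholders rather than the authors' data.

    # Hypothetical spike-and-recovery calculation for seawater samples.
    spiked_nM = [80.0, 100.0, 200.0]       # DTX-1 added to seawater
    measured_nM = [82.3, 106.9, 201.0]     # concentrations read back from the calibration curve (made up)

    for spiked, measured in zip(spiked_nM, measured_nM):
        print(f"spiked {spiked:.0f} nM -> recovery {100.0 * measured / spiked:.1f}%")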
In summary, we can conclude that the aptasensor has the potential for detection of DTX-1 in practical samples.
Comparison with existing biosensors for DSP detection
We compared the existing reported biosensors for DTX detection (Table 3) and found that three of the biosensors for detecting OA were cross-reactive with DTX-1. However, there have been no reports to date of biosensors with high specificity for DTX-1. Among the methods capable of detecting DTX-1, this aptasensor has a lower detection limit and a wider detection range, which meets the requirements of the national standard for DTX-1. Compared with the conventional detection methods, the aptasensor has unique merits. First of all, the nucleic acid and the probe used in the method are very stable and inexpensive, and can be conveniently transported and stored on a large scale. 27 Secondly, the method is efficient: results can be obtained within half an hour. Next, its fixed running program reduces labour costs, and the aptasensor can be reused after regeneration with washing buffer for approximately 180 s. Last but not least, the sensor has high specificity and repeatability. However, no technology is perfect. For this BLI aptasensor, the disadvantage is that the detection conditions are somewhat demanding, so some complex seawater samples may require pre-treatment, which calls for further research and improvement. Nevertheless, we have every reason to believe that this aptasensor has great potential for the detection of marine toxins. 25,58
Conclusions
In summary, this work is the first report of the successful selection, optimization and identification of DNA aptamers that bind with high affinity and specificity to DTX-1. Meanwhile, we truncated the DTX-1 aptamer and obtained the aptamer core sequence with an improved K D of 64 nM (R 2 = 0.9832, χ 2 = 0.0082). We have conducted a preliminary investigation into the complex structure formed by N59a and its mechanism of binding DTX-1; however, more evidence from further studies is needed.
We also constructed a BLI aptasensor for the detection of DTX-1 with a detection limit as low as 614 pM, which demonstrated a good linear response between 60 and 200 nM DTX-1. The high affinity and stability of the BLI aptasensor suggest that it may offer an alternative to traditional analytical methods for the rapid and sensitive detection of DTX-1.
Conflicts of interest
The authors have no conflicts to declare. | 2020-02-27T09:16:36.803Z | 2020-02-24T00:00:00.000 | {
"year": 2020,
"sha1": "ce0a7f0b9f6d8a68fbd92e295184ad599bb48983",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/c9ra10600f",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e882d90e3e34573732292415cbd8f7feeddd991",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
14497552 | pes2o/s2orc | v3-fos-license | Gauge equivalence in QCD: the Weyl and Coulomb gauges
The Weyl-gauge ($A_0^a=0)$ QCD Hamiltonian is unitarily transformed to a representation in which it is expressed entirely in terms of gauge-invariant quark and gluon fields. In a subspace of gauge-invariant states we have constructed that implement the non-Abelian Gauss's law, this unitarily transformed Weyl-gauge Hamiltonian can be further transformed and, under appropriate circumstances, can be identified with the QCD Hamiltonian in the Coulomb gauge. We demonstrate an isomorphism that materially facilitates the application of this Hamiltonian to a variety of physical processes, including the evaluation of $S$-matrix elements. This isomorphism relates the gauge-invariant representation of the Hamiltonian and the required set of gauge-invariant states to a Hamiltonian of the same functional form but dependent on ordinary unconstrained Weyl-gauge fields operating within a space of ``standard'' perturbative states. The fact that the gauge-invariant chromoelectric field is not hermitian has important implications for the functional form of the Hamiltonian finally obtained. When this nonhermiticity is taken into account, the ``extra'' vertices in Christ and Lee's Coulomb-gauge Hamiltonian are natural outgrowths of the formalism. When this nonhermiticity is neglected, the Hamiltonian used in the earlier work of Gribov and others results.
The Weyl-gauge (A a 0 = 0) QCD Hamiltonian is unitarily transformed to a representation in which it is expressed entirely in terms of gauge-invariant quark and gluon fields. In a subspace of gauge-invariant states we have constructed that implement the non-Abelian Gauss's law, this unitarily transformed Weyl-gauge Hamiltonian can be further transformed and, under appropriate circumstances, can be identified with the QCD Hamiltonian in the Coulomb gauge. We demonstrate an isomorphism that materially facilitates the application of this Hamiltonian to a variety of physical processes, including the evaluation of S-matrix elements. This isomorphism relates the gaugeinvariant representation of the Hamiltonian and the required set of gauge-invariant states to a Hamiltonian of the same functional form but dependent on ordinary unconstrained Weyl-gauge fields operating within a space of "standard" perturbative states. The fact that the gauge-invariant chromoelectric field is not hermitian has important implications for the functional form of the Hamiltonian finally obtained. When this nonhermiticity is taken into account, the "extra" vertices in Christ and Lee's Coulomb-gauge Hamiltonian are natural outgrowths of the formalism. When this nonhermiticity is neglected, the Hamiltonian used in the earlier work of Gribov and others results.
I. INTRODUCTION
In earlier work on QCD in the Weyl gauge (A a 0 = 0), we have constructed gauge-invariant operator-valued quark and gluon fields; [1] these include the gauge-invariant quark field ψ GI (r) = V C (r) ψ(r) and ψ † GI (r) = ψ † (r) where and where the λ a designate the Gell-Mann matrices. In these expressions X α (r) = [ ∂j ∂ 2 A α j (r)], so that ∂ i X α (r) is the i-th component of the longitudinal gauge field, [2] and Y α (r) is defined as Y α (r) = [ ∂j ∂ 2 A α j (r)]. A α j (r), which we refer to as the "resolvent field", is an operator-valued functional of the gauge field, and is represented in Refs. [1] and [3] as the solution of an integral equation. Constructing a gauge-invariant quark field by attaching V C (r) to the quark field ψ represents an extension, into the non-Abelian domain, of a method of creating gauge-invariant charged fields originated by Dirac for QED; [4] and, like Dirac's procedure, this non-Abelian construction is free of path-dependent integrals. An explicit demonstration that ψ GI (r) is invariant to non-Abelian gauge transformations has been given by implementing gauge transformations with the generator exp{−i dyĜ a (y)ω a (y)} whereĜ a is the non-Abelian "Gauss's law operator"Ĝ and ω a is a number-valued gauge function. With the use of this generator, under which ψ(r) → ψ ′ (r) = exp −iω α (r) λ α 2 ψ(r) (5) and it has been shown that V C (r) also gauge-transforms as V C (r) → V C (r) exp iω α (r) λ α 2 and V −1 C (r) → exp −iω α (r) λ α 2 V −1 C (r) (7) so that ψ GI (r) remains gauge-invariant. [1] The resolvent field A b j also has an important role in the gauge-invariant gauge field which can be shown to be the transverse field [1] Eq. (8), as well as the fact that A b GI i (r) andĜ c (x) commute, demonstrate that A b GI i (r) is gauge-invariant -more precisely, invariant to "small" gauge transformations. We can also define a gauge-invariant chromoelectric field E a GI i = −Π a GI i . [5] A natural definition of Π a GI i in this formulation is or, equivalently, where Π a i is the momentum conjugate to the gauge field A a i in the Weyl gauge. With the use of the commutator Ĝ c (x), R ab (y) = igf cbq R aq (y)δ(x − y), obtained in Ref. [5], it is easy to verify that Π a GI i (y) commutes withĜ c (x) and therefore also is gauge-invariant. In this work we will use a representation, which we discuss in Section II, in which the Weyl-gauge QCD Hamiltonian is expressed entirely in terms of gauge-invariant fields. Since the gauge-invariant gauge field is transverse, it is of interest to relate this gauge-invariant formulation to the Coulomb gauge. We address this question in Section II B. In Section II we also show that the Weyl-gauge QCD Hamiltonian in this representation -in which all operator-valued fields are gauge-invariant -must be applied to a set of gauge-invariant states that are solutions of the non-Abelian Gauss's law. In Section III, we address the problem that these states, which solve Gauss's law in QCD, are complicated constructions that are difficult to use. We demonstrate an isomorphism in this section between this Hamiltonian, which operates on gauge-invariant states, and a corresponding Hamiltonian that is a functional of gauge-dependent Weyl-gauge fields and that operates on a set of "standard" perturbative states. Also, in Section III, we relate these Hamiltonians to those obtained from Coulomb-gauge formulations of QCD. We discuss the implications of our work in Section IV. 
II. RELATION OF THE GAUGE-INVARIANT REPRESENTATION OF THE WEYL GAUGE TO THE COULOMB GAUGE
The QCD Hamiltonian in the Weyl gauge has been expressed in terms of gauge-invariant operator-valued fields. [5,6] In this work, extensive use has been made of the unitary equivalence ofĜ a -the "Gauss's law operator" given in Eq. (4), which imposes the non-Abelian Gauss's law -to the "pure glue" version of that operator as shown by where This unitary equivalence has been used to establish a new representation -the N representation in which G a represents the complete Gauss's law operatorĜ a , and ψ represents the gauge-invariant quark field because it commutes with G a . The N representation is unitarily equivalent to the C representation in whichĜ a and ψ GI designate the Gauss's law operator and the gauge-invariant spinor (quark) field respectively. In the N representation, j a 0 (r) = gψ † (r) λ a 2 ψ(r) and j a i (r) = gψ † (r)α i λ a 2 ψ(r) are the gauge-invariant quark color charge and quark color current densities respectively.
The Weyl-gauge QCD Hamiltonian can be transformed from its familiar C-representation form to the N representation, as shown byĤ This similarity transformation leaves the gauge field untransformed, but it transforms the quark field and the negative chromoelectric field as shown by [7] and The transformed, N -representation HamiltonianĤ can be expressed entirely in terms of gauge-invariant variables by making use of the identities The QCD Hamiltonian in the N representation, expressed in terms of gauge-invariant fields, iŝ where from which it follows that BecauseĤ GI is in the N representation, ψ and ψ † denote the gauge-invariant quark fields. D ab (x, y) is the inverse Faddeev-Popov operator, which we will discuss in Section II A, and J a 0 (GI) (r) is the gauge-invariant gluon color charge density, defined as AlthoughĤ GI is hermitian, Π a GI i is not, because, as can be seen from Eq.
The last part of the QCD Hamiltonian is where G a GI is the gauge-invariant Gauss's law operator [5] consists solely of gauge-invariant fields, every one of which commutes with G a , the Gauss's law operator in the N representation; G a GI is hermitian because R ab and G b commute. [5]. Eq. (20) resembles the QCD Hamiltonian in the Coulomb gauge. The only direct interaction between color currents j a i and the gauge field involve the transverse current only. The other interactions in which quarks participate are nonlocal, involve the quark color-charge density j a 0 , and are mediated by Green's functions that are the non-Abelian generalizations of the Abelian ∂ −2 . These interactions still involve the longitudinal component of the gauge-invariant chromoelectric field, but we will show how this can be eliminated in Section II B.
A. The inverse Faddeev-Popov operator.
The Faddeev-Popov operator in the gauge-invariant representation of the Weyl gauge is
∂ i and D i commute because A q GI i is transverse. The Faddeev-Popov operator has a formal inverse, which can be represented as the series where f αbh (n) represents the chain of SU(3) structure constants and where repeated superscripted indices are summed from 1→8; for n = 1; the chain reduces to f αbh 1 = f αbh ; and for n = 0, f αbh is a special case of a general form T α (n) (r)ϕ h (r) for an arbitrary ϕ h (r) given by with By expanding D bh (y, x) and combining terms of the same order in g, it can be observed that, as will be proven in Appendix A, where D ah = ∂ i δ ah + gf aγh A γ GI i and that where and and the ← symbol indicates that ∂ 2 and ∂ i differentiate to the left. In demonstrating Eqs. (30) and (31), it can be helpful to use the expanded form of the n-th order term of the inverse Faddeev-Popov operator series with and Integration by parts with respect to the z(i) and the identity f αah (n) = (−1) n f αha (n) demonstrate that It is apparent from Eqs. (34)-(36) that D bh (y, x) obeys the integral equation [8] D bh (y, which has these equations as an iterative solution. Eq. (26) enables us to express the commutator of the gauge-invariant gauge field and the negative gauge-invariant chromoelectric field as Eq. (39) and the commutator, obtained in Ref. [5], are in agreement with those given by Schwinger for the Coulomb gauge, [9] except for some differences in operator order. This fact suggests that the gauge-invariant Weyl-gauge field and the Coulomb-gauge field discussed by Schwinger are very similar. The differences in operator-order should be expected because, in Ref. [9], ambiguities in operator order in the Coulomb gauge are resolved by symmetrizing noncommuting operator-valued quantities so that Coulomb-gauge operators are kept hermitian. In our work in the gauge-invariant formulation of the Weyl gauge, ambiguities in operator order do not arise. When, because of a non-symmetric ordering of gauge fields and chromoelectric fields, some gauge-invariant operator-valued quantities turn out not to be hermitian, we leave them that way in order to avoid ad hoc changes in operator order. Eq. (40) leads to the commutation rule for the transverse parts of Π b GI j (y), [10] Eq. (39) leads to the commutator of the transverse part of Π b GI j (y) and A a GI i (x) (which is transverse) Eq. (39) can be shown to be consistent with ∂ i A a GI i = 0 because trivially. The Faddeev-Popov operator has a well-documented importance in non-Abelian gauge theories. Gribov has shown that gauge fields that have been gauge-fixed to have a vanishing divergence can differ from each other, [11,12] and that the Faddeev-Popov operator does not have a unique inverse. In that same work, Gribov makes the suggestion that the zeros of the Faddeev-Popov operator ∂ 2 δ ac + gf abc A b i ∂ i might so intensify the interaction between color charges that the effect could account for confinement. Subsequent authors have reiterated this suggestion, [13,14] and connections between the zeros of the Faddeev-Popov operator and color confinement have been discussed by other authors as well. [15,16,17] Eqs. (30) and (31) are based on a series representation of the operator-valued D bh (y, x); they are obtained by combining all terms of equal order in g and noting cancellations within each order. They do not, however, establish that D bh (y, x) is the unique inverse of the Faddeev-Popov operator. 
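For orientation, the operator just discussed and its formal series inverse can be summarized schematically as follows; color structure is suppressed in the series, and the precise index and sign conventions of the authors' Eq. (26) may differ:

\[
(\partial\cdot D)^{ac} \;=\; \partial^{2}\delta^{ac} + g f^{abc} A^{b}_{\mathrm{GI}\,i}\,\partial_{i},
\qquad
(\partial\cdot D)^{-1} \;=\; \frac{1}{\partial^{2}}\sum_{n=0}^{\infty}
\Big(-\,g\,f\,A_{\mathrm{GI}}\!\cdot\!\partial\,\frac{1}{\partial^{2}}\Big)^{\!n}.
\]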
Questions about uniqueness can readily be formulated about number-valued functions, but are very difficult to address for operator-valued quantities. Eqs. (30) and (31) establish that D bh (y, x) is an operator-valued inverse of ∂·D ah (y) (acting on the left) and of ⇐= ∂·D ha (x) (acting on the right) without addressing the question of its uniqueness. However, although A a GI i is an operator-valued quantity, the SU(2) versions of its constituents -the Weyl-gauge field A a i and the resolvent field A a i -can be, and often have been, represented by number-valued realizations as functions of spatial variables. Such realizations have been used extensively to study the topology of gauge fields. [11,12,18] When the integral equation for the resolvent field referred to in Section I is expressed in terms of a number-valued hedgehog representation, it can be transformed into a nonlinear differential equation that was shown to have multiple solutions. [3] Moreover, this nonlinear differential equation was shown to be very nearly identical in form to the one used by Gribov as a specific illustration of the fact that the Faddeev-Popov operator for the transverse SU(2) gauge field does not have a unique inverse. With this number-valued realization we were able to establish that the gauge-invariant field, which is transverse, has a Gribov ambiguity, [3] even though there are no Gribov copies of the gauge-dependent Weyl-gauge field. [19,20,21] In the context of the quantized theory -for example, inĤ GI -we will represent D bh (y, x) as the operator-valued series described in Eqs. (26) and (34). Since each term in this series has unambiguous and self-consistent commutation relations with all other operator-valued quantities, the series representation of D bh (y, x) is entirely satisfactory for determining the commutators ofĤ GI with other gauge-invariant operators -and therefore determining their time dependence -even though number-valued realizations of the gauge-invariant gauge field lead to nonlinear integral equations that do not have unique solutions.
It may seem surprising that, starting in the Weyl gauge and expressing the QCD Hamiltonian in that gauge in terms of gauge-invariant variables can lead to a form of the Hamiltonian that, while never actually having been gaugetransformed, has the same dynamical effect as the QCD Hamiltonian in the Coulomb gauge. But a remarkably similar state of affairs obtains in QED. When QED is formulated in the temporal gauge, and a unitary transformation is carried out that is the Abelian analog of the one that leads to the Hamiltonian described in Eqs. (20)-(24), the following result is obtained: [22,23] The QED Hamiltonian in the temporal gauge, unitarily transformed by exp i 1 ∂ 2 ∂ i A i (r)j 0 (r)dr -the Abelian analog of the transformation U C described in Eq. (15) -takes the form A T i designates the transverse Abelian gauge field -which, in Abelian theories, is also the gauge-invariant fieldand H g can be expressed as H g is the Abelian analog of H G , described in Eq. (24). The Abelian Gauss's law operator,Ĝ = ∂ i Π i + j 0 , transforms into ∂ i Π i in the representation in which ψ represents the gauge-invariant electron field; and the states that implement Gauss's law, which originally are selected by G(r) |Ψ(r) = 0, are given by ∂ i Π i (r)|Φ = 0 in the transformed representation (or, as is more appropriate for Abelian gauge theories, by G (+) (r) |Ψ = 0 and ∂ i Π (+) i (r)|Φ = 0 respectively, where (+) designates the positive-frequency parts of operators). [22,24] As can be seen,Ĥ QED also consists of two parts: the Hamiltonian for QED in the Coulomb gauge; and H g , which has no effect on the time evolution of states that implement Gauss's law, but which "remembers" the fact thatĤ QED is the transformed Weyl-gauge Hamiltonian by preserving the field equations for that gauge. An identical transformation applies to covariant-gauge QED, the sole difference being in the form of the H g produced by the transformation.
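Since Eqs. (45) and (46) are not reproduced here, it may help to recall the familiar Coulomb-gauge form to which the transformed temporal-gauge QED Hamiltonian reduces, apart from the extra H_g term; the expression below is schematic and uses our own conventions rather than necessarily the authors':

\[
\hat H_{\mathrm{QED}} \;=\; \int d\mathbf r\,\Big[\tfrac12\,\Pi^{T}_{i}\Pi^{T}_{i}
+ \tfrac14 F_{ij}F_{ij}
+ \psi^{\dagger}\big(\beta m - i\alpha_{i}\partial_{i}\big)\psi
- j_{i}A^{T}_{i}\Big]
\;+\; \tfrac12\int d\mathbf r\, d\mathbf r'\,
\frac{j_{0}(\mathbf r)\,j_{0}(\mathbf r')}{4\pi\,|\mathbf r-\mathbf r'|}
\;+\; H_{g}.
\]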
As we can see from Eqs. (20), (24), (45) and (46), and as will become even more evident in Eq. (60), QCD and QED are strikingly similar in the relation between their Hamiltonians in different gauges when these are represented in terms of gauge-invariant fields. Nevertheless, there are important differences between QED and QCD in the significance of this relationship. One such difference is that, in QED, we may safely use the original untransformed Weyl-gauge or covariant-gauge Hamiltonian in a space of perturbative states when evaluating S-matrix elements, even though these gauge-dependent perturbative states fail to implement Gauss's law. This means that, for perturbative calculations in QED, we can safely use the Lagrangian with L g = −A 0 G for the Weyl gauge and L g = −G∂ µ A µ + 1 2 (1 − γ)G 2 for the covariant gauge, without paying any attention to Gauss's law whatsoever. A corresponding practice in Weyl-gauge QCD is the use of the Weyl-gauge Hamiltonian H in a Fock space of perturbative states that are not annihilated by G a . There is, however, the following important difference between QED and QCD. The use of perturbative states in QED without implementing Gauss's law is permissible because, in QED, a unitary equivalence can be established between ∂ i Π i and ∂ i Π i + j 0 , so that ∂ i Π i can be interpreted as ∂ i Π i + j 0 in a new representation. [22,23] In this way, it can be shown that perturbative states that implement only ∂ i Π i (r) ≈ 0 instead of ∂ i Π i + j 0 (r) ≈ 0 may be used when evaluating S-matrix elements in QED; the only effect on S-matrix elements from this substitution consists of changes to the renormalization constants, which are unobservable. [25] But this dispensation to ignore Gauss's law in perturbative calculations has not been shown to extend to QCD, because D i Π a i + j 0 (r) is unitarily equivalent only to D i Π a i , but not to ∂ i Π a i ; and states that implement the Gauss's law D i Π a i ≈ 0 cannot be perturbative states. In particular, the use ofĤ GI for perturbative calculations using a space of perturbative states does not enjoy the same protection that the corresponding practice has in QED. In Section III, we will establish an isomorphism between the gauge-invariant states that implement the non-Abelian Gauss's law and perturbative states. This isomorphism enables us to substitute "standard" calculations with perturbative states for prohibitively difficult ones with gauge-invariant states. By this means, we provide for QCD a substitution rule, similar to the one available in QED, that permits the use of perturbative Fock states in scattering calculations with the assurance that the results of these calculations will agree with what would have been obtained if gauge-invariant operators and states had been used.
Another difference between QCD and QED is related to the fact that states that obey the condition are not normalizable. We can see this easily by constructing, for example, the commutator of G 8 (r) and an integral operator is an arbitrary c-number-valued function. Since and since G 8 (r) is hermitian so that Ψ|G 8 (r) = 0 as well as G 8 (r)|Ψ = 0, this leads to Ψ|Ψ = 0, in contradiction to the assumption that |Ψ is normalizable. This argument is a simple extension of one that was applied to the Fermi subsidiary condition for QED. [26] In the case of QED, however, this difficulty can be remedied because the non-normalizability of the states that are annihilated by the Abelian Gauss's law operator is entirely caused by the unobservable longitudinal nonpropagating photon "ghost" modes, which coincide exactly with the pure gauge degrees of freedom, and which can be kept separate from the gauge-invariant transversely polarized propagating photons in a variety of ways. In QCD, however, transverse modes can be pure gauge, and we do not know of a similarly satisfactory resolution of the non-normalizability of the state vectors that satisfy Eq. (48). [27,28] The previously-mentioned isomorphism, which will be demonstrated in Section III, mitigates this difficulty by establishing an equivalence between matrix elements evaluated with gauge-invariant states that are not normalizable, and corresponding ones evaluated with perturbative states.
B. Relation to QCD in the Coulomb gauge.
Unlike the Weyl-gauge formulation of QCD, in which one can simply set A a 0 = 0 and impose canonical quantization rules on the remaining fields, [29,30] the quantization of Coulomb-gauge QCD requires that constraints be explicitly taken into account. In constrained quantization -one procedure for implementing consistency with constraintsthis consistency is maintained by means of the so-called "Dirac-brackets", which replace the canonical equal-time commutation rules. When constrained quantization, such as the Dirac-Bergmann procedure, [31] is applied to the Coulomb gauge, the generator of infinitesimal gauge transformations becomes a constraint; it then must commute with all fields, which therefore are invariant to small gauge transformations. Under these circumstances, the gauge field would automatically be invariant to small gauge transformations, although it might have discrete numbers of gauge copies.
However, carrying out the constrained quantization of QCD in the Coulomb gauge is problematical; one impediment stems from operator-ordering ambiguities of multilinear operator products. For example, in constrained quantization, the matrix of constraint commutators must be inverted. There are noncommuting operators in that matrix, and it is at best problematical to keep track of operator order in the process of finding this inverse. As a result, the Dirac brackets of some operators are not unambiguously specified. Because of the difficulties associated with the quantization of QCD in the Coulomb gauge, a number of workers have avoided the direct quantization of Coulomb-gauge QCD, and have proceeded by treating the A a 0 = 0 gauge fields as a set of Cartesian coordinates and the Coulomb-gauge fields as a set of curvilinear coordinates, and have transformed from the former to the latter by using the familiar apparatus for such coordinate transformations. [32,33,34,35] In our work, we transform from the Weyl gauge to a representation in terms of gauge-invariant operator-valued fields. Our purpose is to implement gauge invariance, not to carry out a gauge transformation. We do not impose transversality on the the gauge-invariant A b GI i ; in our work, A b GI i is transverse, but the transversality is not imposed as a condition -it emerges as a consequence of its gauge invariance. And the Gauss's law operator G a does not vanish identically; in our work, Gauss's law is a condition on a set of states (the implementation of Gauss's law by imposing it on a set of states is also discussed in Refs. [32,34,36,37]).
Because our formulation of QCD in terms of gauge-invariant fields differs significantly from those whose purpose is to construct the QCD Hamiltonian in the Coulomb gauge, it is of interest to inquire how closely the resulting Hamiltonians resemble each other. In order to examine this question further, we will make some additional transformations ofĤ GI that assume that the Hamiltonian acts only on states that implement Gauss's law. WhenĤ GI appears in a matrix element between two states |Ψ α and Ψ β | that obey G c (x) |Ψ α = 0 and Ψ β |G c (x) = 0, further transformations that eliminate the longitudinal component of Π a GI i are possible. For the case that Π c L GI i appears adjacent to and directly to the left of such a state |Ψ , we can make the replacement and, therefore, also where where ≈ indicates that the replacement is valid only when the operators act on states |Ψ that implement Gauss's law. When J a † 0 (GI) stands directly to the right of Ψ| states, we can similarly make the replacement where and where the arrows indicate that differentiation is applied to the left. Similarly, Π a GI i (r) and Π a † GI i (r) can be expressed as and respectively. We can combine Eqs. (26) with (51) and (52) to obtain and Eqs. (54) and (55) can be expressed as and respectively, where J a T † 0 (GI) (x) represents the hermitian adjoint of J a T 0 (GI) (x). We can define an "effective" Hamiltonian (Ĥ GI ) phys , which is obtained by making the replacements described by Eqs. (56) -(59)) inĤ GI and excluding H G , since the latter will not contribute to any matrix elements in the physical space in which Gauss's law is implemented. With these replacements, we obtain (Ĥ GI ) phys is not identical toĤ GI . But (Ĥ GI ) phys can substitute forĤ GI as the generator of time-evolution when we embed the theory within a space of states |Ψ ν that satisfy the non-Abelian Gauss's law, G a (x)|Ψ ν = 0. Because G a (x) is hermitian, the same state |Ψ ν that obeys Eq. (48) also obeys Ψ ν |G a (x) = 0. Eq. (20) demonstrates that whenĤ GI appears in any "allowed" matrix element, Π a GI i and J a 0 (GI) always are situated where they abut a "ket" state vector |Ψ α to their right; and Π a † GI i and J a † 0 (GI) always are situated where they abut a "bra" state vector Ψ β | to their left. SinceĤ GI will always be bracketed between two states Ψ β | and |Ψ α that implement Gauss's law, Π a can similarly be replaced as shown in Eqs. (56) and (57) respectively. (Ĥ GI ) phys can therefore always be substituted forĤ GI in matrix elements, as long as attention is paid to the need to restrict the space of state vectors to those that implement Gauss's law. For example, exp(−iĤ GI t)|Ψ α can be replaced by exp −i(Ĥ GI ) phys t |Ψ α , since both will be required to project onto states that implement Gauss's law, as shown by and Each matrix element Ψ µi |Ĥ GI )|Ψ µj in Eq. (62) can be replaced by Ψ µi |(Ĥ GI ) phys |Ψ µj , so that exp(−i(Ĥ GI )t)|Ψ α can safely be replaced by exp −i(Ĥ GI ) phys t |Ψ α . The time evolution imposed byĤ GI on a state vector |Ψ α for which G c (x)|Ψ α = 0 takes place entirely within the space of states that implement Gauss's law. In the case of a state vector |χ for which G c (x)|χ = |χ ′ where |χ ′ is nonvanishing, because G c (x) andĤ GI commute. This requires the part of χ that fails to implement Gauss's law to be orthogonal to exp(−iĤ GI t)|Ψ α . 
The only limitation on the validity of this argument is the non-normalizability of the states that implement Gauss's law, which complicates the algebraic properties of the {|Ψ α } vector space. Nevertheless, Eqs. (61)-(63) show that we can restrict the space in which time evolution takes place to state vectors that implement Gauss's law without compromising the unitarity of the time-evolved |Ψ α (t) or of the S-matrix evaluated with such states. These considerations are also instrumental in allowing us to replace exp(−iĤ GI t) with exp(−i(Ĥ GI ) phys t).Ĥ GI and (Ĥ GI ) phys both commute with G a (x) for all values of a, so that as well as The state vectors exp(−i(Ĥ GI )t)|Ψ α and exp(−i(Ĥ GI ) phys t)|Ψ α therefore are gauge-invariant and implement Gauss's law just as |Ψ α does.
In comparing (Ĥ GI ) phys with expressions for the Coulomb-gauge Hamiltonian in the literature, we note that the only significant difference between (Ĥ GI ) phys and the Coulomb-gauge Hamiltonian reported in Ref. [32] is that Π b T † GI j , the hermitian adjoint of the transverse gauge-invariant chromoelectric field, appears in Eq. (60) where the expression GI j J appears in Ref. [32], where J = det[∂ i ·D i ]. We will prove in Appendix B that by using Eq. (11) and the identity where the trace in Eq. (67) extends to the coordinates and the color indices. With this demonstration, we see that Eq. (60) and the Coulomb-gauge Hamiltonian described in Eq. (4.65) in Ref. [32] are identical. It is also of interest to compare Eq. (60) with the Coulomb-gauge Hamiltonian in Ref. [11] as well as in the work of a number of other authors who used the same form of the Hamiltonian. The Hamiltonian in Ref. [11] differs from the Hamiltonian described by Eq. (4.65) in Ref. [32] in the fact that Π b T GI j rather than Π b T † GI j appears in Ref. [11] in place of J −1 Π b T GI j J in Ref. [32]; there is also the trivial difference that Ref. [11] deals with "pure glue" QCD so that the quark field is not included.
This discrepancy raises the question of the hermiticity of the operator-valued transverse gauge-invariant chromoelectric field Π b T GI j , which is of considerable importance for determining the dynamical effects of (Ĥ GI ) phys . One way of addressing this question is to use Eq. (11) and Eq. (65) in Ref. [5] to obtain where the partial derivative acts on only the first y argument in D ch (y, y). We might have expected that the transverse parts of Π b † GI j (y) and Π b GI j (y) would be identical since any functionals of the form (δ i,j − ∂i∂j ∂ 2 )∂ j ξ(y) would necessarily vanish. Such a conclusion would not, however, be correct in this case, because in ∂ ∂yj D qh (y, y), the partial derivative differentiates only the first y in D qh (y, y). We can make use of Eq. (38) and the fact that f hcb δ hc = 0 to express Eq. (68) as and we can extract the transverse parts to obtain Eq. (70) makes it clear that Π b T † GI j (y) − Π b T GI j (y) is not the transverse projection of a gradient and therefore cannot be presumed to vanish.
Equally compelling evidence that Π b T GI j is not identical to its hermitian adjoint is provided by the observation that the commutators Π a T GI i (x) , Π b T † GI j (y) and Π a T GI i (x) , Π b T GI j (y) differ. The latter vanishes, as is shown by Eqs. (40)-(41). However, use of Eq. (11) and the commutation rules for the underlying Weyl-gauge fields lead to and an alternate derivation based on Eqs. (42) and (68) confirms that result. Similarly to what we observed in connection with Eq. (68), the derivatives ∂ ∂yj and ∂ ∂xi each differentiate part, but not all of the y and x dependence, respectively, of the product D dh (y, x)D cp (x, y) in Eq. (71). The transverse projections of ∂ ∂y l D dh (y, x) ∂ ∂x k D cp (x, y) therefore will not vanish, and Π a T GI j (y) and Π b T † GI j (y) cannot be identical.
III. ISOMORPHISM AND ITS IMPLICATION FOR THE SCATTERING AMPLITUDE.
In the preceding sections we have obtained a description of QCD that took the Weyl-gauge formulation as its point of departure, and arrived at a Hamiltonian in which all operator-valued fields -the gauge field, the chromoelectric field, as well as the quark field -are gauge-invariant, and only the transverse components of the chromoelectric fields appear in the Hamiltonian (Ĥ GI ) phys . It was necessary, however, to restrict use of this Hamiltonian to a space in which all state vectors implement the non-Abelian Gauss's law; and these state vectors are complicated constructions that are not easy to use. In this section, we will show how isomorphisms can be established that enable us to identify (Ĥ GI ) phys with a Hamiltonian that can be used in a space of ordinary, conventional perturbative states.
To review the relation between gauge-invariant and perturbative states: In Ref. [1], a set of states |Ψ i was constructed by applying an operator-valued quantity Ψ to states |φ i , where |φ i designates one of a set of states that is annihilated by ∂ j Π b j . These |φ i states, the so-called "Fermi" states, are related to "standard" perturbative states |p i by |φ i = Ξ|p i ; Ξ was given in Ref. [38], where it was also shown that ∂ j Π b j (r) Ξ|p i = 0, with |p i designating one of a set of "standard" perturbative states annihilated by all annihilation operators for fermion and transverse gauge-field excitations. This set of perturbative states will be described more fully later in this section, and will turn out to be identical to perturbative states in QED, except for the fact that the gluon operators carry a Lie group index, while the photons do not. Since ∂ j Π b j annihilates any |φ i state, we can see that, in the |Ψ i states, the negative chromoelectric field Π q(ℓ) k(ℓ) (r ℓ ) in Ψ can be replaced by its transverse part Π q(ℓ) T k(ℓ) (r ℓ ), because the longitudinal parts vanish when acting on a |φ i state. Furthermore, in Eq. (74), every transverse Π q(ℓ) T k(ℓ) (r ℓ ) is integrated with an A q(ℓ) k(ℓ) (r ℓ ) in each variable r ℓ , and only the transverse components A q(ℓ) T k(ℓ) (r ℓ ) will survive this integration in the |Ψ i states, which can therefore be written in terms of transverse fields only. In Ref. [1], a further relation was established (Eq. (76)), and in Appendix D we will use Eq. (76) to establish the additional relations invoked below. Since the Hamiltonian (Ĥ GI ) phys consists of transverse fields only, Eqs. (77) and (78) afford us an opportunity to shift (Ĥ GI ) phys from the left-hand side of Ψ to the right, with a concomitant substitution of transverse Weyl-gauge fields for the corresponding gauge-invariant fields. The one impediment to this process is that Π b T † GI j , the hermitian adjoint of Π b T GI j , also appears in (Ĥ GI ) phys , and Eq. (78) applies only to Π b T GI j and not to Π b T † GI j . To remove that impediment, we use Eq. (66) to substitute J −1 Π b T GI j J for Π b T † GI j , and express (Ĥ GI ) phys in the form given in Eq. (79). We can then define a "hermitized" transverse gauge-invariant negative chromoelectric field P b T j ; as can be seen from Eq. (66), P b T j is hermitian. An important consideration for this argument is the fact that J 1/2 is hermitian, which is proven in Appendix C. In the same appendix, we also prove that the canonical commutation relations between the Π bT j 's and the A a GIj 's, and those among the Π bT j 's themselves, remain unmodified when the Π bT j 's are replaced by the P bT GIj 's. We then find that Eq. (82) transforms the Hamiltonian from one expressed in terms of the nonhermitian Π b T GI j and Π b T † GI j into one expressed in terms of the hermitian P b T j (not, however, unitarily, since J 1/2 is hermitian and not the hermitian adjoint of J −1/2 ). Transformations of this kind have previously been used by other workers. [32,39] It would be possible to make a compensating transformation on the states, but we prefer to leave the states untransformed and to extract from Eq. (79) a non-interacting part of (Ĥ GI ) phys that consists of hermitian gauge-invariant fields and that can define interaction-picture operators. As we will show in Appendix E, this process leads to the expression (Ĥ GI ) phys = [H] 0 + [H] 1 + [H] 2 (Eq. (84)), in which U and V, as well as k b 0 (x) and J b T 0 (GI) , appear and are defined in Appendix E in Eqs. (E7), (E11), (E16), and (E15), respectively.
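Although the display equation defining P b T j is not reproduced above, its plausible form is fixed by Eq. (66) together with Eq. (C4) of Appendix C; as a hedged restatement in cleaned-up notation (assuming that Π acts as a first-order functional derivative on functionals of A_GI),

P^{bT}_j = J^{-1/2} Π^{bT}_{GIj} J^{1/2} = Π^{bT}_{GIj} + (1/2) [Π^{bT}_{GIj}, ln J] = (1/2) (Π^{bT}_{GIj} + Π^{bT†}_{GIj}),

each form being hermitian because Π^{bT†}_{GIj} = J^{-1} Π^{bT}_{GIj} J and J^{1/2} is hermitian.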
[H] 0 , [H] 1 , and [H] 2 are hermitian, and all consist entirely of gauge-invariant, hermitian, transverse gauge fields and gauge-invariant quark fields, all of which obey "standard" commutation rules. Since P b T j (y) and Π b T † GI j (y) have the same commutator with A a GI i (x), Eq. (42) also determines the commutation rule between P b T j (y) and A a GI i (x), displayed in Eq. (87). The sum [H] 0 + [H] 1 is identical in form to the Coulomb-gauge QCD Hamiltonian used by Gribov, [11,12] as well as by numerous other authors who have followed him in using this Hamiltonian.
[H] 2 consists of additional terms that are required because the transverse gauge-invariant negative chromoelectric field Π b T GI j is not hermitian. The elimination of Π b T GI j and Π b T † GI j in favor of the hermitian P b T j is essential for the establishment of the isomorphism between (Ĥ GI ) phys and a Hamiltonian that can be used in a Fock space of perturbative states. We now proceed to the demonstration of this isomorphism.
Since both A a GI i (r) and P b T j (r) are hermitian and obey the commutation rule displayed in Eq. (87), we can represent them in terms of creation and annihilation operators (Eqs. (88) and (89)), where n is summed over the two transverse helicity modes and where [α a n (k), α b † ℓ (q)] = δ n,ℓ δ a,b δ k,q and [α a n (k), α b ℓ (q)] = [α a † n (k), α b † ℓ (q)] = 0.
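A minimal sketch of such a representation (our normalization, volume factors, and sign conventions are assumptions; the paper's Eqs. (88) and (89) may differ in detail) is

A^a_{GIi}(r) = Σ_{k} Σ_{n=1,2} (2kV)^{-1/2} ε^n_i(k) [ α^a_n(k) e^{i k·r} + α^{a†}_n(k) e^{-i k·r} ],
P^a_{Tj}(r) = -i Σ_{k} Σ_{n=1,2} (k/2V)^{1/2} ε^n_j(k) [ α^a_n(k) e^{i k·r} - α^{a†}_n(k) e^{-i k·r} ],

with ε^n(k) the two transverse polarization vectors; an expansion of this form reproduces the transverse canonical commutator of Eq. (87) when the α's obey the algebra quoted above.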
Eqs. (88) and (89) can be inverted, leading to expressions for α c n (k) and α c † n (k) in terms of A a GI i and P b T j (Eqs. (91) and (92)). Eqs. (91) and (92) show that α c n (k) and α c † n (k) are gauge-invariant and commute with the Gauss's law operator G a (r). Eqs. (77), (78) and (80) demonstrate that any functional F[A a GI i , P b T j ] will have the transformation property given in Eq. (93), leading to corresponding relations for the creation and annihilation operators, so that the isomorphism established in Eq. (93) between the gauge-invariant fields A a GI i , P b T j and the gauge-dependent Weyl-gauge fields A a T i , Π b T j , respectively, is transferred to a similar relation between the gauge-invariant creation and annihilation operators for transverse gluons, α c † n (k) and α c n (k), and the corresponding "standard" perturbative creation and annihilation operators a c † n (k) and a c n (k). We can proceed by using the standard representation for the transverse part of the Weyl-gauge fields in terms of a c n (k) and a c † n (k), which demonstrates that any α c n (k) will annihilate the gauge-invariant vacuum state J −1/2 Ψ Ξ|0 , because the transverse excitation operators a c n (k) and a c † n (k) trivially commute with Ξ. At this point, we can establish an isomorphism between two Fock spaces: the "standard" Weyl-gauge Fock space, consisting of states |k 1 · · · k i · · · k N built by applying the a c † n (k) to the perturbative vacuum, with K the normalization constant; and the gauge-invariant states that implement the non-Abelian Gauss's law, which can be represented as states |k 1 · · ·k i · · ·k N built by applying the α c † n (k) to J −1/2 Ψ Ξ|0 , where |0 designates the perturbative vacuum annihilated by a c n (k) as well as by the annihilation operators for quarks and antiquarks, q p,s and q̄ p,s respectively. The additional normalization constant C −1 must be introduced to compensate for the fact that |C| 2 = |J −1/2 ΨΞ|0 | 2 = 0|Ξ ⋆ Ψ ⋆ J −1 ΨΞ|0 , which formally is a universal positive constant, is not finite; the state J −1/2 ΨΞ|0 is not normalizable. However, once C is introduced, the |k 1 · · ·k i · · ·k N states form a satisfactory Fock space that is gauge-invariant as well as isomorphic to the space of |k 1 · · · k i · · · k N states. We can now use these relations to express [H] 0 in terms of the gauge-invariant creation and annihilation operators,
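The two Fock spaces can be displayed side by side (a schematic restatement of the construction just described, with K and C the normalization constants already introduced):

|k_1 ··· k_N⟩ = K a^{c_1†}_{n_1}(k_1) ··· a^{c_N†}_{n_N}(k_N) |0⟩            ("standard" Weyl-gauge states),
|k̄_1 ··· k̄_N⟩ = C^{-1} α^{c_1†}_{n_1}(k_1) ··· α^{c_N†}_{n_N}(k_N) J^{-1/2} Ψ Ξ |0⟩   (gauge-invariant states implementing Gauss's law),

and the isomorphism identifies the two bases excitation by excitation.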
with the subscript s labeling the color, flavor and helicity of the quarks. In this form, [H] 0 can be seen to describe the energy of non-interacting gauge-invariant transverse gluons of energy k, and of quarks and antiquarks of energy E p = √(m 2 + |p| 2 ). We can also define another Hamiltonian, H, obtained by replacing the gauge-invariant fields with the corresponding transverse Weyl-gauge fields, including in the inverse Faddeev-Popov operator D ab (x, y), so that H is characteristic of the Coulomb gauge but nevertheless is a functional of transverse Weyl-gauge unconstrained fields. For example, H 0 has the same form as [H] 0 , but with the Weyl-gauge (perturbative) operators in place of the gauge-invariant ones. We can then use Eq. (93) to establish the corresponding relations connecting (Ĥ GI ) phys and its parts with H and its parts. The state vector |n represents one of the |k 1 · · · k i · · · k N , the "standard" perturbative eigenstates of H 0 . We can use the relations between Weyl-gauge and gauge-invariant states established in the preceding discussion to extend the isomorphism we have demonstrated to include scattering transition amplitudes. For this purpose, we define the transition amplitude between gauge-invariant states, where |i and |f each designate one of the |n states; |i represents an incident and |f a final state in a scattering process. With the results of the preceding discussion, we can express this amplitude as a sum over the complete set of perturbative states |n n| (Eq. (111)). The second line of Eq. (111) follows from Eq. (112) and the observation that the last term on the second line of Eq. (112) vanishes trivially. With the isomorphism of the states |k 1 · · · k i · · · k N and |k 1 · · ·k i · · ·k N that we have established, this amplitude equals a transition amplitude that can be evaluated with Feynman graphs and rules, because it is based on "standard" perturbative states that are not required to implement Gauss's law and need not be gauge-invariant.
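The free Hamiltonian described in words here can be sketched as follows (a hedged restatement in cleaned-up notation, with normal ordering understood and zero-point contributions suppressed):

[H]_0 = Σ_{k,n,a} k α^{a†}_n(k) α^a_n(k) + Σ_{p,s} E_p [ q†_{p,s} q_{p,s} + q̄†_{p,s} q̄_{p,s} ],   E_p = √(m² + |p|²),

and H_0 is the same expression with the perturbative operators a^{a†}_n(k) a^a_n(k) in place of α^{a†}_n(k) α^a_n(k).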
In the remainder of this section, we will discuss the relation of our formulation of the scattering transition amplitude to approaches to this problem in Coulomb-gauge formulations of QCD. As was pointed out in Section IIB, the effective Hamiltonian (Ĥ GI ) phys described in Eq. (60) is identical to one obtained by Christ and Lee, [32] who treated gauge fields as coordinates and applied the apparatus of transformations from Cartesian to curvilinear coordinates to the problem of formulating Coulomb-gauge QCD. Here, we will show that Ĥ GI , the precursor of (Ĥ GI ) phys described in Eq. (20), is identical in form to the Hamiltonian given in Eq. (6.15) in Ref. [32], which leads to the Coulomb-gauge perturbative rules formulated by Christ and Lee. For this purpose, Ĥ GI will be expressed in terms of P b T j and A a GI i , and then Weyl-ordered. The equivalence of Christ and Lee's results with Schwinger's [36] was already confirmed in Ref. [32].
Eq. (82) demonstrates that the functional dependence of Ĥ GI on P b T j and A a GI i is the same as the functional dependence of H on Π b T GI j and A a GI i . H was used by Christ and Lee to generate the path integral representation of the Coulomb gauge, [32] and they showed that H equals its Weyl-ordered form plus correction terms (Eq. (116)), where the superscript W designates Weyl-ordering with respect to Π b T GI j and A a GI i . The additional terms V 1 and V 2 involve the inverse Faddeev-Popov operator at coincident arguments; in these expressions, the partial derivative ∂ j to the left of D ab (r, r) acts only on its first argument. When a partial derivative with a left arrow on top appears to the right of D ab with two identical arguments, it acts only on its second argument. The case of two identical arguments of D is understood as ∂ j D ab (x, x) ≡ lim y→x ∂/∂x j D ab (x, y) and D ab (x, x) ← ∂ j ≡ lim y→x ∂/∂x j D ab (y, x), where the limit is taken after the partial derivative has been evaluated. This convention will be followed consistently in the following discussions. Since the commutator of P b T j and A a GI i is identical to that of Π b T GI j and A a GI i , an equation parallel to Eq. (116), namely Eq. (119) for Ĥ GI , will be proven below. The superscript W again designates Weyl-ordering, but in this case with respect to P b T j and A a GI i . The parallel structure refers to the fact that, as was pointed out above, Ĥ GI has the same functional dependence on P b T j and A a GI i as H has on Π b T GI j and A a GI i . Since the fermion variables commute with P b T j and A a GI i , we may drop them for the proof of Eq. (119); we will also drop H G , since it makes no contributions in the space of gauge-invariant states. It follows from Eq. (58) that a simplification holds for a physical state |Ψ (Eq. (120)). In terms of the Weyl-ordered chromoelectric field operator of Schwinger, [9] the Hamiltonian Ĥ GI , in the absence of the fermion field and without H G , can be written as the sum of a kinetic energy term and the remaining terms; the kinetic energy involves a quantity ∆ b j (r). To evaluate ∆ b j (r), we combine the relevant functional derivatives with Eqs. (120) and (68); in Appendix F, we shall prove the resulting expression for the correction, K. In the form given in Eq. (124), with K as described in Eq. (125), the effective Hamiltonian Ĥ GI is identified with that of Schwinger. [36] The next step towards the proof of Eq. (119) follows from the operator identity given in Ref. [32] (Eq. (133)). Using the commutation relation (E10), we can show that the second term on the right-hand side of Eq. (133) is the same as V 2 (the same proof is also given in Ref. [32]), and Eq. (119) is established.
IV. DISCUSSION
In this work, we have used earlier results [1,5,6] to express the Weyl-gauge Hamiltonian entirely in terms of operator-valued fields that are gauge-invariant as well as path-independent. These gauge-invariant fields have many features in common with Coulomb-gauge fields: Their commutation rules agree with those given by Schwinger in his Coulomb-gauge formulation of QCD, [9,36] except for differences in operator order; these differences can be ascribed to the fact that Schwinger imposed Weyl order in his work while we do not make any ad hoc changes in operator order. The gauge-invariant gauge field is transverse and hermitian; but the gauge-invariant chromoelectric field is neither transverse nor hermitian. Even the transverse part of the gauge-invariant chromoelectric field is not hermitian. That fact is important for relating the Hamiltonian we obtained in Eq. (84) with those given by Gribov, [11] Schwinger, [36] and Christ and Lee. [32] The relation between the Coulomb-gauge Hamiltonian for QCD and the Weyl-gauge Hamiltonian expressed in terms of gauge-invariant fields closely parallels the relation between the two corresponding QED Hamiltonians. The Weyl-gauge Hamiltonian for QCD is represented entirely in terms of gauge-invariant fields in Eqs. (20) and (24). When formulated in terms of gauge-invariant fields, QCD must be embedded in a space of gauge-invariant states that obey the non-Abelian Gauss's law. Within such a space of gauge-invariant states, further transformation of the QCD Hamiltonian we have constructed can be effected. Thus transformed, the Hamiltonian consists of two parts. One part, (Ĥ GI ) phys -displayed in Eq. (60) -is identical to the Coulomb-gauge Hamiltonian. It is a functional of transverse gauge-invariant chromoelectric fields, gauge-invariant gauge fields (which are inherently transverse), as well as gauge-invariant quark fields. The other part, H G -displayed in Eq. (24) -makes only vanishing contributions to matrix elements within the space of gauge-invariant states that is required for the Hamiltonian to act consistently as the time-evolution operator. H G does affect the field equations and "remembers" that the formulation is for the Weyl, and not the Coulomb gauge. This situation is precisely the same as in QED, in which the Weyl-gauge Hamiltonian, expressed in terms of the gauge-invariant field (in that case, simply the transverse part of the gauge field), is the sum of two terms, given in Eqs. (45) and (46); the former is the Coulomb-gauge Hamiltonian, and the latter makes only vanishing contributions to matrix elements within the space of gauge-invariant states, but is necessary for reproducing the Euler-Lagrange equations for Weyl-gauge QED.
In spite of the similarity between QCD and QED in the relation between the Weyl and Coulomb gauges summarized in the preceding paragraph, there is an important difference between the gauge-invariant states for the two theories: Gauge-invariant and perturbative states in QED are unitarily equivalent; and in a Hamiltonian formulation, this unitary equivalence permits us to use perturbative states in evaluating scattering amplitudes in QED in algebraic and covariant gauges without compromising the implementation of Gauss's law. [22,23] But there can be no unitary equivalence between gauge-invariant states and perturbative states in QCD, and the gauge-invariant states in QCD are complicated, not normalizable, and very cumbersome to use. In order to make effective use of the Weyl-gauge QCD Hamiltonian represented in terms of gauge-invariant fields, some relation is required that allows us to circumvent the absence of the unitary equivalence between gauge-invariant and perturbative states that afflicts non-Abelian gauge theories. In Section III we establish such a relation in the form of an isomorphism that enables us to consistently carry out calculations in QCD with an equivalent Hamiltonian that is a functional of the original gauge-dependent Weyl-gauge fields and that is used with standard perturbative states. In the case of QCD, this isomorphism has been demonstrated for the Weyl gauge only. An extension to a somewhat larger class of algebraic gauges defined by A 0 + γA 3 = 0 with γ ≥ 0 should not be difficult; [40] but, in contrast to QED, there is no indication that further extensions -to covariant gauges, for example -are possible. Finally, in Section III, we show that the effective Hamiltonian (Ĥ GI ) phys -and therefore also H = H 0 + H 1 + H 2 -can be expressed in appropriately Weyl-ordered forms and shown to be equivalent to results obtained by Schwinger [36] and by Christ and Lee [32]. The Hamiltonian used by Gribov in Ref. [11] is equivalent to only H = H 0 + H 1 . H 2 does not appear in that work, because the nonhermiticity of the transverse chromoelectric field was not taken into account.

APPENDIX B

The quantity needed to establish Eq. (66) can be evaluated as
−iR bc (x) ∫ dy dz D mn (y, z) (δ/δA c j (x)) ∂·D nm (z) δ(z − y) = ig f lnm R bc (x) ∫ dy dz D mn (y, z) δA l GIi (z)/δA c j (x) ,
where we have used Eq. (67). Substituting Eq. (39) and using a further identity, we obtain Eq. (B5), in which the gradient acts only on the first argument of D ac and the longitudinal term comes from the second term of Eq. (39), with ∂/∂y i acting on the first argument of D ac (y, y). Comparing the transverse part of Eq. (B5) with that of Eq. (68), Eq. (66) is proved.
APPENDIX C
To prove the hermiticity of the Faddeev-Popov determinant J as an operator in the Hilbert space of states, we recall the criterion that an operator is hermitian if its expectation values with respect to all states are real. In the coordinate representation of states for which A GI is diagonalized and corresponds to a c-number field configuration, the expectation value of an operator that is a functional of the operator A GI is equal to the same functional of the c-number field configuration A GI . For each c-number field configuration, the Faddeev-Popov operator ∂ j D j , with D j denoting the covariant derivative, D ab j = δ ab ∂ j − gf abc A c GIj , becomes an operator with respect to space coordinates and group indices. Both ∂ j and D j are anti-hermitian with respect to these space coordinates and group indices (Eqs. (C1) and (C2)), and therefore (Eq. (C3)) the Faddeev-Popov operator is equal to its own adjoint, where the last step follows from the transversality of A GI . The Faddeev-Popov operator is therefore hermitian with respect to space coordinates and group indices for any field configuration, and its determinant, J, must be real. The hermiticity of J in the Hilbert space of states is established according to our criterion, and the hermiticity of J 1/2 is an obvious corollary.
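The step quoted above can be made explicit in a short computation (an illustrative sketch in cleaned-up notation, not the paper's numbered equations): with ∂_j† = -∂_j and D_j† = -D_j on the space of coordinates and group indices,

(∂_j D_j)† = D_j† ∂_j† = D_j ∂_j = ∂_j D_j + g f^{abc} (∂_j A^c_{GIj}) = ∂_j D_j,

since the commutator term is proportional to ∂_j A^c_{GIj}, which vanishes by the transversality of A_GI.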
To derive the commutation relations among the P l j (x)'s and the A l GIj (x)'s, we notice that P l j (x) = Π lT GIj (x) + (1/2) [Π lT GIj (x), ln J] (C4)
"year": 2002,
"sha1": "5a632e4b448d198e4184770eb72f11d48ae6fb7d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0210059",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c0a9ab0eb9fc999b1bd9c2d998b6cc1f09d2dec0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
γ-Cleavage Is Dependent on ζ-Cleavage during the Proteolytic Processing of Amyloid Precursor Protein within Its Transmembrane Domain*
β-Amyloid precursor protein apparently undergoes at least three major cleavages, γ-, ϵ-, and the newly identified ζ-cleavage, within its transmembrane domain to produce secreted β-amyloid protein (Aβ). However, the roles of ϵ- and ζ-cleavages in the formation of secreted Aβ and the relationship among these three cleavages, namely ϵ-, ζ-, and γ-cleavages, remain elusive. We investigated these issues by attempting to determine the formation and turnover of the intermediate products generated by these cleavages, in the presence or absence of known γ-secretase inhibitors. By using a differential inhibition strategy, our data demonstrate that Aβ46 is an intermediate precursor of secreted Aβ. Our co-immunoprecipitation data also reveal that, as an intermediate, Aβ46 is tightly associated with presenilin in intact cells. Furthermore, we identified a long Aβ species that is most likely the long sought after intermediate product, Aβ49, generated by ϵ-cleavage, and this Aβ49 is further processed by ζ- and γ-cleavages to generate Aβ46 and ultimately the secreted Aβ40/42. More interestingly, our data demonstrate that γ-cleavage not only occurs last but also depends on ζ-cleavage occurring prior to it, indicating that ζ-cleavage is crucial for the formation of secreted Aβ. Thus, we conclude that the C terminus of secreted Aβ is most likely generated by a series of sequential cleavages, namely first ϵ-cleavage which is then followed by ζ- and γ-cleavages, and that Aβ46 produced by ζ-cleavage is the precursor of secreted Aβ40/42.
The mechanism of the formation of the β-amyloid protein (Aβ)2 is the central issue in Alzheimer disease research, not only because Aβ is the major constituent of senile plaques, one of the neuropathological hallmarks of Alzheimer disease, but also because Aβ formation may be a causative event in the disease (1). Aβ is proteolytically derived from a large single transmembrane protein, the β-amyloid precursor protein (APP), as a result of sequential cleavages by β- and γ-secretases (1). β-Secretase has been identified as a type I membrane aspartyl protease (2,3). Although the exact nature of γ-secretase is still a matter of debate, accumulating evidence supports the idea that γ-secretase is a multiple molecular complex composed of, at least, presenilins, nicastrin, Aph-1, and Pen-2, and that presenilin may function as the catalytic subunit (4).
In understanding the mechanism by which the C termini of secreted Aβ are generated during the processing of APP, three major intramembranous cleavages have been established. The first one is the cleavage now specifically referred to as γ-cleavage (5), which produces the C termini of most of the secreted Aβ species that end at amino acids 40 (Aβ40) or 42 (Aβ42) of the Aβ sequence. The second one is the ϵ-cleavage occurring between Aβ residues 49 and 50, which produces the N terminus of most of the APP intracellular domain (AICD) (5-8). The identification of this ϵ-cleavage site raises a question as to whether this ϵ-cleavage is obligatory for the generation of the C terminus of Aβ, and this also raises a question as to the relationship between ϵ- and γ-cleavages, i.e. whether they are independent of each other or sequential. One of the obstacles in addressing these questions is that neither the intermediate Aβ peptide, which ends at the ϵ-cleavage site, nor the C-terminal fragment, which starts with an N terminus generated by γ-cleavage, has ever been detected. In a recent study, we reported the identification of an intracellular long Aβ species, namely Aβ46, and this led to the discovery of the third major cleavage site, the ζ-cleavage site at Aβ46, between the known γ- and ϵ-cleavage sites (9). The presence of the ζ-cleavage site at Aβ46 is further supported by a very recent study showing that Aβ46 is the predominant form among the longer Aβ species detected intracellularly (10). However, the finding that the known γ-secretase inhibitors, such as DAPT, DAPM, and compound E, inhibit the formation of secreted Aβ40/42 and, on the other hand, cause the accumulation of Aβ46 raises the question as to whether Aβ40/42 and Aβ46 are produced by the same enzyme or by different enzymes (9). Moreover, the roles of ϵ- and ζ-cleavages in the formation of secreted Aβ and the relationship among these three cleavages, namely ϵ-, ζ-, and γ-cleavages, also remain elusive. To address these key issues, the objectives of this study were focused on the following: (a) determining the precursor-product relationship between Aβ46 and Aβ40/42; and (b) establishing the roles of ϵ- and ζ-cleavages in the formation of secreted Aβ40/42.
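For orientation, the three intramembranous cleavage sites discussed here can be summarized schematically (a restatement of the residue numbering given above, using the Aβ numbering; this is not a figure from the paper):

ϵ-site: between Aβ residues 49 and 50  →  Aβ49 and the AICD
ζ-site: after Aβ residue 46            →  Aβ46
γ-site: after Aβ residue 40 or 42      →  secreted Aβ40 or Aβ42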
Cell Lines and Plasmids-N2a cells stably expressing either wild type presenilin 1 (PS1wt) alone or both PS1wt and Swedish mutant APP (APPsw) were kindly provided by Drs. Sangram S. Sisodia and Seong-Hun Kim (University of Chicago) and were maintained as described previously (11). The plasmid APPsw645, which expresses a C-terminal truncated APP ending at the ⑀-cleavage site A 49 , was constructed using the site-directed mutagenesis kit (Stratagene). APPsw (12), kindly provided by Dr. Gopal Thinakaran (University of Chicago), was used as a template. A pair of oligonucleotides (E49, CGTCATCACCTTGTA-GATGCTGAAGAAG; E49-r, CTTCTTCAGCATCTACAAGGT-GATGACG), which are complementary to each other and contain a stop codon at position 50 of the A sequence, were used as primers.
Cell-free Assay-In vitro turnover of A 46 by ␥-secretase activity was assayed in a cell-free assay system established previously (13), following the procedure described previously (7) with minor modifications. Briefly, N2a cells were cultured in the presence of DAPM for 12 h and harvested in 9 volumes of homogenization buffer (10 mM MOPS, pH 7.0, 10 mM KCl) containing protease inhibitors (Complete, Roche Applied Science) and homogenized by passing through a 20-gauge needle 30 times. After removal of unbroken cells and nuclei by centrifugation at 800 ϫ g at 4°C for 10 min, membranes were pelleted by centrifugation at 20,000 ϫ g at 4°C for 30 min. The membranes were washed once with homogenization buffer and resuspended in assay buffer (150 mM sodium citrate pH 6.4, protease inhibitor mixture). Aliquots of equal amounts of membranes were then incubated at either 0 or 37°C. After 1 h of incubation, aliquots (25 l) were removed for Western blotting, and the remaining reaction mixtures were subjected to centrifugation at 20,000 ϫ g for 30 min at 4°C to yield the supernatant and pellet fractions. After addition of an equal volume of IP buffer (50 mM Tris/HCl, pH 7.4, 150 mM NaCl, 0.5% Nonidet P-40, 5 mM EDTA, and protease inhibitor mixture), the supernatant was subjected to immunoprecipitation using 6E10. The pellet fraction was solubilized with 1% Nonidet P-40 in IP buffer. After centrifugation at 20,000 ϫ g at 4°C for 15 min, the supernatant was diluted with equal amounts of IP buffer to lower the concentration of Nonidet P-40 to 0.5% and then subjected to immunoprecipitation using 6E10. The intracellular A species were immunoprecipitated using 6E10. Both immunoprecipitates were analyzed by 10% Bicine/urea-SDS-PAGE, followed by Western blot analysis using 6E10 as described below.
Detection of A 49 -Note that in all of the experiments throughout this study, A 46 was determined by directly analyzing the cell lysates without immunoprecipitation. To determine the presence of the possible A 49 , cells were cultured in the absence of any inhibitors and lysed with 1% Nonidet P-40 in IP buffer. After centrifugation at 20,000 ϫ g at 4°C for 15 min, the supernatant was diluted with an equal amount of IP buffer, and the A 49 and other intracellular A species were immunoprecipitated using 6E10 in the presence or absence of DAPT. Of note, based on our previous data (9) and unpublished data, 3 it was found that all the tested nontransition state inhibitors, such as DAPT, DAPM, and compound E, cause intracellular accumulation of A 46 in the same fashion. However, in comparison with DAPM, the inhibitory effects of DAPT and compound E last longer, and this is probably because the enzyme binding activity of the latter two is stronger than that of DAPM. Therefore, DAPT and compound E were used in the in vitro assay and during the immunoprecipitation procedure. DAPM was used to cause the accumulation of A 46 in cells that would be used for determining the turnover of A 46 either in intact cells or in a cell-free system in which the inhibitor used for causing the accumulation of A 46 needs to be removed.
Immunoprecipitation and Western Blotting-Immunoprecipitation and Western blotting were carried out as described previously (9) with the exception that in some cases the immunoprecipitation was carried out in the presence of 500 nM DAPT as indicated in the figure legends. Briefly, 24 h after splitting, cells were treated with inhibitors at various concentrations or with vehicle only as a control. Eight hours after treatment, cells were harvested and lysed in Western blot lysis buffer (50 mM Tris-HCl, pH 6.8, 8 M urea, 5% -mercaptoethanol, 2% SDS, and protease inhibitors). Secreted A was immunoprecipitated from conditioned media using a monoclonal A-specific antibody 6E10 (Senetek). The immunoprecipitates were analyzed by 10% Bicine/urea-SDS-PAGE and transferred to a polyvinylidene fluoride membrane (Immobilon-P, Millipore). The membranes were then probed with 6E10, and the immunoreactivity bands were visualized using ECL-Plus (Amersham Biosciences).
Fractionation and Co-immunoprecipitation-In order to determine the formation of the possible complex of A 46 and presenilin, the following procedure, which was originally described in a previous study (14), was employed with slight modification. Briefly, N2a cells expressing APPsw695/PS1wt cultured in the presence of 3 nM compound E (or 500 nM L-685,458; see Fig. 4B) for 10 -12 h were harvested and then homogenized in homogenization buffer A (20 mM HEPES, pH 7.4, 50 mM KCl, 2 mM EGTA, 10% glycerol, protease inhibitor mixture (Roche Applied Science)) containing 10 nM compound E (or 2.5 M L-685,458) by passing through a 20-gauge needle 30 times. The homogenized samples were subjected to centrifugation at 800 ϫ g for 10 min to remove the unbroken cells and nuclei. The postnuclear supernatant was further centrifuged at 20,000 ϫ g for 1 h resulting in the supernatant and the pellet fractions. The resultant pellet, which contains both A 46 and PS1 (Fig. 4A), was solubilized in buffer B (50 mM PIPES, pH 7.0, 150 mM KCl, 5 mM MgCl 2 , 5 mM CaCl 2 , and protease inhibitor mixture)) (15) containing 1% CHAPSO and 10 nM compound E (or 2.5 M L-685,458), for 1 h at 4°C and then subjected to centrifugation again at 20,000 ϫ g for 25 min to remove the insoluble materials. The supernatant was diluted with an equal volume of solubilization buffer B to adjust CHAPSO to a final concentration of 0.5%. After pre-clearing with protein A-Sepharose beads for 3 h, the supernatant was incubated with anti-PS1N, a rabbit polyclonal antibody raised against the N terminus of PS1 (9) in the presence of compound E (or L-685,458) with rotation at 4°C for 3-4 h, and then an appropriate amount of protein A-Sepharose beads was added and incubated overnight. After washing twice with solubilization buffer B containing 0.5% CHAPSO and ␥-secretase inhibitors, and then twice with PBS, the immunocomplex was eluted with SDS-PAGE Sample loading buffer and separated by 10 -18% SDS-PAGE followed by Western blotting using 6E10 to detect the co-immunoprecipitated A 46 and CTF.
RESULTS
L-685,458 Inhibits the Formation of A 46 -In our recent study, we have shown that treatment of cells with nontransition state ␥-secretase inhibitors, such as DAPT, DAPM, and compound E, caused an increase in the accumulation of intracellular A 46 , indicating that these inhibitors have no effect, or little effect, on the newly identified -cleavage. On the other hand, when the cells were cultured in the presence of transition state analogs, such as L-685,458 and 31C, A 46 was not detectable, strongly suggesting that these inhibitors inhibit the -cleavage and block the formation of A 46 (9). However, it cannot be ruled out that the absence of A 46 in cells treated with L-685,458 may be due to the inability of this inhibitor to block the turnover of A 46 . To address these issues, N2a cells expressing both PS1wt and APPsw were treated with DAPT, compound E, and L-685,458, either individually or in combination. Both the cell lysate and the secreted A 40/42 immunoprecipitated from conditioned medium (CM) were analyzed by 10% Bicine/urea-SDS-PAGE as described previously (16), followed by Western blotting using 6E10. As shown in Fig. 1 16 -19, shows the dose-dependent effects of compound E and DAPM on the reduction of secreted A 40/42 and the concomitant accumulation of intracellular A 46 , respectively. Of note, by directly analyzing the cell lysate, A 46 can also be detected in cells not treated with any inhibitor after prolonged exposure of the Western blot ( Fig. 1, lane 11), as has been shown in our recent study (9). On the other hand, treatment with L-685,458, at a range of concentrations from 0.5 to 2.5 M, completely abolished the formation of secreted A 40/42 in the CM (Fig. 1, lanes 4 -6, lower panel), whereas it did not cause the accumulation of intracellular A 46 (upper panel). To determine whether the absence of A 46 was a result of the failure of L-685,458 to block the turnover of A 46 , cells were treated with L-685,458 plus 0.5 M DAPT or plus 5 nM compound E, both of which have been shown to block the turnover of A 46 (9), see also Fig. 1, lanes 2 and 3. As shown in Fig. 1, the addition of DAPT (lanes 7 and 8) or compound E (lanes 9 and 10) did not lead to the accumulation of A 46 in the presence of L-685,458. This result clearly indicates that the absence of A 46 , in cells treated with L-685,458, is due solely to its inhibition of the formation of A 46 , rather than its failure to block the turnover of A 46 .
A 46 Is Processed into A 40/42 in Vitro-As reported in our recent study (9), at a low range of concentrations, DAPM, DAPT, and compound E cause a dose-dependent decrease in secreted A 40/42 and a concomitant increase in intracellular A 46 (see also Fig. 1), suggesting a possible precursor-product relationship between A 46 and A 40/42 . This hypothesis is also supported by the facts that A 46 contains the ␥-cleavage site at A40/42 and that A 46 is detectable in living cells in the absence of any inhibitors (Fig. 1, lanes 1 and 11), which suggests that -cleavage occurs prior to ␥-cleavage, otherwise the -cleavage product A 46 would not have had a chance of being formed. To explore the possible precursor-product relationship between A 46 and A 40/42 , we first determined whether A 46 is processed into A 40/42 . To address this issue, a system that contains pre-existing A 46 is required. For this purpose, a cell-free system, which has been established and used in many previous studies to assay the in vitro ␥-secretase activity (7, 13), was employed. Cells were cultured in the presence of 100 nM DAPM, which has been shown to cause the accumulation of A 46 (Fig. 1), and the membranes were prepared as described under "Materials and Methods." As shown in Fig Membrane preparation and the cell-free assay were performed as described under "Materials and Methods." After incubation in the absence or presence of inhibitors, the reaction mixtures were subjected to the centrifugation, and the resulting supernatants (lanes 1-4) and pellets (lanes 5-8) were then subjected to immunoprecipitation using 6E10 in IP buffer. The immunoprecipitates were analyzed by 10% urea-SDS-PAGE followed by Western blotting using 6E10. Lanes 1 and 5 are immunoprecipitates from the supernatant and pellet, respectively, obtained from membranes incubated at 0°C for 1 h. Lanes 2 and 6 are immunoprecipitates from the supernatant and pellet, respectively, obtained from membranes incubated at 37°C for 1 h. Lanes 3 and 7 are immunoprecipitates from the supernatant and pellet, respectively, obtained from membranes incubated at 37°C for 1 h in the presence of DAPT. Lanes 4 and 8 are immunoprecipitates from the supernatant and pellet, respectively, obtained from membranes incubated at 37°C for 1 h in the presence of L-685,458 (L685). Lane 9 is the mixture of synthetic A 40/42 , and lane 10 is the synthetic A 46 standard. Since 6E10 was used for both immunoprecipitation and Western blotting, the bands between the fAPP and CTF bands are the heavy chain (HC) and light chain (LC) of mouse IgG. The middle panel is the light exposure of the Western blot for observing the changes in the amount of CTF. Bottom panel, prior to centrifugation, aliquots of the reaction mixture were separated by 10 -18% regular SDS-PAGE and probed with C15, an APP C-terminal specific antibody (9), to detect AICD. As discussed in Fig. 3, the two CTF␣ bands correspond to the Myc-tagged recombinant CTF␣ and the endogenous CTF␣. Similarly, the two AICD bands correspond to the Myctagged recombinant AICD and the endogenous AICD. indicates that L-685,458 has no effect on the turnover of A 46 . It was noted that only a trace amount of detectable A 40/42 was immunoprecipitated from the supernatants (Fig. 2, lanes 2 and 4, upper panel). One possibility is that the secretion of A 40/42 from the living cells might be an energy-dependent procedure. Therefore, the secretion of A 40/42 from the membrane of a cell-free system is not as efficient as in the living cells. 
Notably, small amounts of A 46 , CTF, and full-length APP were also detected in these supernatants. It is possible that the A 40/42 , as well as other APP derivatives detected in the supernatants, may be associated with the low density membranes contained in the supernatant fraction. The other possibility is that once the A 40/42 is released from the membrane, or more precisely from the ␥-secretase complex, it might be rapidly degraded by the proteases released, during resuspension and incubation, from the membrane, which contains many kinds of protease-containing vesicles. In contrast, the A 40/42 , which still remains in the membrane or, more precisely, before being released from the ␥-secretase complex, was protected from the degradation. In this regard, it has been reported that the association of CTF with PS1 protects CTF from random degradation (17). This degradation of A 40/42 released from the membrane may also account for the fact that the amount of A 40/42 detected is smaller than expected, compared with the decrease in precursor A 46 .
A 46 Is Processed into A 40/42 in Living Cells in the Presence of L-685,458-As shown in Fig. 2, A46 was indeed processed into A 40/42 in a cell-free system. Data presented in Fig. 2 also clearly demonstrate that L-685,458, the transition state analog, did not block the turnover of A 46 into A 40/42 in a cell-free system, indicating that L-685,458 has no detectable effect on ␥-cleavage that produces A 40/42 from A 46 Fig. 1, at the specified concentrations, DAPM completely blocked the formation of secreted A 40/42 and caused marked accumulation of A 46 (Fig. 1, lane 18), and L-685,458 completely blocked the formation of A 40/42 and A 46 (Fig. 1, lane 4). After 12 h of incubation, L-685,458 (0.5 M) was added to the cells in Fig. 3A, lane 2. L-685,458 was also added to the cells in Fig. 3A, lanes 5 and 7, in addition to the existing DAPM, and was continuously incubated for 40 min to completely stop the generation of new A 46 in these cells, because at this concentration, L-685,458 blocked the formation of A 46 (Fig. 1, lane 4). Since L-685,458 has no effect on the turnover of A 46 (Fig. 2), this treatment also allows the complete turnover of the A 46 possibly existing in the cells of lane 2 of Fig. 3A. As a control, cells in Fig. 3A, lane 1, were cultured in the presence of Me 2 SO throughout Note, CTF and CTF␣ generated from exogenous APP, which is expressed with a Myc tag fused to its C terminus, were designated as CTF⅐myc and CTF␣⅐myc, respectively; CTF␣ generated from endogenous APP were designated as CTF␣(end), as described in a previous study (27) ␥-Cleavage Depends on -Cleavage the course of the experiment. All cells were then washed twice with fresh medium containing the appropriate inhibitor, which was to be used in the next incubation step, and cultured for an additional 2 h either in the presence or absence of inhibitors as indicated. A 40/42 was immunoprecipitated using 6E10 from CM of the last 2-h cultures. Both cell lysates and A 40/42 immunoprecipitated from CM were analyzed by 10% Bicine/urea-SDS-PAGE followed by Western blotting using 6E10.
As shown in Fig. 3A, 2nd panel, secreted A 40/42 was detected in cells cultured in the absence of any inhibitors throughout the course of the experiment (lane 1, 2nd panel). Secreted A 40/42 was also detected in cells cultured in the absence of inhibitors during the last 2-h incubation period, with a concomitant decrease in both CTF and A 46 (Fig. 3A, lane 4). As expected, secreted A 40/42 was not detected in cells cultured in the presence of L-685,458 either throughout the course of the experiment (Fig. 3A, lane 6) or during the last two incubation periods (40 min and 2 h) (lane 2). Also, secreted A 40/42 was not detected in cells cultured in the presence of DAPM throughout the course of the experiment (Fig. 3A, lane 3). However, when the DAPM was replaced by L-685,458 during the last 2-h incubation period, secreted A 40/42 was detected in the media, and concomitantly, the pre-accumulated A 46 disappeared (Fig. 3A, (Fig. 3A, lane 6, bottom panel) or during the last two incubation periods (40 min and 2 h) (lanes 2 and 5, bottom panel), indicating that L-685,458 prevented CTF from turnover. Therefore, the A 40/42 detected in Fig. 3A, lanes 5 and 7, should have been produced solely from the pre-accumulated A 46 by DAPM during the prior 12 h of culture in the presence of DAPM. This is also supported by the fact that without pre-accumulation of A 46 during the first 12 h of culture, A 40/42 was not detected in cells (Fig. 3A, lane 2) cultured in the presence of L-685,458 during the last two incubation periods (40 min and 2 h). Since both A 46 and CTF decreased, the secreted A 40/42 detected in Fig. 3A, lane 4, is apparently the sum of the A 40/42 produced from both pre-accumulated A 46 and CTF, and the CTF was most likely first converted to A 46 , and the resulting A 46 was further processed to A 40/42 . A small amount of AICD was also detected in cells cultured in the presence of DAPM throughout the course of the experiment (Fig. 3A, lane 3, bottom panel). This result further confirmed that the nontransition state inhibitor DAPM has less effect on the turnover of CTF by ⑀and -cleavages. As discussed below, the accumulation of CTF in the presence of DAPM is possibly the result of the partial inhibitory effect of DAPM on ⑀-cleavage or, alternatively, results from the accumulation of A 46 , which remains tightly associated with PS1 (Fig. 4) and which prevents CTF from accessing the ␥-secretase. It was noted that the amount of AICD detected was smaller than expected, compared with the decrease in CTF and CTF␣ in Fig. 4, lanes 1 and 4. This is likely because of the rapid degradation of this CTF fragment in living cells, as reported previously (18). Nevertheless, the detection of AICD in DAPM-treated cells clearly indicates that at the tested concentration DAPM has less effect on ⑀-cleavage, even though at the same concentration DAPM completely blocked the formation of secreted A 40/42 . In contrast, L-685,458 completely inhibits the formation of AICD and this is in agreement with a previous report (5).
To confirm further the finding that A 40/42 was produced solely from pre-accumulated A 46 in the presence of L-685,458, a time course experiment was performed. As shown in Fig. 3B, and as described in 1, 2, 6, and 7) for 12 h. Then L-685,458 (0.5 M) was added to the cells in Fig. 3B, lanes 2 and 7. In addition to the existing DAPM, L-685,458 was also added to the cells in Fig. 3B, lanes 5 and 10, and was continuously incubated for 40 min to stop completely the generation of new A 46 in these cells. As controls, cells in Fig. 3B, lanes 1 and 6, were cultured in the presence of Me 2 SO throughout the course of the experiment. All cells were then washed twice with fresh medium containing the appropriate inhibitor, which was to be used in the next incubation step, and were cultured for an additional 1 (Fig. 3B, lanes 1-5) or 2 h (lanes 6 -10), either in the presence or absence of inhibitors as indicated. By using 6E10, A 40/42 was immunoprecipitated from CM of the last 1-and 2-h cultures. Both cell lysates and A 40/42 immunoprecipitated from CM were analyzed by 10% Bicine/urea-SDS-PAGE followed by Western blotting using 6E10. As shown in Fig. 3B, lanes 1 and 6 (lower panel), in the absence of inhibitors, A 40/42 is apparently produced in a time-dependent manner during the last 1 (lane 1) and 2 h (lane 6) of culture. As shown in Fig. 3B, lanes 2 and 7, CTF accumulated in a time-dependent manner, but neither secreted A 40/42 nor intracellular A 46 was detected in the cells treated with L-685,458 during the last two incubation periods (40 min and 1 or 2 h). As shown in Fig. 3B, lanes 3 and 8, no A 40/42 was detected in the CM of cells treated with DAPM throughout the course of the experiment (lower panel). Instead, an accumulation of intracellular A 46 and CTF was observed in these cells (Fig. 3B, upper panel). In contrast, when DAPM was removed during the last 1 and 2 h of incubation, the accumulated A 46 and CTF, with concomitant increase in secreted A 40/42 , were decreased in a time-dependent manner (Fig. 3B, compare lane 9 with lane 4 of both upper and lower panels). More interestingly, when DAPM was replaced with L-685,458 during the last 1 and 2 h of incubation, the time-dependent decrease in pre-accumulated A 46 (Fig. 3B, compare lane 10 with lane 5, upper panel) and the concomitant increase in secreted A 40/42 (compare lane 10 with lane 5, lower panel) was also observed. It is notable that the accumulated CTF remained unchanged (Fig. 3B, compare lane 10 with lane 5, upper panel) during this time course. These results clearly indicate that the secreted A 40/42 detected in these cells was solely produced from the pre-accumulated A 46 . This conclusion is also supported by the observation that without the pre-accumulating A 46 , no secreted A 40/42 was detected in CM of cells cultured in the presence of L-685,458 during the last 1 and 2 h (Fig. 3B, lanes 2 and 7). The secreted A 40/42 detected in Fig. 3B, lane 9, is the sum of the A 40/42 produced from both accumulated A 46 and CTF.
The fact that the secreted A 40/42 is produced from A 46 in the presence of L-685,458, in both cell-free and living cell systems, clearly indicates that L-685,458 has no direct inhibitory effect on the ␥-cleavage. Therefore, the absence of secreted A 40/42 in cells treated with L-685,458, which blocks the formation of A 46 from CTF by -cleavage, indicates that A 40/42 cannot be generated directly from CTF by ␥-cleavage. In other words, formation of A 46 by -cleavage is an indispensable step during the course of ␥-secretase-mediated processing of CTF to produce A 40/42 . A 46 Is Associated with PS1-A previous study has shown that as a substrate of ␥-secretase, CTF forms a complex with PS1, which is the putative catalytic subunit of the ␥-secretase complex, at the sites of A formation (19). If A 46 is the precursor of A 40/42 , then A 46 , as an intermediate, may still be associated with PS1. To determine whether A 46 is still associated with PS1, the co-immunoprecipitation experiment was performed. As described under "Materials and Methods," lysates of cells treated with compound E, which causes the accumulation of A 46 , were first subjected to 800 ϫ g centrifugation to remove the unbroken cells and nuclei. The resulting postnuclear supernatant was subjected to further centrifugation at 20,000 ϫ g resulting in the supernatant, which contains the low density microsomal and cytosolic fractions (20), and the pellet, the crude membrane fraction containing the trans-Golgi network (TGN) and plasma membrane (21). As shown in the upper panel of Fig. 4A, A 46 was detected in the whole cell lysate (lane 2) and the crude membrane fraction of 20,000 ϫ g (lane 3), but not in the supernatant fraction of 20,000 ϫ g (lane 4). Most interestingly, as shown in the lower panel of Fig. 4A, PS1 was also detected in the whole cell lysate (lane 2) and the fraction of pellet at 20,000 ϫ g (lane 3), but not in supernatant at 20,000 ϫ g (lane 4), indicating that A 46 co-fractionates with PS1 into the crude membrane fraction. Therefore, as described under "Materials and Methods," after solubilization of the pellet fraction of 20,000 ϫ g, co-immunoprecipitation was carried out by using anti-PS1N, an antibody specific to the N terminus of PS1 (9). As shown in Fig. 4B, A 46 was indeed co-immunoprecipitated with PS1 from the crude membranes prepared from cells treated with compound E (lane 3) but not in cells treated with L-685,458 (lane 5), which inhibits the formation of A 46 from CTF and causes accumulation of CTF (Fig. 1). In agreement with the previous study (19), CTF was co-immunoprecipitated with PS1 in the L-685,458-treated cells (Fig. 4B, lane 5, lower panel). As controls, neither A 46 nor CTF was immunoprecipitated by pre-immune rabbit IgG (Fig. 4B, lanes 2 and 4). The observation that A 46 is tightly associated with PS1 in the TGN-containing membrane fraction is in agreement with the previous report that TGN is the major site for A formation (22).
Detection of the Possible A 49 -The data presented above clearly demonstrate that A 46 is an intermediate precursor of secreted A 40/42 . We next attempted to determine the possible presence of A 49 generated by ⑀-cleavage. Lysates of cells cultured in the absence of inhibitor were subjected to immunoprecipitation followed by Western blotting FIGURE 5. A, detection of A 49 . N2a cells stably expressing both PS1 and APPsw were cultured in the absence of inhibitor. Cell lysis, and immunoprecipitation were performed as described under "Materials and Methods." The immunoprecipitates (IP) were analyzed by 10 -18% regular SDS-PAGE followed by Western blotting using 6E10 (lanes 2 and 3). Note, IP* indicates that immunoprecipitation was carried out in the presence of DAPT. ␥-Cleavage Depends on -Cleavage using 6E10. As shown in Fig. 5A, A 46 was immunoprecipitated from untreated cells (lane 2). Most interestingly, when the immunoprecipitation was carried out in the presence of DAPT, in addition to the band of A 46 , a band with a slower migration rate was detected (Fig. 5A, lane 3). Possibly due to the lower concentration and the hydrophobicity, mass spectrometric analysis of this A species was unsuccessful. To estimate its molecular size, we synthesized three A peptides, A 46 , A 48 , and A 49 . As the new A species migrates at the same rate as that of the synthetic A 49 (Fig. 5A, lane 4), it is most likely the long sought after intermediate, A 49 , generated by ⑀-cleavage. This conclusion is also supported by the fact that the majority of AICD starts from A 50 as reported by previous studies (5)(6)(7)(8). It should be noted that under normal conditions, A 49 can only be detected after enrichment by immunoprecipitation carried out in the presence of DAPT, indicating its rapid turnover.
A 49 Is the Precursor of A 46 -Data presented in Fig. 3A clearly indicate that L-685,458 inhibits ⑀-cleavage that produces AICD. To determine further the effect of L-685,458 on the -cleavage, which produces A 46 , and the relationship between A 49 and A 46 , we created a construct, APPsw645, that expresses a C-terminal truncated APPsw ending at the ⑀-cleavage site A 49 . N2a cells, which stably express wild type PS1, were stably transfected with APPsw645. As shown in Fig. 5B, secreted A 40/42 was detected in the medium of cells cultured in the absence of any inhibitors (lane 2, middle panel). However, when the cells were treated with compound E, no secreted A 40/42 was detected in the medium with a concomitant accumulation of intracellular A 46 (Fig. 5B, lane 3, top panel). This result clearly indicates that formation of secreted A 40/42 from A 49 is also mediated by the formation of the intermediate A 46 . Most interestingly, when cells were treated with L-685,458, neither A 40/42 nor A 46 was detected (Fig. 5B, lane 4, top and middle panels), with concomitant accumulation of intracellular A 49 (lane 4, top panel). Given the fact that L-685,458 has no effect on the ␥-cleavage that produces A 40/42 (Figs. 2 and 3), this result indicates that the blockage of the formation of A 40/42 from A 49 , by L-685,458, is not due to inhibition of ␥-cleavage but rather due to inhibition of -cleavage that produces A 46 , which can be further processed into A 40/42 even in the presence of L-685,458. It was noted that a low amount of A 49 was also detected in cells cultured in the absence of inhibitors (Fig. 5B, lane 2, top panel). This may be a result of the lower efficiency of ␥-secretase processing because of the lack of AICD, which may be required for APP to efficiently initiate the interaction with the ␥-secretase complex. This inefficient interaction of A 49 with ␥-secretase complex may also account for the detection of the unprocessed A 49 secreted into the medium (Fig. 5B, lane 2, middle and bottom panels). In this regard, it was also noted that in the presence of inhibitors, specifically in the presence of L-685,458, a significant amount of unprocessed A 49 was detected in the media (Fig. 5B, lanes 3 and 4, middle and bottom panels). One possibility is that in the presence of these inhibitors, the ␥-secretase-bound A 49 or the intermediate A 46 occupies the binding site of ␥-secretase complex and prevents other unprocessed A 49 from binding to ␥-secretase complex, resulting in the secretion of these unprocessed A 49 into the media.
DISCUSSION
By using the combination of L-685,458 with compound E or L-685,458 with DAPT, we clearly demonstrated that the absence of Aβ46 in cells treated with L-685,458 is not due to its failure to block the turnover of Aβ46, but instead is due exclusively to its inhibition of the formation of Aβ46. Our data further demonstrate that, in both the cell-free system and in living cells, L-685,458 has no detectable effect on the turnover of Aβ46 under the current experimental conditions. A similar inhibition profile was also observed for 31C (data not shown), indicating that these inhibitors, known as transition state analogs, share the same inhibitory specificity, i.e. they specifically inhibit the formation of AICD by ϵ-cleavage and the formation of Aβ46 by ζ-cleavage, but have no effect on γ-cleavage, which produces secreted Aβ40/42 from Aβ46. The observation that L-685,458 has no effect on the turnover of Aβ46 is important because this made it possible to determine the following: 1) the precursor-product relationship between Aβ46 and Aβ40/42; 2) the key role of ζ-cleavage in the formation of Aβ; and 3) the sequential relationship among the three major intramembrane cleavages, namely the γ-cleavage, the ϵ-cleavage, and the newly identified ζ-cleavage.
By using the differential inhibition approach, our data presented in Figs. 2 and 3 clearly reveal the important finding that, in the presence of L-685,458, Aβ46 undergoes further γ-cleavage to produce secreted Aβ40/42, both in a cell-free system and in living cells. These results indicate that L-685,458 does not directly inhibit γ-cleavage. Therefore, the fact that inhibition of the formation of Aβ46 by L-685,458 also blocks the formation of Aβ40/42 from CTFβ (Fig. 3) and Aβ49 (Fig. 5B, lane 4) indicates that, without the formation of the intermediate Aβ46 by ζ-cleavage, Aβ40/42 cannot be directly generated from CTFβ or Aβ49 by γ-secretase, i.e. Aβ46 is the intermediate precursor of Aβ40/42. However, it cannot be totally ruled out that Aβ40/42 can be generated directly from CTFβ or Aβ49 by a distinct γ-cleavage, which is inhibited by L-685,458.
To confirm further the notion that A 46 is the intermediate precursor of A 40/42 or, in other words, A 46 is the intermediate product of the ␥-secretase-mediated proteolytic processing of CTF, we performed co-immunoprecipitation experiments and found that, as an intermediate product during the intramembranous processing by ␥-secretase, A 46 is indeed tightly associated with PS1. In agreement with the previous study (19), CTF was also co-immunoprecipitated with PS1 (Fig. 4B, lanes 3 and 5). It was noted that in compound E-treated cells, only a small amount of CTF was detected in the co-immunoprecipitate (Fig. 4B, lane 3). One possibility is that, in the presence of compound E, which prevents the turnover of A 46 into A 40/42 , the accumulated intermediate A 46 occupies the binding site of the ␥-secretase complex and thus prevents the further binding of CTF to the ␥-secretase complex and results in less CTF co-immunoprecipitating with PS1. This possibility is also supported by the observation that most of the accumulated CTF is not associated with PS1 but is detected in the subcellular fraction that is free of PS1 (Fig. 4A, lane 4). In contrast to the low amount of CTF co-immunoprecipitating with PS1 in L-685,458treated cells, the observation that a significantly high amount of A 46 was co-immunoprecipitating with PS1 in compound E-treated cells indicates that the complex formed between PS1 and A 46 is more stable than that formed between PS1 and CTF. The tight association of A 46 with PS1 in the TGN-containing membrane fraction, which has been reported to be the major site of A generation (22), provides further strong support for the notion that A 46 is an intermediate product formed during the ␥-secretase processing and that further turnover of A 46 must be dependent on a PS1-containing enzyme, i.e. most likely the same ␥-secretase. Taken together, these results indicate that once bound to presenilin, the initial substrate, CTF, and, specifically, the intermediate product, A 46 , are closely associated with presenilin until the release of the final product of secreted A 40/42 . Thus, all the data presented support our hypotheses that A 46 is the precursor of the secreted A 40/42 and that the -cleavage, which produces A 46 , plays a key role in A formation.
The fact that L-685,458 has no effect on the turnover of Aβ46 (Figs. 2 and 3) and that L-685,458 blocks the formation of Aβ46 from Aβ49 by ζ-cleavage (Fig. 5B), and also blocks the formation of AICD by ε-cleavage (Fig. 3), indicates that the transition state analog L-685,458 specifically inhibits both ε- and ζ-cleavages but has no effect on γ-cleavage. Moreover, the finding that L-685,458 does not directly inhibit γ-cleavage strongly supports an important notion that this inhibitor inhibits the formation of Aβ40/42 in living cells by a mechanism other than directly inhibiting the γ-cleavage, namely by indirectly inhibiting the formation of Aβ46 by ζ-cleavage. This idea is supported by the fact that Aβ46 is detectable in cells cultured in the absence of any inhibitor, indicating that ζ-cleavage must occur prior to γ-cleavage, i.e. ζ-cleavage is upstream of γ-cleavage. Therefore, the finding that inhibition of upstream ζ-cleavage by L-685,458, which does not directly inhibit γ-cleavage, completely prevents the downstream γ-cleavage from taking place strongly suggests that in living cells γ-cleavage not only occurs secondarily but is also dependent on ζ-cleavage occurring first. In this regard, the fact that the putative Aβ49, which contains the ζ-cleavage site at Aβ46, is detectable in living cells (Fig. 5A, lane 3) strongly suggests a possibility that ε-cleavage occurs prior to ζ-cleavage, otherwise the ε-cleavage product Aβ49 would not have had a chance of being formed. Moreover, our data clearly demonstrate that Aβ49 cannot be processed directly into Aβ40/42 by γ-cleavage. It has to be first processed into Aβ46 by ζ-cleavage and then the Aβ46 undergoes further processing by γ-cleavage to produce Aβ40/42 (Fig. 5B). Taken together, as illustrated in Fig. 6, our data strongly suggest the possibility that under normal conditions, after β- or α-cleavage of APP, the resulting CTFβ and CTFα first undergo ε-cleavage, followed by a sequential but rapid ζ-cleavage, and then by a γ-cleavage, commencing at the site closest to the membrane boundary and proceeding toward the site in the middle of the transmembrane domain of APP. Support for this sequential action model also comes from the notion that water molecules play an important role in the peptide bond hydrolysis catalyzed by a protease, and γ-secretase has been proposed to be an aspartyl protease (4). According to the catalytic mechanism of aspartyl proteases, in order to hydrolyze the peptide bond of the substrate, one of the two aspartate residues in the enzyme active site, disposed on opposite faces of the peptide bond to be cleaved, needs to first act as a general base to activate the water molecule. The activated water molecule then attacks and breaks the peptide bond, in cooperation with the second aspartate, which acts as a general acid to protonate the departing amine product. The ε-cleavage site is close to the membrane boundary and is easily accessed by water molecules in the cytosol. The initial ε-cleavage may not only release the AICD but may also create a path for the water molecule to have access to the next cleavage site, namely the ζ-cleavage site and then the γ-cleavage site. Accordingly, without removal of the three C-terminal residues from Aβ49 by ζ-cleavage, water molecules may not be able to access the γ-cleavage site, resulting in the prevention of γ-cleavage from taking place. Thus, the blockage of water access may account, at least in part, for the fact that γ-cleavage depends on ζ-cleavage occurring first.
Regarding the relationship between ε- and ζ-cleavages, as discussed above, one possibility is that ε-cleavage occurs before ζ-cleavage. However, since L-685,458 blocks both ε- and ζ-cleavages, and an inhibitor that specifically inhibits ε-cleavage or ζ-cleavage has not yet been identified, it cannot be ruled out that ε-cleavage and ζ-cleavage may also occur simultaneously. Nevertheless, even though it is currently not clear whether ζ-cleavage is dependent on ε-cleavage, the finding that the generation of Aβ40/42 from Aβ49 has to be mediated by the formation of Aβ46 by ζ-cleavage indicates that once ε-cleavage occurs it has to be followed by ζ-cleavage to produce Aβ46, which is then further processed into Aβ40/42 by γ-cleavage.
Regarding the catalytic mechanism of the γ-secretase, the finding that ε-/ζ-cleavages and γ-cleavage can be differentially inhibited by transition state analogs and nontransition state inhibitors, respectively, suggests several possibilities. First, these cleavages may be catalyzed by two enzymes, and second, these cleavages may be catalyzed by one enzyme that has two inhibitor binding sites, one for the transition state analogs, such as L-685,458, and the other for the nontransition state inhibitors, such as compound E, as suggested by a recent inhibitor binding kinetic study (23). The sequential relationship of these cleavages, and specifically the finding that γ-cleavage is dependent on ε- and ζ-cleavages occurring first, suggest that γ-, ζ-, and ε-cleavages are catalyzed by a single enzyme. The single enzyme model is further strongly supported by the fact that the intermediate Aβ46 is tightly associated with PS1, the putative catalytic subunit of the γ-secretase complex. The one enzyme model is also supported by the fact that both groups of the inhibitors have been shown to bind to presenilins (24-26). According to this model and the hypothesis that transition state analogs and nontransition state inhibitors bind to different sites (23), the transition state analogs may inhibit the initial cleavage, namely the ε-cleavage, by directly binding to the catalytic site. As a result, the downstream ζ- and γ-cleavages are also prevented from taking place. On the other hand, the nontransition state inhibitors may bind to a remote site of the enzyme and induce conformational changes in the enzyme, resulting in the preferential inhibition of γ-cleavage, with less effect on the ε- and ζ-cleavages, by preventing the γ-cleavage site from having access to the catalytic site of the enzyme. However, the one catalytic site model fails to account for the fact that Aβ46 is still processed to Aβ40/42 by γ-cleavage in the presence of the transition state analog (Figs. 2 and 3), which is assumed to bind the catalytic site (23). Therefore, the third possibility is likely that these sequential cleavages may be catalyzed by an enzyme that has two catalytic sites, one engaged in carrying out the ε- and ζ-cleavages and the other engaged in carrying out the γ-cleavage. Regardless of whether there is one or two catalytic sites, according to the one enzyme model, at high concentrations the nontransition state inhibitors, which preferentially inhibit γ-cleavage, may also inhibit ε- and ζ-cleavages by an allosteric mechanism, i.e. these compounds may induce conformational changes into the γ-secretase complex, resulting in partial inhibition of ε- and ζ-cleavages. The other possibility that may account for the accumulation of CTFβ in cells treated with nontransition state inhibitors is that in the presence of nontransition state inhibitors, which inhibit the turnover of Aβ46 into Aβ40/42, the accumulated intermediate Aβ46 occupies the binding site of the γ-secretase complex and prevents the further binding of CTFβ to the γ-secretase complex, resulting in accumulation of unprocessed CTFβ.
FIGURE 6. Schematic illustration of APP processing. L-685,458 specifically inhibits ε- and ζ-cleavages (-); DAPT strongly and preferentially inhibits γ-cleavage (O) and also inhibits ε- and ζ-cleavages at higher concentration (----). εP3 and ζP3 are the putative P3-related fragments produced from CTFα by ε- and ζ-cleavages, respectively. | 2019-03-22T16:16:37.822Z | 2005-11-11T00:00:00.000 | {
"year": 2005,
"sha1": "e2134de08d6564d0abb4484781aaa6316f11c35e",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/45/37689.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "553519fed4c4b9e32d20ce585637a31ffa50a5c2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
266944617 | pes2o/s2orc | v3-fos-license | Interpenetrating network hydrogels for studying the role of matrix viscoelasticity in 3D osteocyte morphogenesis
During bone formation, osteoblasts are embedded in a collagen-rich osteoid tissue and differentiate into an extensive 3D osteocyte network throughout the mineralizing matrix. However, how these cells dynamically remodel the matrix and undergo 3D morphogenesis remains poorly understood. Although previous reports investigated the impact of matrix stiffness in osteocyte morphogenesis, the role of matrix viscoelasticity is often overlooked. Here, we report a viscoelastic alginate–collagen interpenetrating network (IPN) hydrogel for 3D culture of murine osteocyte-like IDG-SW3 cells. The IPN hydrogels consist of an ionically crosslinked alginate network to tune stress relaxation as well as a permissive collagen network to promote cell adhesion and matrix remodeling. Two IPN hydrogels were developed with comparable stiffnesses (4.4–4.7 kPa) but varying stress relaxation times (t1/2, 1.5 s and 14.4 s). IDG-SW3 cells were pre-differentiated in 2D under osteogenic conditions for 14 days to drive osteoblast-to-osteocyte transition. Cellular mechanosensitivity to fluid shear stress (2 Pa) was confirmed by live-cell calcium imaging. After embedding in the IPN hydrogels, cells remained highly viable following 7 days of 3D culture. After 24 h, osteocytes in the fast-relaxing hydrogels showed the largest cell area and long dendritic processes. However, a significantly larger increase of some osteogenic markers (ALP, Dmp1, hydroxyapatite) as well as intercellular connections via gap junctions were observed in slow-relaxing hydrogels on day 14. Our results imply that fast-relaxing IPN hydrogels promote early cell spreading, whereas slow relaxation favors osteogenic differentiation. These findings may advance the development of 3D in vivo-like osteocyte models to better understand bone mechanobiology.
Dmp1-GFP reporter tracking
To follow the osteoblast-to-osteocyte transition during culture under osteogenic conditions using the Dmp1-GFP marker, expansion cells were seeded at a density of 3500 cells per cm² onto collagen-coated 20 mm glass coverslips in a 12-well plate. On days 1, 7, 14 and 28 in differentiation medium, three replicate wells were fixed with 4% paraformaldehyde (PFA, Thermo Scientific, 28908) for 10 minutes at 37 °C. At room temperature (RT), the samples were first blocked with 1% bovine serum albumin (BSA, Sigma-Aldrich, A4503) in PBS for 1 h, permeabilized with 0.2% Triton X-100 (Sigma-Aldrich, 93426) in PBS for 10 minutes, washed three times with PBS and finally, nuclei were stained with Hoechst 33342 at a dilution of 1:1000 in 1% BSA for 1 h. The samples were washed three times with PBS and then imaged on a Leica SP8 confocal laser scanning microscope (CLSM). In the microscope, the sample coverslips were mounted on a second, 30 mm glass coverslip and moisturized with a few drops of PBS. Per replicate, three 100 µm z-stacks were taken.
A CellProfiler pipeline was implemented to quantify the fraction of GFP-positive cells. Nuclei were first identified by Otsu thresholding, and segmentation of GFP-positive cells was performed with a manually set threshold. Cells were then classified as GFP-positive via the co-localization of GFP signal with each nucleus. The mean GFP intensity of GFP-positive cells was calculated per nucleus area, and cells with a mean intensity below 5 × 10⁻⁷ were filtered out.
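For readers who want a scriptable equivalent, the sketch below reproduces the same logic in Python with scikit-image. It is only an illustration of the steps described above, not the authors' CellProfiler project; the image inputs and the GFP threshold value are placeholders, while the 5 × 10⁻⁷ mean-intensity cut-off is taken from the text.

```python
# Illustrative re-implementation of the described GFP-positive cell classification.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def fraction_gfp_positive(nuclei_img, gfp_img, gfp_threshold=0.05, min_mean_intensity=5e-7):
    # 1) Identify nuclei by Otsu thresholding and label connected components
    nuclei_labels = label(nuclei_img > threshold_otsu(nuclei_img))
    # 2) Segment the GFP signal with a manually set threshold (placeholder value)
    gfp_mask = gfp_img > gfp_threshold

    positive = 0
    for region in regionprops(nuclei_labels):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        # 3) Classify via co-localization of GFP signal with this nucleus
        colocalized = gfp_mask[rows, cols].any()
        # 4) Mean GFP intensity per nucleus area, with the low-intensity filter
        mean_intensity = gfp_img[rows, cols].sum() / region.area
        if colocalized and mean_intensity >= min_mean_intensity:
            positive += 1

    n_nuclei = nuclei_labels.max()
    return positive / n_nuclei if n_nuclei else 0.0
```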
Alkaline phosphatase (ALP) assay and DNA assay
ALP and DNA sample preparation. A colorimetric alkaline phosphatase (ALP) assay kit (abcam, ab83369) was employed to test for the ALP activity as a marker for osteoblastic differentiation both in 2D differentiating cells and in 3D culture over the course of 28 days in total. As baseline measurement, 3 × 500,000 expansion cells (day 0) were harvested during splitting, washed, and resuspended in 250 µL ALP assay buffer. For 2D differentiation, 500,000 expansion cells were seeded per well of a collagen-coated 6-well plate and cultured under differentiation-inducing conditions. On days 7 and 14, cell samples were collected from three replicate wells each by trypsinization, washed and finally resuspended in 250 µL ALP assay buffer. For the assay of 3D culture samples, three replicate gels of 8 mm diameter were used per condition and collected on days 1, 7 and 14. Gels were washed once with NaCl/HEPES/CaCl2 and transferred into 250 µL ALP assay buffer. Immediately after collection, all samples were homogenized using autoclaved pellet pestles and the pestle motor, flash frozen in liquid nitrogen and stored at -80 °C until use.
ALP assay. The ALP assay was performed according to the manufacturer's instructions. Specifically, the samples were thawed on ice to prevent protein degradation, centrifuged at maximum speed at 4 °C for 15 minutes to remove any insoluble material and the supernatant collected in a new tube. The assay setup was planned for technical triplicates of each sample, duplicates of the standard dilutions and three background controls per 3D condition. Sample wells were loaded with 50 µL sample supernatant while background wells were loaded with 10 µL sample supernatant and 20 µL stop solution. The appropriate amounts of 5 mM p-nitrophenyl phosphate (pNPP) substrate solution and ALP assay buffer were added to all wells and the standard curve was prepared as instructed. The plates were incubated at RT in the dark for 1 h, then the reactions were stopped using 20 µL stop solution. Plates were shaken gently and absorption at 405 nm was measured on a Spark 10M plate reader (Tecan). The remaining sample supernatants were frozen at -20 °C to be used for DNA quantification.
To evaluate the ALP assay data, measurements were corrected by the blank as well as the background control (for 3D samples only). The technical replicates were averaged and quantified based on the standard curve. Finally, ALP activity was normalized by the DNA content determined in the DNA quantification assay as described next.
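As a concrete illustration of this evaluation (hypothetical function and variable names; the kit's standard-curve procedure is summarized here as a simple linear fit), the arithmetic is:

```python
import numpy as np

def alp_per_dna(triplicate_a405, blank, background, std_slope, std_intercept, dna_ng):
    """Blank/background correction, replicate averaging, standard-curve conversion,
    and normalization of ALP activity to DNA content (illustrative sketch only)."""
    corrected = np.asarray(triplicate_a405) - blank - background  # background: 3D samples only
    mean_a405 = corrected.mean()                                  # average technical replicates
    pnp_nmol = (mean_a405 - std_intercept) / std_slope            # quantify via standard curve
    return pnp_nmol / dna_ng                                      # normalize by DNA content

# Example with made-up numbers
print(alp_per_dna([0.42, 0.44, 0.43], blank=0.05, background=0.02,
                  std_slope=0.08, std_intercept=0.01, dna_ng=120.0))
```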
DNA assay. Frozen samples from the ALP assay were thawed and incubated at RT for 48 h. Then, DNA quantification was performed using the Quant-iT PicoGreen dsDNA assay kit (Invitrogen, P7589) following the manufacturer's protocol, and the plate setup was planned analogously to the ALP assay. In short, the standard dilution was prepared as instructed and loaded into a 96-well plate. Sample and background wells were loaded with 12.5 µL sample supernatant as well as the appropriate amount of TE buffer. 100 µL PicoGreen solution was added to sample and standard wells. The plate was shaken and incubated for 5 minutes in the dark. Finally, after shaking again, the emission at 535 nm was measured with an excitation at 485 nm on a plate reader. For DNA content quantification, the blank was subtracted from all measurements and the background control from each sample measurement. The technical replicates were averaged, quantified based on the standard curve and the initial amount of DNA per sample was computed.
Parathyroid hormone (PTH) treatment
PTH treatment was performed following the protocol described by Yang et al. 1 During media exchange on day 14 of 3D culture, new differentiation medium supplemented with either 50 nM PTH (bovine fragment 1-34, Sigma-Aldrich, P3671) or PBS as vehicle control was added. Culture at 37 °C and 5% CO2 was continued for 24 h until RNA was isolated as described below.
Gene expression by real-time quantitative PCR (RT-qPCR)
RT-qPCR sample preparation. Osteogenic marker gene expression was quantified on 2D predifferentiating cells, on 3D embedded cells, and to validate the cell response to PTH treatment. Three times 0.5 × 10⁶ cells were collected from 2D cultures of expansion cells and after 14 days of predifferentiation, and stored in RNAlater solution (Invitrogen, AM7020) at 4 °C. On days 1, 7 and 14 (control and after PTH treatment), six 8 mm hydrogels per condition were washed and stored in RNAlater at 4 °C. To obtain a sufficient amount of mRNA, two 8 mm hydrogels were combined for one analysis, resulting in three samples per condition.
RNA isolation. Once all samples had been collected, total RNA was isolated. RNAlater solution was removed by dilution and pelleting of cells in suspension, or by aspiration in the case of 3D hydrogels. For sample disruption and homogenization, samples were resuspended in 300 µL TRIzol reagent (Invitrogen, 15596026) and crushed using autoclaved pellet pestles on ice. Another 300 µL TRIzol were added, then samples were centrifuged at 8500 g for 30 seconds at RT and the supernatant was collected. RNA isolation was performed using the RNeasy Micro Kit following the manufacturer's instructions, using QIAShredder columns for additional homogenization and including DNA removal using the RNase-Free DNase kit as per the manufacturer's instructions. RNA was eluted in 30 µL RNase-free water and frozen at -20 °C.
cDNA synthesis by reverse transcription. RNA samples were thawed on ice and the RNA concentration measured by Nanodrop spectrophotometry. For cDNA synthesis, 240 ng of RNA were reverse transcribed in a 20 µL reaction using the PrimeScript RT Master Mix (TaKaRa, RR036A) following the manufacturer's instructions. In short, the required RNA eluate volumes were combined with 4 µL PrimeScript RT Master Mix and up to 20 µL RNase-free water. Reverse transcription was performed on a T100 Thermal Cycler (Bio-Rad) using the following protocol: 15 min at 37 °C, 5 s at 85 °C, hold at 4 °C. cDNA samples were then stored at -20 °C.
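A small worked example of the pipetting arithmetic implied here (the 240 ng input, 4 µL master mix and 20 µL total volume are from the text; the eluate concentration is a made-up value):

```python
def rt_reaction_volumes(rna_conc_ng_per_ul, rna_input_ng=240.0, total_ul=20.0, mastermix_ul=4.0):
    rna_ul = rna_input_ng / rna_conc_ng_per_ul   # eluate volume delivering 240 ng RNA
    water_ul = total_ul - mastermix_ul - rna_ul  # top up with RNase-free water
    return rna_ul, water_ul

print(rt_reaction_volumes(35.0))  # e.g. 35 ng/µL eluate -> (~6.9 µL RNA, ~9.1 µL water)
```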
RT-qPCR. The TaqMan Fast Universal PCR Master Mix (Applied Biosystems, 4352042) and TaqMan gene expression assays for selected osteogenic marker genes were used: Alpl (Applied Biosystems, Mm00475834_m1), Pdpn (Mm00494716_m1) and Dmp1 (Mm01208363_m1), with B2m (Mm00437762_m1) as reference gene. The 96-well PCR plate setup was designed using the gene maximization strategy and reactions were run in duplicate with no-template controls (NTCs). Working on ice, samples were diluted with nuclease-free water and supermixes were prepared by combining TaqMan gene expression assays with PCR Master Mix according to the manufacturer's calculations. After sequentially adding samples and supermixes, the plate was sealed using Microseal sealing film and the solutions mixed by pulse centrifugation (3 × 10 s). RT-qPCR was performed on a CFX96 Real-Time C1000 Touch Thermal Cycler (Bio-Rad) following the manufacturer's protocol: 1) 20 s at 95 °C, 2) 1 s at 95 °C, 3) 20 s at 60 °C, 4) back to step 2 for 44 cycles, 5) hold at 4 °C. The Cq measurements were exported, and relative gene expression was quantified implementing the 2⁻ΔΔCq method as described by Livak and Schmittgen (2). Fold changes were calculated relative to the lowest detected measurement and normalized by the reference gene, assuming 100% amplification efficiency.
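For clarity, the 2⁻ΔΔCq calculation referenced here can be written out in its standard Livak-Schmittgen form (B2m is the reference gene named in the text; "calibrator" denotes the lowest detected measurement used as the fold-change baseline):

\Delta Cq = Cq_{\text{target}} - Cq_{\text{B2m}}, \qquad \Delta\Delta Cq = \Delta Cq_{\text{sample}} - \Delta Cq_{\text{calibrator}}, \qquad \text{fold change} = 2^{-\Delta\Delta Cq}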
Alizarin Red S staining
Cryosections with 20 µm thickness were prepared as for OsteoImage staining and stained for 1 minute in a solution of 2 mg mL⁻¹ Alizarin Red S (Sigma-Aldrich, A5533). They were then washed in milliQ water, dehydrated in acetone, cleared in xylene and finally mounted with DPX mounting medium (Sigma-Aldrich, 06522). Slides were scanned using the Slide Scanner Panoramic 250 (3DHISTECH).
3D live-cell calcium imaging
For 3D Ca²⁺ imaging, idenTx 3 chips (AIM Biotech, DAX-1) were used for both chemical and mechanical stimulation. IDG-SW3 cells were pre-differentiated for 14 days and embedded on-chip at a final density of 4.5 × 10⁶ cells mL⁻¹. The central gel channels were filled with 10 µL hydrogel solution and collagen was allowed to crosslink for 20 minutes at 37 °C and 5% CO2. Then, calcium crosslinking solution was added into each media channel for alginate crosslinking during another 30 minutes of incubation. Finally, the crosslinking solution was replaced with CaCl2-supplemented differentiation medium and cell culture was resumed with media changes every other day.
TRPV4 agonist. For chemically induced Ca²⁺ signaling on day 7, media channels were washed once with NaCl/HEPES/CaCl2 buffer and samples were stained for 1 h at 37 °C using the same Fluo-4 AM solution as described for 2D Ca²⁺ imaging. Thereafter, gels were washed with HBSS containing Ca²⁺ for 30 minutes at 37 °C. For the activation of intracellular Ca²⁺ signaling, GSK1016790A (Cayman Chemicals, 17289) was used as TRPV4 agonist and prepared at a concentration of 20 µM in HBSS. At the Leica SP8 CLSM, 12 µm xyzt-stacks were recorded every 3 seconds for 10 minutes. After 1 minute of static baseline condition, TRPV4 agonist solution was manually added by emptying the reservoirs with a tissue paper while injecting the agonist solution into both reservoirs on one side. A pressure gradient between the two media channels was kept by varying the fill volume in order to facilitate the agonist diffusion into the gel channel.
3D fluid flow. In a second experiment using fluid flow stimulation, the gels cultured on-chip for 7 days were additionally treated with alginate lyase (Sigma-Aldrich, A1603) to increase the gel porosity by enzymatic degradation of alginate. This was necessary to facilitate 3D gel perfusion. A 500 U mL⁻¹ alginate lyase stock solution was prepared in milliQ water and added to the Fluo-4 AM staining solution at 1:500. The chips were incubated with this solution for 1 h at 37 °C and 5% CO2 and subsequently washed with HBSS containing Ca²⁺ for 30 minutes at 37 °C. At the Leica SP8 CLSM, two syringe pumps were connected to both openings of a single media channel, while both openings of the second channel were used as outlets into a waste container. After 1 minute of static baseline, flow was initiated at a rate of 100 µL min⁻¹ (50 µL min⁻¹ from each inlet) and 12 µm xyzt-stacks were recorded every 3 seconds for 5 minutes.
The time series images were processed as for 2D Ca²⁺ imaging and intensity values for plotting were normalized by subtracting the first value at baseline. | 2024-01-12T16:18:01.952Z | 2024-01-10T00:00:00.000 | {
"year": 2024,
"sha1": "59f5314110a4c2721ecf397d92f20db102471856",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/bm/d3bm01781h",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7dd103f42d002eaa4d63ebd37adc6d41893ea946",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21710561 | pes2o/s2orc | v3-fos-license | Self-organizing layers from complex molecular anions
The formation of traditional ionic materials occurs principally via joint accumulation of both anions and cations. Herein, we describe a previously unreported phenomenon by which macroscopic liquid-like thin layers with tunable self-organization properties form through accumulation of stable complex ions of one polarity on surfaces. Using a series of highly stable molecular anions we demonstrate a strong influence of the internal charge distribution of the molecular ions, which is usually shielded by counterions, on the properties of the layers. Detailed characterization reveals that the intrinsically unstable layers of anions on surfaces are stabilized by simultaneous accumulation of neutral molecules from the background environment. Different phases, self-organization mechanisms and optical properties are observed depending on the molecular properties of the deposited anions, the underlying surface and the coadsorbed neutral molecules. This demonstrates rational control of the macroscopic properties (morphology and size of the formed structures) of the newly discovered anion-based layers.
A scheme to rationalize the retention of charge during ion soft landing is shown in Supplementary Figure 1. Note that deposition of cations on bare metal surfaces results in neutralization. In contrast, deposition on an insulating surface results in buildup of electric fields that prevent further ion deposition at high coverages. Larger amounts of ions that retain their charges can be deposited onto insulating SAMs on top of an underlying conductive surface. The landing of a cation on such a substrate induces a mirror charge in the gold that is grounded through a picoammeter. The situation can be described as loading a two-plate capacitor. If the layer of soft landed ions and mirror charges is approximated by uniformly charged round plates with radius R separated by a distance d (Supplementary Figure 1c), the force F on an ion approaching the center of the plate at distance D may be expressed using equation S1 (ref. 1). The electric field above the center of the plates vanishes for infinitely large plates, and, therefore, becomes extremely small for large capacitors (very large R/d ratio). However, at the borders, side effects are present that result in larger electric fields that may lead to repulsion between the deposited ions of the same polarity. Therefore, such side effects tend to be minimized by forming a smooth layer instead of many "islands" of accumulated ions (i.e. formation of one large capacitor with uniform plates vs. many small capacitors in the "island model").
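The extracted text does not reproduce equation S1 itself. As an illustration only, one form consistent with the geometry and the limits described here (superposition of two oppositely, uniformly charged discs of radius R separated by d, evaluated on the axis at height D above the ion layer, for an incoming ion of charge q) would be

F(D) = \frac{q\sigma}{2\varepsilon_0}\left[\frac{D+d}{\sqrt{(D+d)^2+R^2}} - \frac{D}{\sqrt{D^2+R^2}}\right],

where \sigma is the surface charge density of the deposited ion layer. This expression vanishes as R \to \infty, matching the statement that the field above the center of very large plates becomes negligible; the actual published form of equation S1 may differ in detail.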
High coverages of cations usually resulted in discharge of the ions. This breakdown of the capacitor was explained by the buildup of an electric field across the insulating layer that is strong enough to allow electrons to pass from the gold surface through the SAM. 2 The neutralization is energetically favorable since electron attachment to cations is a spontaneous process. Recent studies showed that highly electronically stable anions stay charged after deposition (see the main text). The deposition of higher amounts of anions may be possible because excess electrons can be bound to anions by several eV. In addition, electron loss from multiply charged anions is hindered by a repulsive Coulomb barrier. This may result in the build up of larger potentials. A detailed examination of the physical situation is beyond the scope of this study, but we present data that clearly show that most of the deposited ions are not neutralized (Supplementary Figure 8 and 9).
Supplementary Note 2: Bright field image of deposition spot before dewetting
The optical appearance of a deposited spot before the dewetting process starts is shown in the following image. At this stage, the layer is invisible in the dark field. For [B12Cl12]²⁻, dewetting usually starts too soon to map the surface. Therefore, an image of a [B12Br12]²⁻ layer is shown, which is similar in optical appearance to [B12Cl12]²⁻ before dewetting.
Supplementary Note 3: AFM investigation of layers in the initial state of dewetting
The initially smooth morphology of the [B12Cl12]²⁻-based layer was demonstrated by acquiring AFM images of a freshly prepared surface directly after its exposure to air. Supplementary Figure 3 demonstrates that the layer surface is much smoother than the underlying substrate (FSAM on gold-coated Si). The calculated RMS roughness is 2.30 nm for the bare FSAM surface and 0.44 nm for the layer, respectively.
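For reference, the RMS roughness values quoted here correspond to the standard definition used by common AFM analysis software (stated for clarity; the exact implementation used by the authors is not specified in the text):

R_q = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(z_i - \bar{z}\right)^2},

where z_i are the N height values of the image and \bar{z} is their mean.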
To estimate the thickness of the layers, we also performed AFM measurements on layers during dewetting. Supplementary Figure 4 shows line profiles obtained over the borders of growing holes. Supplementary Figure 5 shows the development in height of these borders during the process of continuous hole expansion and merging of two borders.
Supplementary Note 4: Thickness dependency of self-organized structures
An increase in droplet size and size of the formed pattern depending on the amount of deposited [B12Cl12]²⁻ ions was found.
Supplementary Figure 6. Comparison of the droplet arrangement after dewetting of layers formed by soft landing of different amounts of [B12Cl12]²⁻, showing substantially larger holes at higher coverages with droplets arranged at the edges.
Supplementary Note 5: IR investigations
Layers prepared by soft-landing of [B12X12]²⁻ ions (X = F-I) onto SAMs were analyzed using in-situ infrared-reflection-absorption spectroscopy. The assignment of IR bands to adventitious hydrocarbons that bind to the surface is supported by comparison of the spectra for different X (Supplementary Figure 10). The growth of the hydrocarbon signals was found to be correlated to the growth of the ion signal: Slower deposition (lower current) did not result in a significant change of the signal ratios. An increase in the hydrocarbon bands in proportion to the ion signals was detected during deposition, see Supplementary Figure 15, in agreement with the literature. 4
We note that an additional lower intensity ¹¹B NMR signal (+18 ppm) was detected during our investigation of the dissolved [B12Cl12]²⁻ layer. This chemical shift corresponds well to the ¹¹B chemical shift of the boric acid standard used as a reference, but species such as (RO)2BX (X = halogen) have also been observed at similar ranges of the chemical shift. The additional resonance may have formed as a decomposition product over time in these samples. The dissolved layers were stored prior to NMR investigations for 9 months. The [B12I12]²⁻ layer was stored prior to NMR investigations only for 3 weeks and showed the same signal with more than an order of magnitude weaker relative intensity. All other analytical methods (like XPS), which have been performed in the time frame of minutes to days after deposition, did not indicate the presence of any other boron-containing molecules.
Supplementary Note 9: XRD-analysis
In an effort to obtain information on long-range order, micro-XRD investigations on droplets were performed. No polycrystalline order was detected in the droplets by XRD. We did not detect any peaks that may correspond to the material in the 5-100 degrees 2-theta range (using Cr Kα radiation; 2.2897 Å). This observation further supports the liquid nature of the material.
Supplementary Note 10: Morphology of the [B12F12]²⁻ layer and droplets
The morphology of layers generated by soft-landing of
Supplementary Note 11: Vacuum drying
We performed vacuum drying of a [B12Cl12]²⁻ layer after the initial dewetting stage. Straight circular holes as shown in Figure 3 in the manuscript appeared in the layer prior to the process. Subsequent storage for one week in the soft landing instrument under vacuum resulted in the deliquescence of the straight hole borders, showing that the driving force for the dewetting process has been removed. The hole borders did not recover, but instead new holes were formed in the areas of intact layer after some time under ambient conditions.
Supplementary Note 14: Phthalate composition
To investigate the reproducibility of the obtained results, layers were generated after the soft landing instrument was taken apart and cleaned, seals replaced, a pump repaired and the pump oil changed. Although the composition of the phthalates changed in terms of relative ratios (see Supplementary Figure 25a), the macroscopic behavior was well reproduced (Supplementary Figure 25b). The [B12Cl12]²⁻ layers showed the previously described dewetting while [B12I12]²⁻ layers were stable during this time frame. It was also possible to substitute the distribution of phthalates by one defined phthalate, see Supplementary Note 15.
Supplementary Figure 25. a) ESI, positive mode, mass spectra of a [B12Cl12]²⁻-based layer generated after significant changes to the instrument and pumps were performed. The layer behavior is well reproduced, as shown in b).
Supplementary note 15: Codeposition of a defined phthalate
By introduction of a liquid phthalate into the soft-landing instrument with a base pressure of 10⁻⁸ mbar and heating of the reservoir to 80-120 °C, we could substitute the distribution of phthalates almost completely with pure diisodecyl phthalate. The base pressure increased during the experiment by one to two orders of magnitude. A residual gas analyzer mass spectrum is shown in Supplementary Figure 26, which shows typical signals of diisodecyl phthalates in the measurable mass range (see inserted reference spectrum from the NIST Chemistry WebBook for comparison). Supplementary Figure 27 shows mass spectrometric analysis of the layer and optical microscopy of the dewetted layer.
Experiment 1: Partial substitution
For partial substitution of the adventitious hydrocarbons with glycine, we put solid glycine powder on a metal flange next to the gold surface to be used for deposition and heated it up in the vacuum chamber to 70 °C. The base pressure of 8 × 10⁻⁵ Torr determined by an ion gauge did not increase measurably. IR spectra measured before ion deposition showed that glycine is not deposited in any detectable amounts via chemical vapor deposition on the surface under these conditions. However, during ion deposition, the IR spectrum changed considerably compared to experiments without glycine. Still, the IR bands of the adventitious hydrocarbons were observed, see Supplementary Figure 28c. However, new IR bands attributed to glycine were clearly present in the spectrum.
Experiment 2: Full Glycine substitution
For full substitution of glycine we performed ion soft-landing in another apparatus with a lower base pressure (1 × 10⁻⁸ mbar). A glycine reservoir was heated to roughly 200 °C and the vapor was introduced into the deposition chamber via a heated leak valve. The reservoir was heated slowly in the timeframe of hours up to this temperature because only at 200 °C could glycine be detected by a residual gas analyzer attached to the main chamber (increased m/z 30 signal, the predominant species in the electron impact spectrum of glycine). This was accompanied by a pressure increase of two orders of magnitude in the deposition chamber. The experimental setup is similar to previously described instruments. 5 The
Supplementary Note 17: Electron beam induced tips analysis before and after methanol washing.
Electron beam induced tips were imaged using atomic force microscopy (AFM) in contact mode for the initial state and tapping mode for the washed state. A single tip was chosen for highresolution analysis, shown in Supplementary Figure 29. Because the tip is asymmetric, line profiles across the tip were performed in two directions, along the widest and narrowest dimensions of the post-washed tip. The position of the profile was chosen to maximize the tip height. The asymmetry of the features is not likely an AFM imaging artifact due to the relatively small aspect ratio, especially for the initial state where the asymmetry is more prominent.
Supplementary Note 18: Binding of water and diisodecylphthalate to [B12Cl12]²⁻
The binding energy of a diisodecyl-phthalate molecule and a water molecule was estimated using the DFT method B3LYP/def2-tzvppd 6 including dispersion forces 7 . We note that our investigation does not include all possible conformations of the organic alkyl chains of the phthalates. The binding energies should be considered as rough estimations. The optimized geometries are shown in Supplementary Figures 30-32
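The text does not spell out the working definition of the binding energy; such DFT estimates are normally obtained as the supermolecular difference (given here only as the presumed convention, without counterpoise correction):

\Delta E_{\text{bind}} = E_{\text{complex}} - E_{[\mathrm{B_{12}X_{12}}]^{2-}} - E_{\text{ligand}},

with the ligand being either a diisodecyl phthalate or a water molecule.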
Supplementary Note 19: Correlation between phthalate contact angle and final morphology on surfaces
Droplets of diisodecyl-phthalate (0.5 µL) were placed on 3 different surfaces. As can be clearly seen from Supplementary Figure 33a, phthalates have a high contact angle on FSAM, a smaller contact angle on HSAM and wet the surface to a considerable extent on the unmodified gold surface. This behavior can be clearly correlated to the final stage of the layer after long exposure to environmental conditions (see Supplementary Figure 33c). On FSAM droplets are formed, on HSAM dewetting occurs, but no comparable free surface areas are formed and on an unmodified gold surface, the layer showed no visible morphological change.
Supplementary Figure 33
Supplementary Note 20: Binding of water to [B12F12]²⁻ and [B12I12]²⁻ in comparison.
Supplementary Figure 34 schematically shows the disposition of a water molecule in close contact with the halogen shell of [B12F12]²⁻ and [B12I12]²⁻. The overall orientation of water molecules is governed by dipole-ion interactions between the water dipole and the negative charge of the boron clusters. This leads to the favorable attractive interaction of the water hydrogens with the negative fluorine shell of [B12F12]²⁻, which is responsible for the strong binding of water (Fig. S29a). In contrast, such favorable interaction is not possible with the slightly positive iodine atoms of [B12I12]²⁻. As a result, the water molecule is bound in a staggered orientation and at a longer distance from the ion, reducing the ion-dipole interaction. | 2018-05-15T13:16:42.974Z | 2018-05-14T00:00:00.000 | {
"year": 2018,
"sha1": "fb144bca6dbff5af910080c00ae01416a03baf2a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-018-04228-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb144bca6dbff5af910080c00ae01416a03baf2a",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
17410319 | pes2o/s2orc | v3-fos-license | Concurrent auditory perception difficulties in older adults with right hemisphere cerebrovascular accident.
BACKGROUND
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and of auditory organization in auditory scenes with competing sounds.
METHODS
Concurrent auditory segregation using competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared in 30 male older adults (15 normal and 15 cases with right hemisphere CVA) in the same age groups (60-75 years old). For the CST, participants were presented with target message in one ear and competing message in the other one. The task was to listen to target sentence and repeat back without attention to competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically and the task was to repeat those.
RESULTS
Comparing the mean scores of the CST and DDT between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT).
CONCLUSION
The present study revealed that abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low level segregation mechanisms and/or high level attention mechanisms might contribute to the problems.
Introduction
Cerebrovascular accident (CVA) is one of the most important and common disorders in older adults. This disorder needs special attention because of its high prevalence, symptoms (e.g. imbalance, confusion, and speech perception problems), and death toll. In western countries, strokes including CVA are the third most common cause of death and the most common cause of disabling neurologic damage. CVA is much more common among older people than among younger adults, usually because the disorders that lead to CVA progress over time (1). These problems may have significant effects on speech processing and communication in patients with CVA (1). Furthermore, these patients initially do not have any complaints related to auditory processing, and common auditory assessments (audiometry and tympanometry) do not reveal any related problem. Central auditory tests are cost-effective and very beneficial for the assessment of complex auditory functions (2). In order to evaluate concurrent auditory segregation, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds (4), in patients with CVA, we could use specific tests of the central auditory system, including the competing sentence test (CST) and the dichotic digits test (DDT), which use real-world speech stimuli (5). These tests have an important role in assessing auditory segregation since dichotic auditory conditions are special auditory scenes.
Several studies have used the CST to show functional impairment in the brain due to left-hemisphere CVA (2). Some researchers conducted the CST in patients with cerebrocranial injuries (CCI) and subjects with CVA (right and left hemispheres) and reported abnormal responses in both groups (6). In another study, the DDT was used to monitor auditory function improvement in a woman with left-hemisphere CVA involving Heschl's gyrus. Results indicated 67% improvement in DDT responses after 12 months (7).
In the present study, we particularly examined the effects of right-hemisphere CVA on auditory segregation. In other words, the study focused on identification of auditory segregation and related perception difficulties in patients with right hemisphere CVA. It has been found that frontal and parietal areas of the right hemisphere strongly play a role in many cognitive functions, including attention and working memory. These functions have important roles in other higher-level processes such as auditory perception in older adults (8). In the present study, all of the patients showed impairment of parietal and frontal lobes in their MRI results. Thus, it was hypothesized that central auditory processing and auditory segregation could be affected. The primary objective of this study was to highlight and reveal hidden auditory impairments (i.e. auditory segregation problems) of patients with right hemisphere CVA using simple central auditory tests (CST and DDT).
Participants
In this study, we assessed 30 male older adults (15 normal controls and 15 with right hemisphere CVA as the test group) in the same age range (60-75 years old). All of them were right-handed and Farsi-speaking. Both groups had a similar education level (diploma or higher). All of the participants had the same pure tone average (PTA) and speech recognition threshold (SRT), i.e. lower than 25 dBHL. Participants with ear infections, epilepsy and neurologic disorders such as Alzheimer's disease, head trauma, low cooperation, and speech problems were excluded. None of the patients showed any signs of aphasia (as judged by a speech and language pathologist) or of dementia as revealed by the Mini-Mental State Examination (MMSE) (9). The Persian version of the MMSE (10) was conducted for screening of cognitive impairments including dementia. Both groups had average IQ (11). Cases with CVA were recruited from the neurology clinic at the University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran. All of the patients showed impairment of the right parietal and frontal lobes in their MRI results. The patients reported stable symptoms and were in the rehabilitation phase (approximately 5 months after discharge). This study was conducted in the Audiology Department of USWR between 2012 and 2013. Normal older adults were relatives of the recruited patients. Both groups provided informed written consent in accordance with the Helsinki Declaration. We received approval for the research from the USWR Review Board of Ethics.
Hearing assessments
The instruments used in this study were an acoustic immittance device (Interacoustics, AZ7), a two-channel audiometer (Amplaid, 311), a CD player with microphone input, and a compact disc (CD) containing the Persian versions of the CST and DDT. A case history was taken precisely for all of the participants to rule out any confounding factors. In the next step, middle ear function was assessed with acoustic immittance and acoustic reflex threshold tests. If the middle ear showed normal function, the patient would pass to the next step. Then, we conducted audiometry to measure auditory thresholds at all of the audiometric frequencies (125, 250, 500, 1000, 2000, 4000, and 8000 Hz). In this study, all of the participants had auditory thresholds ≤ 25 dBHL. Additionally, we measured the speech recognition threshold (SRT) and speech discrimination score (SDS) with spondaic and monosyllabic words, respectively. If the patient's SRT and PTA were approximately the same (maximal difference ± 8 dBHL) and the SDS result was between 80 and 100%, he would pass the criteria (2).
Central auditory assessments and task
CST and DDT were conducted for all eligible participants. The CST involves the simultaneous presentation of dichotic sentences of similar duration, word length (six to eight words), and semantic content (12). We conducted the Persian version of the CST (2). We presented the target message at 35 dBSL (relative to the SRT and the PTA at 500, 1000, and 2000 Hz) and the competing message at 50 dBSL over TDH-49 headphones. Therefore, the signal-to-competition ratio (SCR) was -15 dB. We instructed the participant to listen to the target sentence and repeat it back without attending to the competing sentence presented to the other ear. Participants were presented with 3 sample stimuli prior to data collection to familiarize them with the task and the response. Then, ten paired sentences were presented to each ear, each worth 10 points. If the participant did not repeat back a word, points were subtracted. All of the participants responded to stimuli in an open set.
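Written out, the signal-to-competition ratio follows directly from the two presentation levels: SCR = L_target - L_competing = 35 dBSL - 50 dBSL = -15 dB.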
The DDT entails presentation of two-digit pairs (from 1 to 10, excluding 4) to each ear simultaneously, and the listener is asked to repeat the numbers presented to both ears in any order (free recall). The test consists of 20 digits per ear, presented at 50 dBSL over TDH-49 headphones (13). We conducted this test and scored, in percent, all of the digits repeated correctly (14). For the DDT, each digit was worth 2.5% of the score (15). Participants did not receive any feedback on their performance. They were presented with sample stimuli prior to data collection to familiarize them with the task and the response.
Statistical analysis
The Shapiro-Wilk test was conducted to assess the normality of the data distribution using SPSS (version 16). We compared the data of CVA patients and normal subjects using an independent-samples t-test. p<0.05 was considered statistically significant.
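A minimal script of this analysis for illustration (the authors used SPSS 16; SciPy is shown here as an equivalent, and the score arrays are placeholders, not the study data):

```python
from scipy import stats

cva_scores     = [62, 55, 70, 48, 66, 58, 52, 60, 64, 50, 57, 61, 53, 59, 49]    # placeholder %
control_scores = [96, 98, 94, 100, 92, 97, 99, 95, 98, 96, 93, 100, 97, 94, 98]  # placeholder %

# Shapiro-Wilk test for normality of each group
for name, scores in [("CVA", cva_scores), ("control", control_scores)]:
    w, p = stats.shapiro(scores)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

# Independent-samples t-test between the two groups
t, p = stats.ttest_ind(cva_scores, control_scores)
print(f"t = {t:.2f}, p = {p:.4g} (significant if p < 0.05)")
```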
Normal results
We considered CST and DDT scores in normal controls. CST scores were between 92% and 100% (97.06% ± 3.19). We compared the mean score of the CST in the right ear between CVA patients with right hemisphere impairment and normal subjects. Results represented no statistically significant difference (p=0.508). It is believed that the right hemisphere impairment did not affect the right ear score. In contrast, comparing left ear scores between these groups showed a statistically significant difference (p=0.001). This result indicated that the ear contralateral to the impaired hemisphere could be affected.
Comparing mean score of DDT in the right ear between CVA patients with right hemisphere impairment and normal persons indicated statistically significant difference (p<0.0001). Also, comparing left ear scores indicated this result (p<0.0001). Table 1 represents these results.
Discussion
The primary objective of our study was to assess concurrent auditory perception in patients with right hemisphere cerebrovascular accident. It is believed that right hemisphere, fronto-parietal localization, strongly plays a role in cognitive function. It is proposed that this hemisphere is one plausible candidate for mediating networks for arousal, novelty, attention, awareness, and working memory, which collectively provide for a set of additional, cognitive, mechanisms that help the brain adapt to age-related changes including auditory attention and perception (8). Any disorder of this localization causes to attention and memory dysfunctions. In the present study, all of the patients showed impairment of parietal and frontal lobes in their MRI results. Thus, it was premised that auditory attention and segregation could be affected.
An important aspect of the study was that we used real-world sounds (speech) and involved many of the complications associated with using these sounds such as the activation of expertise-related processes. Thus, such high-level processes can be studied with relative ease. In the present study, we used two central auditory tests (CST&DDT) to measure concurrent auditory segregation. For the CST, participants were presented with target message in one ear and competing sentence to the other ear. In this test, the participants' task is to listen and attend to the target message and segregate it from simultaneous competing sentence. For the DDT, participants were presented dichotically to monosyllabic digits and their task is dividing attention to both ears, segregating all of the presented digits, and recalling the digits in free order. Findings from the previous studies suggest that older adults have difficulty following a conversation especially in noisy listening situations (e.g., cocktail party) where the task-relevant signal is embedded in multitalker babble (16)(17)(18)(19). Inherent to solving the 'cocktail party problem' is the ability to segregate and identify concurrent sounds.
In the present study, it was noted that normal older adults showed normal scores in CST and DDT, but participants with right hemisphere CVA presented abnormal scores in these assessments. These results suggested that normal older adults have normal function in listening to dichotic auditory stimuli and segregating those from competing sounds while older adults with right hemisphere CVA show impairments in this function. We assume the possibility that low level segregation mechanisms and/or high level attention mechanisms might contribute to the problems. To explain these results, Cusack et al. (20) proposed a hierarchical model of stream segregation. According to this model, preattentive mechanisms segregate streams based on acoustic features (e.g., Δf) and attention-dependent buildup mechanisms further breakdown outputs (streams) of this earlier process that are attended to. Consistent with this model, Snyder et al. (21) provided event-related potential (ERP) evidence for at least two mechanisms contributing to stream segregation: an early preattentive segregation mechanism and an attention-dependent buildup mechanism. These studies have supported a gain model in which attention to a target stream enhances neural processing of sounds within that stream while suppressing unattended streams. Since CST and DDT also measure auditory attention functions (5), we think that auditory segregation difficulties might be related to attention problems. In the present study, cases with right hemisphere CVA obtained low scores in both central auditory tests indicating that simultaneous auditory segregation is possibly modulated by attention and suggesting the involvement of high-level factors in concurrent auditory perception. Snyder et al. (22) assumed that top-down mechanisms within central auditory areas, multimodal pathways, and/or bottom-up mechanisms in peripheral areas (such as cochlea and low level auditory brainstem ) might be contributed in auditory segregation. An ERP study conducted by Alain and Woods (23) showed that selective attention to a stream facilitated early sensory processing (sensory gating) of that stream and inhibited processing of unattended streams. Since all of the patients with right-handed CVA showed impairment of parietal and frontal lobes in their MRI results, it was indicated that these regions possibly are some sources of auditory attention (auditory selective and divided attention). Sarter (24) studied divided attention process in a group of old adults and patients with dementia. In this study, neuropsychological measurements and functional imaging represented active dorsolateral and ventrolateral areas of cortex; prefrontal, cingulated, parietal, and premotor when modulating dual activities.
Our results (DDT scores) indicated that right hemisphere CVA mostly affects concurrent auditory segregation followed by auditory divided attention problems. A few studies have been conducted in the field of identifying mechanisms and sources related to auditory divided attention. However, their finding showed that sources such as two hemispheres, parietal and frontal lobes, association cortex areas (precentral area in both sides), and supplementary motor area were active when allocating auditory divided attention (25). In contrast, these results were not the case in the auditory selective attention.
In this study, because older adults with right hemisphere CVA represented evidence of difficulties in central auditory tests including CST and DDT, we could premise that they have problems in concurrent auditory segregation and speech perception. It could be suggested that further research in the same participants by other central auditory tests is needed to measure auditory segregation using a task that does not also require auditory attention. We also suggest that this study could be conducted in female patients with right-handed CVA to identify any differences of auditory segregation between males and females. Finally, as this study was conducted in patients with right hemisphere CVA, further studies could be conducted in patients with left hemisphere CVA to compare and clarify any differences of auditory perception and segregation difficulties between both groups.
Conclusion
In the present study, we showed impairments of concurrent auditory segregation and perception in older adults with right hemisphere CVA. This study highlighted the important role of auditory segregation process in speech perception. Also, it is confirmed that while many patients with CVA have normal results in audiometry tests (e.g. pure tone audiometry and simple speech tests; SRT and SDS), but they acquire abnormal results of CST and DDT indicating abnormal segregation process that might be contributed to high level functions. | 2018-04-03T06:02:43.858Z | 2014-11-17T00:00:00.000 | {
"year": 2014,
"sha1": "44c1a160f0b750b7a889a971c7af5a50785657e0",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c66b1a8972b800cddef21bc4414d5809ade9ba40",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226353629 | pes2o/s2orc | v3-fos-license | Food Waste Bioeconomy: Sustainable Waste Management Options for Hawassa University Campuses, Ethiopia ABATE
Food waste management is a challenge in University Campuses of developing countries. This study assessed food waste management challenges in Hawassa University and the possibility of cascading the waste through biomass bioeconomy model by using interviews, observations and published and unpublished documents. The results show that so far the food leftover is being used by poor people, collected by animal ranchers or damped in an openpit. Food leftover use by poor people was challenged due to poor hygienic quality, health implication to users, insecurity to campus community and theft of property in the campuses. The university’s animal enterprise was also forced to quiet its agreement with the university due people’s competition for the leftover. Generally food waste management at the University is reactive and long-term sustainability is needed. This study suggests the cascading use of biomass, i.e. using food waste as animal feed; animal waste as feedstock for biogas generation; biogas-slurry as an organic fertilizer for university farm and plantations. If implemented the model improves the waste management practices of the University; improves the resource use efficiency and energy security, and reduces fuel wood consumption and mitigate greenhouse gas emission. Moreover the model creates circular economy that serves as a sustainability showcase in practice for research, training, recreation, experience sharing and income generation activities. DOI: https://dx.doi.org/10.4314/jasem.v24i9.6 Copyright: Copyright © 2020 Abate. This is an open access article distributed under the Creative Commons Attribution License (CCL), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Dates: Received: 25 August 2020; Revised: 25 September 2020; Accepted: 20 September 2020
Waste is defined as 'abandoned materials' that are discarded as no longer having a functional use or economic value (Starovoytova, 2018; Zaman & Lehmann, 2011). Solid waste management (SWM) treats all solid materials as a single class. However, solid waste is a heterogeneous material, varying in composition and volume based on the prevailing activities and the demography of individuals at the source (Coker et al., 2016; Williams, 2005). Comprehensive SWM is one of the greatest challenges in bringing about institutional sustainability (Smyth et al., 2010). In principle, solid waste management involves preventing and/or minimizing waste generation, segregating waste by type at the source, employing recycling, reuse and resource recovery, and ensuring safe and environmentally sound disposal (de Vega et al., 2008; Jansen, 2010). In developing countries, a significant portion of the population does not have access to a waste collection service and only a fraction of the generated waste is actually collected. Most of the solid waste is disposed of in open dumps due to their simplicity and low cost (Nas & Bayram, 2008; WHO, 1996).
Solid waste management is generally a major problem in Ethiopia (Cheever, 2011), and it has become a priority issue in higher institutions. Though higher education in Ethiopia began in the mid-1960s, the system has expanded remarkably in recent years. Student enrolment in public and private higher institutions grew at an average annual rate of 26.1% between 2003/04 and 2016/17 (from 58,632 to 860,378) (EMIS, 2016/2017). The expansion of programs and the rise in student and staff numbers have undoubtedly increased the solid waste generated by the institutions. Consequently, solid waste management in higher education institutions in Ethiopia has become one of the greatest challenges for institutional sustainability (Helelo et al., 2019; Kassaye, 2018; Mengesha and Dessalegn, 2014; Seyoum, 2007).
Food production requires many resources and is responsible for a significant portion of greenhouse gas (GHG) emissions. Food waste is typically one of the heaviest components of an organic waste stream, and transporting it to a distant site therefore results in additional emissions and extra resource use (FAO, 2013). Reducing food losses and diverting food waste into usable products through a circular bio-economy model at all stages can therefore play a role in solving various social and environmental problems, including pollution, the spread of diseases and the release of greenhouse gases. In Ethiopia, several achievements have been reported, including the reclamation of degraded lands into fertile ground and urban agriculture (e.g. seedlings, vegetables and small animals) through the use of organic wastes. However, the country's solid-waste-based bio-economy potential is unknown. Therefore, the objective of this paper is to identify the major challenges of food leftover management on Hawassa University campuses and the possibility of cascading its use through a bioeconomy model.
Study area description:
The study was conducted at Hawassa University, an officially accredited and recognized higher institution in Ethiopia. Currently it has 7 campuses: Main Campus, Technology Campus, Wondogenet Campus, Health Campus, Agriculture Campus, Awada Campus and Bensa Daye Campus. All the campuses except Wondogenet, Awada and Bensa Daye are located in Hawassa City. According to information from the University Registrar and Alumni Affairs Directorate, the total student population in 2019 was 41,929. Of these students, 58% are in the regular program, 22.6% are in the summer program and the remainder are in evening and weekend programs. The total number of permanent staff employed, excluding Bensa Daye campus, was 8,604. The number of cleaners on the Main Campus was 268 (111 permanent, 139 temporary and 18 office workers). Data collection and analysis: In the present study, a preliminary survey was carried out on HU campuses, and all possible food leftover sources were identified. Key informant interviews were held with people close to food waste management working in different positions at the university (e.g. cafeteria managers, leaders of the dormitory, University Hotel, student dean, director of the animal farm). The key informants are close to solid waste management and are believed to have detailed knowledge of the university's past, present and planned food leftover management. The questions covered the amount of waste generated, practices for food leftover management such as source reduction, recycling or reuse of food leftovers, and the current practice of waste storage, collection, transportation and disposal. The major challenges of using food waste for poor people, animal feed or energy recovery were also examined during the interviews. Participant observation was conducted to collect firsthand information on food waste management practices from source to final disposal. The information was organized and summarized quantitatively and qualitatively.
RESULTS AND DISCUSSION
Sources of food waste and current management practices: The major sources of food waste in Hawassa University are the student cafeteria, teaching hotel, lounges, staff residences and conference halls. The type and amount of food leftover vary depending on the source. Food waste from the cafeteria (peels and food leftovers) comprises the largest share of food waste and of solid waste at large, followed by animal waste (animal dung and poultry manure) and paper waste. The initial practice of food leftover management in the University involved collection of waste from the point of generation and disposal through open surface dumping. Dumping food waste caused foul smells and bad scenery and attracted scavenger birds, flies and other pests to the site. The offensive odor, sanitation problems and environmental pollution (soil, water and air) were particularly aggravated during rainy seasons, and became a potential health concern for the university community. Waste heaps, dumps and landfills emit potent greenhouse gases such as CH4 and N2O and release considerable quantities of liquid leachate to the groundwater. Later, the university entered into an agreement with pig ranchers to collect food waste from the student cafeteria. The leftover food from the student cafeteria was sold for 16,000 Ethiopian birr/month on the Main Campus and 7,500 birr/month on the Health Campus in 2013/2014 (1 US$ = 19.128 birr). However, the bule (food leftover) said to be used to feed pigs was actually sold to needy people in the city, and the university finally terminated the agreement with the rancher after confirming this. The food is a mix of leftovers cleared from dining tables and transported in unhygienic sacks/barrels, which could be a potential health risk (Main Campus Student Dean, 2019). In 2018, the university enterprise officially started using food waste for beef and dairy cattle. The leftover was properly dried before use to safeguard feed quality and animal health. However, the bule intended for the enterprise animals attracted several competitors, including children and daily laborers. Most of the peels and unused food waste is still openly dumped (Figure 1). Similarly, animal waste from the dairy farm was managed by flushing it into an open pit (Figure 2). A small portion of the animal waste was used as fertilizer for campus plantations and the animal farm.
Challenges of food waste management in Hawassa University: Several food waste management strategies have been tried, but they have not achieved the desired goal, and the institution is looking for a more proactive, safe and effective mechanism to handle solid waste. The Solid Waste Management Proclamation No. 513/2007 obliges university administrations to plan, collect, store, transport and dispose of/landfill solid wastes. The article on food-related wastes (Article 10) states that ''food industries and restaurants shall collect, store, and dispose of the food related solid wastes they generate in an environmentally sound manner.'' Therefore, the University is answerable for improper SWM and has a mandate to ensure waste segregation, the placement of waste collection facilities, the prohibition of waste disposal in public places and environmentally sound disposal sites. According to the waste hierarchy (Figure 3), food waste prevention activities are most preferred, followed by reuse activities and recycling (Imbert, 2017). Though waste prevention is the most preferred sustainable solution, it is considered the most challenging waste management strategy (Imbert, 2017; Papargyropoulou et al., 2014; Yolin, 2015). At Hawassa University, organic waste prevention can be achieved at only a limited number of points in the food value chain. Improving the quality of raw materials purchased, reducing wastage during storage, using improved peeling technology, improving the quality of prepared food and adopting a sustainable waste management strategy are important measures for reducing waste generation. According to one informant, ''the amount of food leftover is higher on days when cabbage is served as part of the daily menu in the student cafeteria because many students prefer to skip the meal and eat elsewhere''. Food leftover donation to the poor: Food leftover donation to the poorest of the poor (tesfegnoch) is the preferred management strategy after source reduction, when compared with feeding to animals, energy recovery, composting or landfilling. Donating food leftovers to tesfegnoch is a reuse form of waste management, provided the hygienic quality of the food is ensured. Food sharing/donation to needy people is also encouraged by organizations such as the European Union (EC, 2017). However, besides the safety problems of the food leftovers generated, the operational feasibility and legality of leftover redistribution at Hawassa University can be questioned because of problems created by the users. The attraction of people to the University for food leftovers has created fights over the food, increased insecurity of students and cafeteria workers, theft of property and damage to campus plantations. Moreover, according to one of the Main Campus cafeteria managers, ''giving food leftovers to tesfegnoch creates a dependency syndrome among the users and increases school absenteeism and drop-out among their children''. ''The people coming for food leftovers during the morning and evening are trampling farm plants'' (key informant from the animal farm). Moreover, in the local context, relying on food leftovers is socially and culturally unacceptable and lowers the prestige of user families.
Food waste as animal feed: On the main campus, the university has established a plant farm, aquaculture, an animal farm (dairy, beef and poultry) and abattoir services. The demand for agricultural products within and outside the university is increasing, and the university enterprise has a plan to invest in expansion, diversification and modernization of the farm system. The idea of establishing a botanical garden on the main campus is also underway but has stalled at the mapping stage because of financial constraints. The Main and Technology Campuses are near each other; therefore, the organic waste from both campuses can be managed through an integrated circular bio-economy model as shown in Figure 4. The Hawassa University enterprise started recycling food waste as animal feed in 2018, but the practice did not last long because of competition between poor people and the animals for the leftovers. According to an informant from the animal farm, "so far the university has no consistent stand on food leftover management; it should choose to donate the leftover to poor people or to the enterprise as a feed. If it is aimed for the enterprise animals the food waste must be collected properly in a hygienic way and the interference of poor people must be stopped.'' From a sustainability point of view, using food waste as animal feed reduces the energy, water, fertilizer and other resources needed to grow crops and transport feed. The animal waste from the dairy farm is currently channelled to a septic tank system that discharges into an open pit, which contributes to human-induced emissions of gases including methane, ammonia and nitrous oxide (Caro et al., 2014; http://www.fao.org/3/a-a0261e.pdf).
Using animal waste (dung and poultry droppings) as feedstock to produce combustible methane gas for cooking and lighting reduces GHG emissions from landfilling (Ojolo et al., 2007). The wasted portion of the food leftovers can be mixed with animal dung and poultry manure as feedstock for a biogas digester to increase the efficiency of biogas generation (Chibueze et al., 2017; Muthu et al., 2017). According to data obtained from the University student service directorate office, besides electricity, the main campus used about 19.95 m3 of wood per day for cooking in the student cafeteria, which cost 11,471.25 Ethiopian birr per day in 2014. In addition to electricity, the yearly firewood consumption in the main campus student cafeterias alone was 1,900 m3 in 2017/2018, and the planned consumption for 2018/2019 was 2,800 m3. The university depends on firewood because of the unreliability and inadequacy of the power supply. Biogas use would fill the energy gap created by power interruptions and replace firewood consumption in at least some of the cafeterias. The bioslurry, a byproduct of the biogas plant, can be used as an excellent biofertilizer to improve soil physical, chemical and biological properties. The use of biogas and bioslurry as energy and fertilizer, respectively, in countries such as Ethiopia reduces the burden on remnant forests, reliance on chemical fertilizer and the emission of greenhouse gases, and it is consistent with the ideas inherent in climate-smart agriculture and sustainable development.
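The figures above lend themselves to a simple back-of-the-envelope estimate. The short Python sketch below is not from the original study: the unit cost per m3 is derived from the reported 2014 daily figures, and the biogas substitution share is a purely hypothetical assumption used only to illustrate the potential scale of the saving.

```python
# Illustrative estimate of firewood cost and hypothetical biogas savings for the
# Hawassa University main campus cafeterias. Figures from the text: 19.95 m3/day
# costing 11,471.25 birr/day (2014); annual consumption 1,900 m3 (2017/18).
# The substitution share below is an assumption, not a reported value.

daily_volume_m3 = 19.95          # firewood used per day (2014)
daily_cost_birr = 11_471.25      # cost of that firewood per day (2014)
annual_volume_m3 = 1_900         # reported yearly consumption, 2017/18

cost_per_m3 = daily_cost_birr / daily_volume_m3        # ~575 birr per m3
annual_cost_birr = annual_volume_m3 * cost_per_m3      # at 2014 prices

biogas_substitution_share = 0.30  # hypothetical: biogas covers 30% of cooking energy
annual_saving_birr = annual_cost_birr * biogas_substitution_share

print(f"Unit cost: {cost_per_m3:,.2f} birr/m3")
print(f"Annual firewood cost (2014 prices): {annual_cost_birr:,.0f} birr")
print(f"Hypothetical saving at {biogas_substitution_share:.0%} substitution: "
      f"{annual_saving_birr:,.0f} birr/year")
```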
Conclusions:
Food waste management at Hawassa University has been reactive. If implemented, the food-waste-based integrated bio-economy model would reduce the socioeconomic and environmental problems associated with the waste. In the model, food waste is used as animal feed, the animal waste is used to produce biogas, and the bioslurry from the biogas plant is used for campus plantations. Implementation of the model would improve SWM on the campuses and serve as a practical sustainability showcase for research, training and experience sharing. | 2020-10-29T09:05:47.168Z | 2020-10-16T00:00:00.000 | {
"year": 2020,
"sha1": "d38e653234462d51df798159d801e4891e4e9bf2",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/jasem/article/download/200613/189182",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1efd1e6e7b660b29b1af45c998a102c78596ecfb",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
269421534 | pes2o/s2orc | v3-fos-license | The Impact of Hospital Transfers on Surgical Delay and Associated Postoperative Outcomes for Hip Fracture Patients in Scotland: A Cohort Study
Background/Objectives: Hip fractures exert a substantial burden on hospital systems. Within Scotland, 20% of the population resides rurally, warranting investigation of how this impacts prompt access to surgical care. This study aims to determine whether indirect hospital admission via hospital transfer affects the likelihood of surgical management within 36 h for hip fracture patients. Methods: A retrospective cohort study was performed. This used Scottish Hip Fracture Audit data including patients aged ≥50 split into two propensity-matched groups based on their transfer status. Descriptive analysis compared patient characteristics. Regression assessed achieving surgery within 36 h of admission in the unmatched and matched cohorts. Secondary outcomes included time to surgery, mortality, mobilization, returning to residence and length of stay. A sensitivity analysis was undertaken to assess for residual confounding effects. Results: The unmatched analysis included 20,132 patients. Transfer patients were younger (p = 0.007) and less comorbid (p < 0.001). In the matched population, 711 (63.6%) transfer patients had surgery within 36 h of presentation to hospital, compared to 852 (75.3%) non-transfer patients. Transfer patients had 43% reduced odds of timely surgery (OR (95% CI) 0.57 (0.48 to 0.69); p < 0.001). No disparities emerged in mortality, mobilisation or returning to residence. Transfer patients experienced a significant increase in length of stay in hospital (median (IQR) 16 (8 to 33) vs. 13 (8 to 30); p = 0.024). Conclusions: Hospital transfer is associated with significantly reduced odds of timely surgery, a longer time to surgery and a longer length of stay. Development of structured network pathways that minimize delay to transfer is required to potentially optimize outcomes and reduce associated cost.
Introduction
Hip fractures pose significant challenges to healthcare systems worldwide, including in Scotland, where around 7000 patients require hospital admission annually [1]. The annual direct cost of hip fracture admissions in the UK exceeds £2 billion, with further financial consequences due to lost productivity from morbidity and mortality [2]. Scotland, like many advanced economies, is experiencing an ageing population due to longer life expectancy and a reduced birth rate [3]. Consequently, it is anticipated that the number of hip fracture cases will increase, imposing a greater burden on the NHS and affected individuals [4]. Surgical treatment is the primary approach for most hip fracture cases [5]. Notably, 20% of Scotland's population resides in rural communities, making it essential to investigate potential associations between transfer status and delays in surgical management [6].
The Scottish Standards of Care for Hip Fracture Patients (SSCHFP) were developed to reduce variation in hip fracture care across Scotland, whilst further enhancing the quality of clinical care [7]. Previous research has demonstrated that adherence to standard six (surgery within 36 h of admission) is associated with improved patient outcomes [8]. Furthermore, delayed surgery has previously been associated with adverse postoperative outcomes in large meta-analyses, including mortality rates, complications, and extended hospital stays [9,10]. While limited international studies have explored the association between hospital transfer and delay in surgical management [11][12][13][14][15][16], none, however, has addressed the unique geographical challenges faced by Scotland and the associated large rural population, particularly in the Highlands and Islands. This means that several patients do not have direct access to hospitals with hip fracture services, and instead initially present to small local units designed to manage rehabilitation or minor injuries.
The authors hypothesised that hospital transfer in Scotland may be associated with delays in receiving surgical management within 36 h for hip fracture patients. This study aimed to analyse the Scottish Hip Fracture Audit (SHFA) to determine if indirect admission via hospital transfer impacts the likelihood of surgical management within 36 h of admission for hip fracture patients aged 50 and over in Scotland. Secondary aims explored associations between transfer status and other patient outcomes based on SSCHFP guidelines [7].
Study Design, Setting, and Participants
A retrospective analysis of cohort data was undertaken using anonymised audit data prospectively collected by the SHFA between January 2019 and December 2021. The chosen period reflects when detailed information about transferred patients was available. Data was collected from all trauma centers in Scotland, and local audit coordinators ensured data quality and robustness [17]. This study included all patients over the age of 50 in Scotland who experienced an acute hip fracture between January 2019 and December 2021. Patients managed conservatively, those with extensive trauma, a known pathological fracture or who suffered an in-hospital fall were excluded.
Data Collection
Anonymised data was obtained from the SHFA database through Public Health Scotland (PHS). The primary explanatory variable of interest was the patient's transfer status (transfer/non-transfer). Other demographic and patient variables were age, sex, residence prior to admission, American Society of Anaesthesiologists (ASA) grade, operation type, 4AT score and Scottish Index of Multiple Deprivation (SIMD) decile [18]. The primary outcome of interest was receiving surgery within 36 h of admission. Secondary outcomes included time to surgery, mortality at 30 and 60 days postoperatively, returning to original residence by day 30 postoperatively, mobilisation by day one postoperatively, total length of stay (LOS), and acute LOS [7]. LOS was truncated at 60 days.
Sample Size
An a priori sample size calculation was conducted which indicated a maximum of 1380 patients (690 per group) were required to detect a 10% difference in the odds of achieving surgery within 36 h of admission between the groups at 80% power and p < 0.05. The SHFA contained 20,430 non-transfer and 1213 transfer patients potentially eligible for inclusion between January 2019 and December 2021.
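The required group size for a two-proportion comparison depends strongly on the assumed baseline rate and the exact method used, so the figure of 690 per group cannot be reproduced without the original assumptions. The Python sketch below (using statsmodels rather than the authors' workflow) only illustrates how such an a priori calculation can be set up; the 75% baseline and the 10-percentage-point difference are assumed illustrative values, not figures taken from the audit.

```python
# Illustrative a priori sample size calculation for detecting a difference in the
# proportion achieving surgery within 36 h (two-sided alpha = 0.05, power = 0.80).
# The baseline proportion and the difference are assumed values for illustration.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_non_transfer = 0.75                    # assumed baseline proportion (illustrative)
p_transfer = p_non_transfer - 0.10       # assumed 10 percentage-point reduction

effect_size = proportion_effectsize(p_non_transfer, p_transfer)  # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80,
    ratio=1.0, alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")
```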
Statistical Analysis
Analysts had access to an anonymised dataset containing the requested variables obtained from PHS. Initial data visualization was performed to assess the data characteristics. Data cleaning was undertaken, and missing values were recoded using the SHFA data dictionary [19]. All variables had less than 3.5% missing data, except for 4AT score (21.9%) and ASA grade (17.8%; Appendix A). It was confirmed that all missing data were missing at random or missing completely at random. The multiple imputation by chained equations random forest algorithm (MICE) was used to impute missing data for all explanatory fields. All outcome variables had <1% missing data and pairwise deletion was used to manage this.
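The sketch below illustrates, in Python, the kind of chained-equation imputation with a random-forest estimator described above; the original analysis was done in R, and the variable names and values here are hypothetical.

```python
# Sketch of chained-equation imputation with a random-forest estimator, analogous
# in spirit to the MICE (random forest) approach described above. Variable names
# and values are hypothetical; the study's own imputation was performed in R.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

df = pd.DataFrame({
    "age":       [84, 79, np.nan, 91, 72, 88],
    "asa_grade": [3, np.nan, 2, 4, np.nan, 3],   # numerically coded ASA grade
    "four_at":   [np.nan, 1, 0, 4, 2, np.nan],   # 4AT delirium screening score
})

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0,
)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed.round(1))
```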
The study population was dichotomised into two groups based on their transfer status. Time to surgery was calculated by subtracting the date and time of admission from the date and time of surgery. 4AT scores were categorised based on the rapid clinical test for delirium interpretation [20]. Descriptive analysis was performed to examine differences in patient variables by transfer status. Visualisation of histograms and Shapiro-Wilk tests confirmed continuous variables to be non-normally distributed; thus, they were presented as medians with their interquartile range (IQR). Categorical variables were reported as a number with percentages. Pearson's chi-squared tests were employed to assess differences in the predictor variables between the two groups. Chi-squared tests with continuity correction were used when categorical variables contained two groups.
To address heavy imbalances in group sizes, the non-transfer group was matched to the transfer group using nearest neighbour propensity score matching with a one-to-one ratio by all explanatory variables [21]. Analysis of outcome variables was performed in both the unmatched and matched populations. The association between the transfer status and dichotomous outcome variables was assessed using unadjusted logistic regression. Mann-Whitney U tests were used to assess the association between transfer status and time to surgery, acute LOS and total LOS. A sub-group analysis stratifying patients transferred from islands was also undertaken, where Kruskal-Wallis tests were used to identify differences in continuous outcomes. To explore the potential effects of unmeasured confounders relating to patient frailty, we conducted a sensitivity analysis which replicated the main analysis but for patients aged 80 and over who were not admitted from home, representing the frailest individuals in the study population. All statistical analysis was performed using R (version 4.2.0). Statistical significance was determined by p < 0.05.
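To make the matching-then-comparison workflow concrete, the Python sketch below shows a minimal greedy 1:1 nearest-neighbour propensity score match followed by an unadjusted outcome comparison. It is only an illustration of the approach described above: the data are simulated, the covariates are a hypothetical subset, and the study itself used R-based matching rather than this code.

```python
# Minimal sketch of 1:1 nearest-neighbour propensity score matching followed by
# unadjusted comparisons, mirroring the approach described above. Data, column
# names and covariates are hypothetical; the original analysis was done in R.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "transfer": rng.binomial(1, 0.1, n),
    "age": rng.normal(81, 9, n),
    "asa_grade": rng.integers(1, 5, n),
    "time_to_surgery_h": rng.gamma(4, 6, n),
})
df["surgery_36h"] = (df["time_to_surgery_h"] <= 36).astype(int)

# 1. Propensity score: probability of being a transfer patient given covariates.
X = sm.add_constant(df[["age", "asa_grade"]])
ps_model = sm.Logit(df["transfer"], X).fit(disp=0)
df["ps"] = ps_model.predict(X)

# 2. Greedy 1:1 nearest-neighbour matching on the propensity score.
treated = df[df["transfer"] == 1]
controls = df[df["transfer"] == 0].copy()
matched_ids = []
for idx, row in treated.iterrows():
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    matched_ids.extend([idx, j])
    controls = controls.drop(j)            # match without replacement
matched = df.loc[matched_ids]

# 3. Unadjusted logistic regression for surgery within 36 h in the matched set.
out = sm.Logit(matched["surgery_36h"],
               sm.add_constant(matched["transfer"])).fit(disp=0)
print(f"OR (transfer vs non-transfer): {np.exp(out.params['transfer']):.2f}")

# 4. Mann-Whitney U test for time to surgery between the matched groups.
u, p = mannwhitneyu(matched.loc[matched.transfer == 1, "time_to_surgery_h"],
                    matched.loc[matched.transfer == 0, "time_to_surgery_h"])
print(f"Mann-Whitney U p-value: {p:.3f}")
```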
Ethics
The service evaluation nature of this project and the use of anonymised secondary data meant ethical approval was not required. Subsequent PHS approval was granted in May 2023 (DP23240035). This study was conducted in accordance with the Declaration of Helsinki [22], and the Caldicott principles regulating the use of patient data [23]. This study was reported in accordance with the REporting of studies Conducted using Observational Routinely collected health data (RECORD) statement [24].
Participants
Initially, 22,132 patients were included from the SHFA database within the specified time frame. Following application of the exclusion criteria, there were 20,190 participants, with 19,049 in the non-transfer group and 1141 in the transfer group (Figure 1). Table 1 reports the patient characteristics of these two groups; Appendix A Table A1 describes the characteristics prior to data imputation.
Unmatched Study Population
A total of 20,190 participants were included in the descriptive analysis for the unmatched population. The two groups demonstrated significant differences in all explanatory variables.
Matched Study Population: Island Sub-Group
Within the 1141 transfer patients, 142 (12.4%) were transferred from islands (Table 4). The median (IQR) time to surgery (hours) for transferred island patients was found to be significantly greater than for non-transfer patients (39.7 (29.0 to 58.5) vs. 20.3 (14.5 to 35.9); p < 0.001). Among the transferred island patients, 56 (40.6%) underwent surgery within 36 h of admission, compared to 655 (66.8%) patients transferred from the mainland. Transferred island patients were found to have significantly reduced odds (a 78% reduction) of achieving surgery within 36 h of admission compared to non-transfer patients (OR (95% CI) 0.22
Sensitivity Analysis
Table 5 reports postoperative outcomes for participants aged 80 and over not admitted from home, representing the frailest patients in the population. This included 274 participants (137 in the transfer group and 137 in the non-transfer group). In this population too, transfer patients experienced significantly longer times to surgery (median (IQR) 26.4 (17.5 to 38.2) vs. 19.4 (14.2 to 28.8) hours; p < 0.001). Transfer patients also had significantly reduced odds of receiving surgery within 36 h of admission, experiencing 58% reduced odds (OR (95% CI) 0.42 (0.23 to 0.74); p = 0.003). In this sub-group, unlike previously, transfer patients had 59% increased odds of suffering mortality within 30 days (OR (95%) 1.59 (0.78 to 3.34); p = 0.208) and 14% increased odds within 60 days (OR (95% CI) 1.14 (0.65 to 2.03); p = 0.640). These differences were not, however, statistically significant. Transfer patients were also found to have reduced odds of returning to their original residence (OR (95% CI) 0.74 (0.44 to 1.27); p = 0.821) and achieving early postoperative mobilisation (OR (95% CI) 0.78 (0.49 to 1.27); p = 0.332), but this was also not statistically significant.
Discussion
This study aimed to determine if indirect hospital admission via hospital transfer impacts the likelihood of surgical management within 36 h of admission for hip fracture patients aged 50 and over in Scotland. We found that transferred patients experienced longer times to surgery and were significantly less likely to undergo surgery within 36 h of admission. This finding aligns with our initial hypothesis and suggests that hospital transfer may be associated with delays in surgical management. This study did not, however, find any differences in secondary outcomes such as mortality, return to residence, postoperative mobilisation and acute LOS. We did, however, reveal that transferred patients experienced a longer total LOS. Transferred patients were also more likely to be younger and healthier than non-transferred patients. When patients were transferred from islands as opposed to rural hospitals on the mainland, they experienced further delays to their management and more time in hospital.
Our findings agree with the previous evidence exploring the relationship between transfer status and delays to surgery [11,[13][14][15][16], except for one Irish study which found no differences in the odds of achieving timely surgery between the two groups [12]. This could be because Ireland's geography and healthcare system differ from Scotland's, and they utilised a target time of 48 h. The study was also conducted ten years ago, when demands on orthopaedic services were lower and patients were less complex [25]. Considering both our findings and the existing evidence base, there are strong suggestions that transfer status is associated with delays to surgery.
These delays might have been unavoidable, such as needing to optimise a patient's health status or long-term medication prior to surgery [26]. However, many of these delays are likely avoidable, being the consequence of inefficient referral pathways, limited capacity of operative rooms or availability of surgical personnel [27]. There is evidence, however, that delays are appropriate to optimise the medical condition of certain patient groups [28][29][30].
Our findings suggest that, despite experiencing delays to surgery, transfer patients experience the same postoperative outcomes as non-transfer patients. This conflicts with many large systematic reviews and meta-analyses that have demonstrated associations between delayed surgery and adverse outcomes [9,10,31]. Strong associations between frailty and adverse outcomes have been demonstrated in the past following surgical treatment for hip fractures [32][33][34][35]. Despite not being statistically significant, likely because of inadequate power creating imprecision, our sensitivity analysis did demonstrate a trend towards greater mortality and reduced odds of mobilisation and returning to residence for transfer patients when investigating the frailest patients in our sample. This suggests that we did not discover any differences in postoperative outcomes possibly because of residual confounding relating to patient frailty in the main analysis, despite efforts to control for this. Similar to major trauma patients, it is possible that those with a delay to theatre related to hospital transfer exhibit a "second hit" phenomenon, which has not been adequately investigated in this population to date.
The characteristics of our sample population suggest transfer patients are younger and less co-morbid than non-transfer patients. This conflicts with recent data from the Scottish government and a study undertaken by Teckle et al., which reveal that Scotland's rural population is older and has more co-morbidities than those living in cities [36,37]. Despite this, they did identify that more people in rural communities live at home, suggesting a lower prevalence of frailty. Consequently, it is unclear if our transfer cohort is healthier than the rural population they represent, because hospitals may selectively refer healthier patients for more complicated procedures such as a Total Hip Replacement.
We identified a significantly longer total LOS for transferred patients, which is consistent with the existing literature [38][39][40]. Limited rehabilitation resources in rural Scotland could require that transfer patients are rehabilitated further prior to discharge, or discharge planning procedures could be more complicated. In 2017, the cost of each excess bed day in hospital was £351 [41]. Therefore, when adjusted for inflation, delays in discharging transferred patients cost at least £1,528,347 per year in 2024. More importantly, additional consequences include lost productivity, a worse patient experience, deterioration of general health, additional stress for staff and reduced availability of beds [42].
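The quoted excess-cost figure can be approached with a simple calculation. The sketch below is one plausible reconstruction, not the authors' actual method: it assumes 3 excess bed days per transfer patient (the difference in median total LOS), applies it to all 1,141 transfer patients, and uses an assumed cumulative 2017-2024 inflation uplift of about 27%.

```python
# Plausible reconstruction (not taken from the paper) of the quoted excess-cost
# figure. Assumptions: 3 excess bed days per transfer patient (median total LOS
# 16 vs 13 days), 1,141 transfer patients, £351 per excess bed day (2017), and
# an assumed cumulative 2017-2024 inflation uplift of roughly 27%.
transfer_patients = 1_141
excess_days_per_patient = 16 - 13          # difference in median total LOS
cost_per_bed_day_2017 = 351                # GBP, reference [41]
inflation_uplift_2017_to_2024 = 1.27       # assumed cumulative inflation factor

excess_cost_2024 = (transfer_patients * excess_days_per_patient
                    * cost_per_bed_day_2017 * inflation_uplift_2017_to_2024)
print(f"Estimated excess cost: £{excess_cost_2024:,.0f}")   # ~£1.5 million
```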
The main strengths of this study lie in its use of the large and comprehensive dataset collected by the SHFA [17]. This facilitated a substantial sample size, excellent data quality and a nationally representative cohort reflective of hip fracture care within a developed healthcare system. To our knowledge, this study is the first to investigate the association between transfer status and delays in surgical management for hip fractures in Scotland and contributes to the small international evidence base [11][12][13][14][15][16].
Performing imputation by MICE better accounts for statistical uncertainty than other methods and considers the relationships that exist between variables [43,44]. Propensity matching better accounts for unseen variables causing differences between the two groups than adjusted regression, whilst addressing the imbalances in group sizes [21,45,46]. However, this did cause a loss of data and could limit the generalisability of findings. Preserving transfer cases with uncommon characteristics using nearest neighbour matching instead of coarsened exact matching reduced this risk.
A key limitation of our study was using secondary, aggregated data as we were unable to collect all potential confounding variables, which likely caused residual confounding effects in our analysis. Hip fracture care is complicated and influenced by numerous factors which this study could not account for. In the past, ASA grade has demonstrated validity as an indicator of preoperative health status [47]. Previous research identified the availability of ortho-geriatric services to significantly impact patient outcomes [48][49][50]. Considering more than 50% of our population was over 80 years old, this would be an important factor to control.
We were not able to describe and control for where patients have been transferred from and why they were transferred. Some rural practices selectively refer healthier patients for more complicated procedures as opposed to transferring all cases. This could have created a healthier transfer population with residual confounding.
Utilising multi-center national data allows this study to be generalised to all of Scotland. Considering our findings agree with the international literature, the findings could perhaps be inferred to other countries with geographical and population characteristics similar to Scotland's. Since less than 5% of the population was younger than 60, care should be taken when inferring results to younger patients.
Future research should address the limitations of this study to attempt to more definitively determine if the delay experienced by transferred patients is associated with worse postoperative outcomes. Residual confounding must be addressed by considering all important confounding variables. A qualitative aspect exploring healthcare professionals' beliefs regarding obstructions to achieving time targets for surgery and discharging patients would provide valuable information. Long term outcomes such as 1-year mortality should also be explored, as well as patient reported outcomes including pain, quality of life and functionality. A comprehensive economic analysis would be required for any future policy changes.
Conclusions
In conclusion, hospital transfer is significantly associated with reduced odds of achieving surgical management for hip fractures within 36 h of admission, a longer time to surgery, and a greater total LOS in Scotland. Despite this, transfer patients do not experience worse postoperative outcomes. It is unclear, however, whether this is the result of residual confounding effects. Future research is required to address the limitations of this study to determine if hospital transfer is associated with worse postoperative outcomes in Scotland.
Figure 1. Flowchart showing patient selection for the sample population.
Figure 2. Violin plot showing time to surgery (hours) for non-transfer and transfer patients, outliers truncated at 150 h.
Figure 3. Error bar chart showing the percentage of patients within the non-transfer and transfer groups to achieve surgery within 36 h of admission.
Figure 4. Violin plot showing the total LOS (days) for non-transfer and transfer patients.
Author Contributions:
Conceptualization, L.F., S.B. and P.K.M.; methodology, L.F. and L.L.; software, L.L. and L.F.; validation, L.L. and L.F.; formal analysis, L.L. and L.F.; investigation, L.F. and L.L.; resources, L.L. and L.F.; data curation, L.F.; writing-original draft preparation, L.L.; writing-review and editing, L.L., S.B., P.K.M. and L.F.; visualization, L.L.; supervision, L.F., S.B. and P.K.M.; project administration, L.L. and L.F. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: Ethical review and approval were waived for this study due to the use of anonymized data and the service evaluation nature of the design. Approval was granted by the Scottish Hip Fracture Quality Improvement and Research Sub-group (DP23240035) in May 2023. Informed Consent Statement: Individual patient consent was not required for this study due to the use of anonymized data and the service evaluation nature of the design. Data Availability Statement: Requests for data included in the study should be made to Public Health Scotland. Code used for the project is available on request to the senior author.
Table 2. Postoperative outcomes in the unmatched study population.
Table 3. Postoperative outcomes in the matched study population.
Table 4. Post-operative outcomes including island sub-group.
Table 5. Sensitivity analysis of participants aged ≥80 and not admitted from home.
Table A1. Characteristics of included hip fracture patients and associations with transfer status before data imputation. | 2024-04-28T15:18:21.762Z | 2024-04-26T00:00:00.000 | {
"year": 2024,
"sha1": "f5627e2805ec5427128af336f6f742858da07d2f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/9/2546/pdf?version=1714122094",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a187605e7e951528dd644cd5da0c7ffb49f953d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
244930967 | pes2o/s2orc | v3-fos-license | Gastrointestinal manifestations and their relation to faecal calprotectin in children with autism
Introduction A common comorbidity in autism spectrum disorder (ASD) children is gastrointestinal problems, and a possible link between active gastrointestinal inflammation and autism has been suggested. Faecal calprotectin (FC) is a non-invasive marker of gastrointestinal inflammation. Aim To study the level of FC as a marker of bowel inflammation in children with ASD and its possible relation to gastrointestinal manifestations. Material and methods Calprotectin levels were assessed in stool samples of 40 ASD children. Autism severity was assessed by the Childhood Autism Rating Scale (CARS). Severity of gastrointestinal symptoms was assessed using a modified version of the 6-Item Gastrointestinal Severity Index (6-GSI) questionnaire. A control group of 40 healthy children matched for age and sex with the cases was also included to compare their levels of FC. Results Gastrointestinal symptoms were present in 82.5% of children with autism; the most commonly reported was offensive stool odour (70%) and the least was diarrhoea (17.5%), and a high 6-GSI score was observed in 35% of ASD children. FC levels were elevated in 35% of the cases and in 25% of the control group. The mean level of FC in cases was significantly elevated compared to that of controls. FC levels positively correlated with the severity of gastrointestinal symptoms (6-GSI) in autistic patients. There was a positive correlation between CARS and 6-GSI. Conclusions Gastrointestinal manifestations are a common comorbidity in autistic patients. ASD patients have significantly higher FC levels than healthy controls. FC levels are strongly correlated with the severity of gastrointestinal manifestations in ASD children. Thus, gastrointestinal manifestations among autistic patients could be caused by gastrointestinal inflammation.
Introduction
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by difficulties in communication, impaired reciprocal social interaction, and repetitive, restricted, and stereotyped patterns of interests or behaviours. These deficits start early in life, vary widely in severity, and often change with the gain of other developmental skills [1].
Many children diagnosed as ASD also have gastrointestinal symptoms such as chronic diarrhoea, abdominal pain, constipation, vomiting, and gastroesophageal reflux. An association between ASD and inflammatory intestinal mucosal pathology has been proposed, and a link between gastrointestinal problems and behavioural signs and symptoms in autistic patients has been suggested [2]. Endoscopy results in children with ASD have demonstrated that the inflammation could occur anywhere along the gastrointestinal tract [3]. Some procedures can be used for detecting bowel inflammation including endoscopy, biopsy, colonoscopy, and analysis for biochemical markers in stool [4].
Calprotectin is a cytoplasmic protein that is present mainly in neutrophils; it is released by cell death and disruption [5]. During some inflammatory processes, calprotectin is released with the intracellular exudates in high amounts and can be found in body fluids and serum; therefore, it can be considered as a useful marker for inflammation [6]. Calprotectin in stool signifies intestinal tract infiltration with neutrophils. The level of faecal calprotectin (FC) correlates with intestinal tract inflammation histologically and macroscopically [5]. FC has been considered as a non-invasive marker for some gastrointestinal disorders that can be used before more invasive procedures [7].
Some studies demonstrated that intestinal inflammation is more prevalent in children with autism, while other research failed to discover intestinal inflammation among autistic children.
Aim
The aim of this work was to study the level of faecal calprotectin as a marker of bowel inflammation in children with autism and its possible relation to gastrointestinal manifestations.
Material and methods
The study included 40 autistic children aged 3 to 12 years who fulfilled the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) diagnostic criteria [8]. A control group of 40 healthy children matched for sex and age was also included to compare their level of faecal calprotectin with that of the cases group.
The cases were selected from those attending the outpatient neurobehavioral clinic at Alexandria University Children's Hospital. Written informed consent was obtained from the parents/caregivers of the children after explanation of the steps and nature of the study. The study was started after approval by the Medical Ethics Committee of the Faculty of Medicine, Alexandria University. Patients with dysmorphic features suggestive of syndromic developmental delay and children with any chronic gastrointestinal disease such as chronic gastritis or celiac disease were excluded.
All the studied children were subjected to thorough history taking and complete physical examination, with special emphasis on neurological examination. The severity of autism was assessed using the Childhood Autism Rating Scale (CARS) [9]. Autism was classified as mild to moderate for scores from 30 to 36.5 and as severe for scores of 37 or above.
Gastrointestinal (GI) symptoms and symptom severity were assessed using a modified version of the GI Severity Index, i.e. a shortened version called the 6-GI Severity Index (6-GSI) [10]. It included 6 items: constipation, abdominal pain, diarrhoea, stool smell, stool consistency, and flatulence. Each item was scored 0, 1, or 2 according to its frequency per week; a score of zero for any item indicated that the symptom was not present, and a score of 1 or 2 denoted the presence of the symptom with different severity. A total score equal to or less than 3 was classified as a low score, and more than 3 as a high score.
Faecal samples were collected and stored at below -20°C. After thawing, the extracts were diluted and run on enzyme linked immunosorbent assay (ELISA) plates. Calprotectin levels were measured in stool samples using an EDI TM Quantitative faecal calprotectin ELISA [11]. FC levels were classified as follows: < 50 µg/g = normal, ≥ 50 µg/g = elevated. A comparison between cases and control as regards faecal calprotectin levels was done, and the following correlations were investigated among cases: between autism severity (CARS) and GI symptoms severity (6-GSI), FC and GI symptoms severity (6-GSI), and between FC and autism severity (CARS).
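The scoring and classification rules described in the two paragraphs above reduce to simple thresholds. The Python sketch below only encodes those rules for illustration; the example child and calprotectin value are hypothetical.

```python
# Illustrative sketch of the rules described above: each of the six 6-GSI items
# is scored 0-2, a total score > 3 is classed as "high", and a faecal
# calprotectin level >= 50 ug/g of stool is classed as "elevated".
# The example values are hypothetical.

GSI_ITEMS = ("constipation", "abdominal_pain", "diarrhoea",
             "stool_smell", "stool_consistency", "flatulence")

def six_gsi(scores: dict) -> tuple[int, str]:
    """Return the total 6-GSI score and its low/high classification."""
    total = sum(scores[item] for item in GSI_ITEMS)
    return total, ("high" if total > 3 else "low")

def classify_fc(calprotectin_ug_per_g: float) -> str:
    """Classify a faecal calprotectin level (ug/g of stool)."""
    return "elevated" if calprotectin_ug_per_g >= 50 else "normal"

child = {"constipation": 1, "abdominal_pain": 0, "diarrhoea": 0,
         "stool_smell": 2, "stool_consistency": 1, "flatulence": 1}
print(six_gsi(child))        # (5, 'high')
print(classify_fc(62.4))     # 'elevated'
```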
Statistical analysis
Data were entered into the computer and analysed using IBM SPSS software package version 20.0 (Armonk, NY: IBM Corp) [12]. Qualitative data were presented as percentages and numbers. The Kolmogorov-Smirnov test was utilized to assess the normality of distribution. Quantitative data were described using mean, range (minimum and maximum), median, and standard deviation. The significance of the results was judged at the 5% level. The χ2 test was used to test the association between qualitative variables. Fisher's exact test or Monte Carlo correction was used when more than 20% of the cells in the χ2 test had an expected count of less than 5 and required correction. The Mann-Whitney test was used to make comparisons between 2 studied independent subgroups that were not normally distributed.
Results
Out of the 40 ASD children, 28 (70.0%) were males and 12 (30.0%) were females. Their age ranged from 3 to 12 years with a mean of 6.53 ±2.10 years. Twenty-five (62.5%) children were from urban areas, and consanguinity was positive only in 6 (15%) ASD children.
Cases were diagnosed at ages ranging from 18.0 to 36.0 months with a mean of 23.40 ±5.07 months. Twenty-six (65%) children had regressive type of autism and 14 (35%) had non-regressive autism. According to CARS, 23 (57.5%) of the ASD children were mild to moderate and 17 (42.5%) were severe. CARS ranged from 30 to 48.5 with a mean of 36.18 ±5.22.
Gastrointestinal symptoms were present in 33 (82.5%) ASD children. The most frequent symptom was offensive stool odour (70%), and the least was diarrhoea (17.5%). The total 6-GSI score was low in 26 (65%) cases and high in 14 (35%) cases. The total score ranged from 0 to 9 with a mean of 2.85 ±2.05 (Table I).
A control group of 40 healthy children matched for sex and age was included to compare their level of faecal calprotectin with the cases group. The faecal calprotectin level was elevated (≥ 50 µg/g) in 14 (35%) children with ASD and in 10 (25%) of the control group. Comparison of the mean levels of faecal calprotectin showed that the mean level of FC in cases was 47.03 ±26.68, while in the control group it was 37.08 ±21.55, a statistically significant difference (p = 0.049) (Table II).
Correlations between CARS, 6-GSI, and the levels of faecal calprotectin among cases were investigated and revealed a significant positive correlation between CARS and 6-GSI at p = 0.003 and a significant positive correlation between faecal calprotectin and 6-GSI at p = 0.002. However, no significant correlation was found between CARS and faecal calprotectin (p = 0.280) (Table III).
Discussion
GI problems are common morbidities in ASD children; numerous studies have suggested a probable gut-brain axis that could be explained by inflammatory, immunological, or genetic factors [2]. The afferent gut-brain pathway includes inflammatory mediators, the entero-endocrine system, intestinal microbiota, and sensory epithelial cells, while the efferent pathway involves the neuroendocrine and autonomic nervous systems [13].
In the current study we investigated faecal calprotectin as a marker of inflammation in the gastrointestinal tract in children with ASD. It was found that faecal calprotectin levels were elevated in 35% of patients in comparison to 25% of the controls, and the mean levels of faecal calprotectin in cases were significantly more elevated than in the control group.
This finding is consistent with the findings of Karkelis et al. [14], de Magistris et al. [15], Babinská et al. [16], and Eduardo et al. [17], who observed that higher levels of calprotectin were detected in the stools of autistic children than in normal children. Karkelis [17]. In contrast, other previous studies by Fernell et al. [18], Wos et al. [19], and Strati et al. [20] revealed that faecal calprotectin levels of autistic patients were not elevated more than in normal populations.
Regarding GI manifestations, it was found that 82.5% of ASD patients had at least one GI symptom, with offensive stool odour being the most common (in 70% of patients) and diarrhoea the least reported (in 17.5% of patients). High 6-GSI scores were observed in 35% of ASD children.
Horvath and Perman detected GI manifestations in 84.1% of ASD children [21]. Valicenti-McDermott et al. compared the frequency of GI manifestations in 3 groups: normally developed children, autistic children, and a group with other developmental disorders. They detected GI symptoms in 28% of children with normal development, in 70% of autistic children, and in 42% of children with other developmental disorders [22].
In contrast, Ibrahim et al. [23] and Black et al. [24] found that GI symptoms were not detected more in autistic than in normal children. Ibrahim et al. in 2009 found no significant difference in the overall incidence of GI symptoms between autistic children and controls, although constipation and feeding problems/food selectivity were detected more in ASD children [23]. Another study by Black et al. reviewed hospital records and found that GI problems were not detected in autistic children more than in the normal population (9% vs. 9%) [24].
Wide variations in the prevalence of GI symptoms in autism were observed by Buie et al. [25], McElhanon et al. [26], and Holingue et al. [27]. Buie et al. revealed that in autistic children the prevalence of GI tract symptoms ranges from 9% to 84%, versus 9-37% for normal children [25]. In 2014, McElhanon et al., in a meta-analysis that involved 15 studies over 30 years, revealed that general GI symptoms ranged from 0.39 to 48.25, with the observation that GI symptoms in children with ASD are 4 times more prevalent than for children without ASD [26]. In 2018, Holingue et al. reviewed studies dating back to 1980; the ranges were quite wide. Among the 62 studies, for the category of "any" GI symptom the range was 4.2-96.8% of participants [27].
Constipation, diarrhoea, and abdominal pain were reported as the most common GI symptoms in autistic patients in several studies. Holingue et al., in their review on GI symptomatology in ASD, revealed that the median prevalence of constipation was 22.2% and of diarrhoea 13% [27]. Gorrindo et al. found that functional constipation was the most frequent type of GI manifestation in children with ASD (85%) [28]. In another study by Wang et al., parents reported that the most common GI symptoms in children with ASD were constipation (20%) and chronic diarrhoea (19%), and that increased autism symptom severity was associated with a higher score of GI problems [29]. However, a study by Parracho et al. found that diarrhoea was the most common GI symptom (75.6%), followed by excess wind (55.2%), abdominal pain (46.6%), constipation (44.8%), and abnormal faeces (43%) [30]. Also, another study by Molloy and Manning-Courtney described diarrhoea as being more common in ASD children (17%) [31].
The wide variations in the reported prevalence of gastrointestinal tract involvement in ASD patients may be attributed to high methodological variability, including the person who reported the symptoms (parents, caregivers, or physician), the different scales used to evaluate GI symptoms, and differences in environment, study design, age group, and sample size.
In the current study, we correlated 3 variables among cases: severity of autism (CARS), severity of GI symptoms (6-GSI score), and levels of FC as a marker of intestinal inflammation. Correlations were found to be significantly positive between CARS and GI severity score, and between FC and GI severity score; however, no significant correlation was found between CARS and FC level. In 2011, Adams et al. used 6-GSI to assess GI severity in ASD children and found that the total score was low in 39% of patients and high in 61% of patients. Also, a strong positive correlation between autism severity and GI symptom severity was detected [10]. Similarly, in 2011, Wang et al. demonstrated that increased autism severity was associated with more frequent GI problems [29].
To the best of our knowledge, few studies have correlated the GI severity index (as a clinical method of detecting GI problems) with faecal calprotectin (as a laboratory method) in ASD patients. Further studies are warranted, because a positive correlation between the level of FC and the GI severity index was found.
One of the limitations of the current study was the lack of objective confirmation, such as endoscopy, of the absence of a concomitant bowel disease of inflammatory origin in children with autism. This could be justified by the fact that the severity of symptoms was not sufficient to arrange an invasive procedure. The elevation of calprotectin was mild, and in the absence of a specific clinical presentation suggestive of a serious disease, like haematemesis or bleeding per rectum, most of the anticipated findings on endoscopy in such a presentation (type of patients and severity) are usually mild and non-specific [4,32].
Conclusions
Gastrointestinal manifestations are a common comorbidity in autistic patients, and the severity of their GI manifestations is strongly correlated with autism severity. ASD patients have significantly higher FC levels than healthy controls, and their FC levels are strongly correlated with the severity of gastrointestinal manifestations in autistic children. FC as a laboratory marker and the GI severity score could be utilized as indicators of GI problem severity in autistic patients with GI symptoms. | 2021-12-08T16:03:45.399Z | 2021-12-02T00:00:00.000 | {
"year": 2021,
"sha1": "8f61c614c51430551c411e3c7f3073171ea8c351",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-41/pdf-45780-10?filename=Gastrointestinal%20manifestations.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "44e14dbfb0b362a7f72a848fcb2581640080018d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
90063957 | pes2o/s2orc | v3-fos-license | Recycling spent Pleurotus eryngii substrate supplemented with Tenebrio molitor feces for cultivation of Agrocybe chaxingu
In the industrialized production of mushrooms usually only one flush of fruitbodies is harvested, so that the nutrients and energy in the substrate are not fully exploited. In this study, the spent Pleurotus eryngii substrate was recycled for the cultivation of Agrocybe chaxingu under ambient temperature. Six formulae were tested: (1) Control: 98% spent substrate, 1% sucrose, 1% lime; (2) Control + 10% wheat bran; (3) Control + 20% wheat bran; (4) Control + 10% T. molitor feces; (5) Control + 20% T. molitor feces; (6) Control + 10% wheat bran + 10% T. molitor feces. Two flushes of fruitbodies were harvested; the control substrate resulted in a biological efficiency of 40.42%. The formulae with supplementation of 10% wheat bran, 20% wheat bran and 10% T. molitor feces significantly increased biological efficiency to 52.50, 54.61 and 51.56%, respectively, and supplementation of 20% T. molitor feces, or 10% wheat bran plus 10% feces, further significantly increased biological efficiency to 62.95 and 61.10%, respectively. All supplemented substrates had significantly higher cellulase and laccase activities than the Control (cellulase 0.10 U/g; laccase 41.00 U/g), which were 10% wheat bran (0.15 U/g; 72.67 U/g), 10% T. molitor feces (0.17 U/g; 98.33 U/g), 20% wheat bran (0.22 U/g; 76.00 U/g), 20% T. molitor feces (0.27 U/g; 87.00 U/g), and 10% wheat bran plus 10% T. molitor feces (0.25 U/g; 97.67 U/g), respectively. Spent Pleurotus eryngii substrate was promising for the cultivation of Agrocybe chaxingu, especially when supplemented with 20% T. molitor feces, or with 10% T. molitor feces plus 10% wheat bran.
Introduction
In the industrialized production of low temperature fruiting type mushrooms like Pleurotus eryngii and Flammulina velutipes, usually only one flush of fruitbodies is harvested (biological efficiency 60-65%) because the biological efficiency of the successive flushes is not high enough (approximately 30%) to make a profit where facilities and air cooling systems are expensive.
Currently, most of the spent substrate is burnt to generate steam for substrate sterilization and heating of mushroom farms, and some spent substrate is used as organic fertilizer in orchards. The nutrients and energy in the substrate are not fully exploited, as evidenced by a total biological efficiency of over 100% in non-industrialized mushroom production under natural environmental conditions, where 3-4 flushes are harvested (Philippoussis et al. 2001; Mandeel et al. 2005). In recent years, many experiments have been conducted to recycle the spent substrate for cultivation of other mushrooms (usually high temperature fruiting types) (Royse 1992; Li 2013).
Agrocybe chaxingu (in some previous cases mistakenly termed A. cylindracea or A. aegerita) (Callac et al. 2011) is a popular mushroom with a sweet aroma and many medicinal benefits. It is an antioxidant (Choi et al. 2009), possessing properties that aid the curing of cancers (Hyun et al. 1996), diabetes (Lee et al. 2010), etc. With a view to recycle spent P. eryngii substrate for cultivation of other mushrooms in a low cost way, in the present study, A. chaxingu was chosen because it is a moderate temperature fruiting type mushroom suitable for cultivation at a broad range of ambient temperatures for a period from late spring to autumn in south China.
To formulate recycled substrate for A. chaxingu cultivation, the nutrient composition of spent P. eryngii substrate was analyzed and compared with the unused substrate. Tenebrio molitor rearing has expanded rapidly in China, mainly as animal and pet feed, and to a lesser degree for human consumption. The sand-like feces of T. molitor larvae contain digested fiber, crude protein (14-18%), crude lipid (15-18%) and minerals (Wang et al. 2012; Lee and Rho 2014); therefore, T. molitor feces could be an ideal ingredient for mushroom cultivation, but very few such studies have been reported (Gan et al. 2008). Currently T. molitor feces is used mainly as garden fertilizer or livestock feed.
The purpose of the present study was to test whether spent P. eryngii substrate from an industrialized production setting is a good substrate for the production of A. chaxingu, and to evaluate the effect of supplementing it with 10-20% T. molitor feces and/or wheat bran on fruiting body yield.
Material and methods
A. chaxingu and spent P. eryngii substrate
The experimental A. chaxingu strain was purchased from Xue Shan Er Precious Edible Mushroom Institute, Gutian county, Fujian province. Spent P. eryngii substrate was provided by Guangdong Lantian Agricultural Co., Ltd., Fengshun county, Guangdong province. The formula of the unused substrate for P. eryngii production was: 50% sugarcane bagasse, 20% cottonseed hulls, 20% wheat bran, 5% cornmeal, 3% soymeal, 1% lime, 1% gypsum. T. molitor feces was purchased via Taobao.com from Hong Chang Feed Rearing Farm in Binzhou city, Shandong province. Wheat bran and other ingredients and materials were purchased from a local market.
Determination of composition of P. eryngii substrate
Total carbon content of the spent and unused P. eryngii substrate was determined by potassium dichromate method (ISO 14235 1998). Total nitrogen content was determined by the Kjeldahl method (Bremner and Breitenbeck 1983). Soluble sugar content was determined by the anthrone-sulfuric acid method (Spiro 1966).
Starch content was determined with the method described by Holm et al. (1986). Cellulose, hemicellulose, lignin and ash content were determined according to Goering and Van Soest (1970).
Preparation of substrate
Mushroom cultivation was carried out in a chamber in our laboratory. The following 6 formulae were adopted, representing the substrate without or with supplementation of 10-20% T. molitor feces and/or wheat bran. For each formula 30 bags were inoculated. The spent P. eryngii substrate was fragmented by hand and sun dried for use; after mixing it thoroughly with the other ingredients, the 1% sucrose was dissolved in the required amount of water to ensure that the wet substrate contained 65% water. The wet substrate was then filled into 33 × 17 cm HDPE plastic bags, with each bag containing 857 g (equal to 300 g dry substrate). The substrate was pressed so that the bag sides were slightly tensioned, leaving no free space in which primordia could occur during the cropping stage. The bags were not fully filled, so that an empty space was left in the bag to maintain moisture during cropping. After sealing with neck rings and cotton-free (sponge) caps, the bags of substrate were autoclaved (Tomy SS325) at 121°C for 2 h.
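As a rough check of the bag-filling arithmetic described above, a dry mass of 300 g brought to 65% moisture corresponds to 300 / (1 − 0.65) ≈ 857 g of wet substrate; a minimal sketch (the function name is ours, not from the study):

def wet_weight(dry_mass_g: float, moisture_fraction: float) -> float:
    """Wet substrate mass needed so that water makes up the given fraction of the total."""
    return dry_mass_g / (1.0 - moisture_fraction)

# 300 g dry substrate brought to 65 % moisture content
print(round(wet_weight(300, 0.65)))  # ~857 g per bag, matching the text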
Inoculation and mycelial culture
As the temperature in the autoclave dropped to 60-70°C, the bags of substrate were moved to a biosafety cabinet for further cooling and UV light sterilization of surface microorganisms (2-3 h). When the substrate was cooled thoroughly each bag was inoculated with about 20 g solid spawn previously cultured in bags with spent P. eryngii substrate. Then the mycelium bags were cultured in the dark (with a closed curtain) at room temperature (18-23°C), with air relative humidity maintained at 65-70% by occasionally adding moisture with a spray humidifier.
Cropping management
As most of the bags of all six formulae were fully colonized (upon spawn run completion), after 10 additional days of mycelial growth, the neck rings and caps were removed from the bags, and natural light was supplied to induce primordial formation. At the same time the air relative humidity was maintained at 80-90% with a humidifier.
Determination of mycelial cellulase and laccase activity during spawn run
Five samples were taken from each formula for enzyme determination upon completion of substrate colonization. Crude enzyme was extracted by placing 2.0 g fresh mycelial culture into a 250 mL flask to which 20 mL of 0.1 mol/L citrate buffer (pH 5.0) was added; the flask was shaken in a rotary shaker at 28°C and 200 rpm for 2 h. The extracted solution together with the substrate was centrifuged at 4000 rpm for 10 min, and the supernatant was used as crude enzyme for the activity assays.
The carboxymethyl cellulase (CMCase) activity assay followed the method of Ghose (1987). One unit of CMCase activity was defined as the amount of enzyme required to convert substrate into 1 µmol glucose, and activity was expressed in U/g fresh mycelial culture. Laccase activity was measured following the method described by Heinzkill et al. (1998), with the modification of doubling both the sample volume and the reagent volume to suit a 1 cm cuvette. Laccase activity was expressed as U/g substrate (fresh weight), where 1 U was defined as 1 µmol of substrate oxidized per min.
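For orientation only, activity in such direct spectrophotometric assays is commonly derived from the absorbance change over time via the Beer-Lambert law; the sketch below is generic, and the extinction coefficient, volumes and masses are placeholders, not the actual parameters of Ghose (1987) or Heinzkill et al. (1998):

def activity_u_per_g(delta_abs_per_min, extinction_m, path_cm,
                     reaction_vol_ml, enzyme_vol_ml, extract_vol_ml, sample_mass_g):
    """Generic spectrophotometric activity; 1 U = 1 umol substrate converted per min."""
    # concentration change in the cuvette (mol per litre per minute), Beer-Lambert law
    dc_per_min = delta_abs_per_min / (extinction_m * path_cm)
    # micromoles converted per minute in the reaction mixture
    umol_per_min = dc_per_min * (reaction_vol_ml / 1000.0) * 1e6
    # scale from the assayed aliquot to the whole crude extract, then per gram of sample
    return umol_per_min * (extract_vol_ml / enzyme_vol_ml) / sample_mass_g

# placeholder numbers only: dA/min = 0.05, epsilon = 36000 L/(mol cm), 1 cm cuvette,
# 1 mL reaction containing 0.1 mL enzyme, 20 mL extract from 2.0 g mycelial culture
print(round(activity_u_per_g(0.05, 36000, 1.0, 1.0, 0.1, 20.0, 2.0), 2))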
Spawn run period, fruiting body yield and biological efficiency
The spawn run period (the number of days from inoculation to complete colonization of the substrate by the mycelium) was recorded. Two flushes of fruiting bodies were harvested. The fresh weights of the fruiting bodies were recorded, and biological efficiency (BE, %) was calculated by dividing the fresh fruiting body weight by the dry substrate weight per bag and expressing the result as a percentage.
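As a minimal illustration of this calculation (the 404.17 g/kg figure is the Control value reported later in the Results; the function name is ours):

def biological_efficiency(fresh_fruitbody_g, dry_substrate_g):
    """BE (%) = fresh fruiting body weight / dry substrate weight * 100."""
    return 100.0 * fresh_fruitbody_g / dry_substrate_g

# Control formula: 404.17 g fresh fruitbody per 1000 g dry substrate over two flushes
print(round(biological_efficiency(404.17, 1000.0), 2))  # 40.42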
Data statistical analysis
Original data were processed using EXCEL (Microsoft, WA, USA) and Scheffe's tests were performed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA).
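For readers without SPSS, a minimal sketch of the same kind of post-hoc comparison (Scheffe's test after a one-way ANOVA) in Python; the group values below are invented placeholders, not the study's data, and only standard NumPy/SciPy functionality is assumed:

import numpy as np
from scipy import stats

def scheffe_pairwise(groups, alpha=0.05):
    """Pairwise Scheffe comparisons; returns {(i, j): True if significantly different}."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    big_n = n.sum()
    ss_within = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
    ms_within = ss_within / (big_n - k)            # error mean square from the ANOVA
    f_crit = stats.f.ppf(1 - alpha, k - 1, big_n - k)
    out = {}
    for i in range(k):
        for j in range(i + 1, k):
            f_stat = (means[i] - means[j]) ** 2 / (ms_within * (1 / n[i] + 1 / n[j]))
            out[(i, j)] = f_stat > (k - 1) * f_crit  # Scheffe criterion
    return out

rng = np.random.default_rng(0)
laccase = [rng.normal(41, 3, 5), rng.normal(73, 3, 5), rng.normal(98, 3, 5)]  # placeholders
print(scheffe_pairwise(laccase))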
Results and discussion
Composition of unused and spent P. eryngii substrate
As indicated in Table 1, the total carbon content, total nitrogen content and C/N ratio of the spent P. eryngii substrate were only slightly reduced compared with the unused substrate. The more easily digestible ingredients (soluble sugar, starch and hemicellulose) were significantly reduced, to 40.14, 11.14 and 54.28% of the unused substrate values, respectively. The cellulose content in the spent substrate was 91.55% of that in the unused substrate, i.e., only slightly reduced, whereas lignin in the spent substrate was 81.88% of that in the unused substrate, reduced to a larger degree than cellulose. The ash content of the spent substrate was more than twice that of the unused substrate, reflecting the dry matter loss through respiration by P. eryngii. The data indicated that, to obtain a good fruitbody yield, the spent substrate should be replenished with easily digestible ingredients such as soluble sugar, starch and hemicellulose, whereas extra nitrogen sources (e.g., soymeal) and lignocellulosic ingredients did not need to be supplemented; hence wheat bran, cornmeal and T. molitor feces could satisfy this purpose. In fact, too much nitrogen in the substrate can lead to ammonia accumulation during storage or preparation, which inhibits mycelial growth (Choi 2004; Mohamed et al. 2016).
Mycelial cellulase and laccase activity and spawn run period
During vegetative growth edible fungi produce a wide range of extracellular enzymes to degrade lignocellulosic substrates, including cellulase, laccases, peroxidases, xylanase, protease, etc. (Magnelli and Forchiassin 1999). In the present study, the mycelial cellulase and laccase activity at day 40 of the spawn run (upon colonization of the substrate) were determined to reveal possible associations between substrate degradation rate and substrate formula (Table 2). Mn peroxidase was not determined in this study because it displayed a similar pattern to laccase (Zeng et al. 2013).
As shown in Table 2, the cellulase activity in mycelia of Formulae 2 and 4 was not significantly different from that in Formula 1, but Formulae 3, 5 and 6 had significantly higher cellulase activity than Formula 1, indicating that inclusion of wheat bran and/or T. molitor feces significantly enhanced cellulase activity once the inclusion rate reached 20% (either separately or combined); supplementation with 20% feces (Formula 5) gave the highest value.
All supplemented formulae demonstrated significantly higher laccase activity than the Control, indicating that supplementation with either wheat bran or T. molitor feces could stimulate laccase excretion. Formulae 4 (10% feces) and 6 (10% feces + 10% bran) had significantly higher laccase activity than Formulae 2 and 3 (10 and 20% bran), but not higher than Formula 5 (20% feces), indicating that T. molitor feces was superior to wheat bran in stimulating laccase activity in A. chaxingu. According to substrate induction theory, such an enzyme-activity-enhancing effect of T. molitor feces might be due to the presence of digested products of cellulose, chitin and other polymeric compounds.
The spawn run period reflects how well the substrate suits mycelial growth. In the present study, the spawn run periods for all 6 substrate formulae were 40 days or longer, as the mycelia were cultured under natural temperatures (18-23°C). It can be seen from Table 3 that the spawn run period of A. chaxingu was shortest (40 days) on the substrate of Formulae 1 and 4; supplementation with 10-20% wheat bran (Formulae 2, 3) significantly extended the spawn run period by about 3 days compared with Formula 1, and supplementation with 20% T. molitor feces or 10% wheat bran plus 10% feces (Formulae 5, 6) significantly extended it by about 4 days. The extended spawn run period with supplementation of wheat bran and/or feces might be due to two reasons: (1) neither wheat bran nor T. molitor feces is as elastic as sugarcane bagasse and cottonseed hulls, so the supplemented substrates had reduced porosity and poorer oxygen transmission within the substrate; (2) supplementation with these ingredients led the mycelia to allocate more resources to excreting lignocellulosic enzymes (as evidenced by the data in Table 2) and other enzymes such as protease, which facilitated final degradation of the substrate but slowed vegetative mycelial growth in the first stage.
Fruitbody yield and biological efficiency
Primordia emerged successively in bags of all substrate formulae one week after applying the cropping management (the previously described measures, including cap removal, curtain opening and increased air humidity), except seemingly slightly later on Formula 6. Due to the limited number (n = 30) of bags for each formula, it was difficult to collect synchronized fruitbodies at one time; separately collected representative fruiting bodies of each formula are shown in Fig. 1. The fruitbody yield and biological efficiency of each formula are shown in Table 3.
It can be seen from Table 3 that in the first flush all the supplemented formulae had significantly higher fruitbody yields than Formula 1, and no significant differences existed among the supplemented formulae. In the second flush the fruitbody yields of Formulae 3, 5 and 6 were significantly higher than those of Formulae 2 and 4 as well as Formula 1, indicating that, compared with the 10% supplementation level, supplementing 20% wheat bran and/or T. molitor feces could increase the yield of successive flushes and therefore the total yield. Looking at the total fruitbody yield of the six formulae, the spent P. eryngii substrate supplemented with only 1% sucrose and 1% lime (Formula 1) yielded 404.17 g/kg fruitbody over 2 flushes, i.e., a biological efficiency of 40.42%, which is satisfactory for a farm waste (Jeznabadi et al. 2016). Supplementation with 10-20% wheat bran (Formulae 2, 3) or 10% feces (Formula 4) significantly increased total fruitbody yield, so that the biological efficiency was raised to over 51.56%. Supplementation with 20% feces (Formula 5) or 10% feces plus 10% wheat bran (Formula 6) resulted in the highest biological efficiencies of over 61.10%, which were not only significantly higher than Formula 1 but also higher than the 10-20% wheat bran supplementation formulae and the 10% feces supplementation formula. Therefore, T. molitor feces was superior to wheat bran as a supplement to the spent P. eryngii substrate for cultivation of A. chaxingu. It was shown previously that the nutritive value of T. molitor is superior to livestock meat (Rumpold and Schlüter 2013) and that insects including T. molitor are far more efficient in transforming plant biomass into animal biomass than conventional livestock (Nakagaki and DeFoliart 1991).
(Table notes: data are presented as mean ± SD (n = 3); data are presented as mean ± SD (n = 5 for enzyme activity; n = 30 for spawn run period); means within each column bearing no common superscripts are significantly different (P < 0.05).)
Conclusion
Spent Pleurotus eryngii substrate was promising for cultivation of Agrocybe chaxingu under ambient temperatures. On the recycled substrate supplemented with only 1% sucrose and 1% lime a biological efficiency of 40.42% was achieved. The formulae with an additional supplementation of 10% wheat bran or 10% Tenebrio molitor feces raised the biological efficiency to 52.50 and 51.56%, respectively. The formulae with an additional supplementation of 20% T. molitor feces or 10% T. molitor feces plus 10% wheat bran demonstrated the highest biological efficiencies of 62.95 and 61.10%, respectively, significantly higher than that of the formula with supplementation of 20% wheat bran; therefore, T. molitor feces was an excellent supplement that was superior to wheat bran for Agrocybe chaxingu cultivation. | 2019-04-02T13:11:37.824Z | 2017-09-21T00:00:00.000 | {
"year": 2017,
"sha1": "ba94b50e9a87dc57c4e0657f8b2d82e11f5da5bc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40093-017-0171-9.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "ba94b50e9a87dc57c4e0657f8b2d82e11f5da5bc",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
244173937 | pes2o/s2orc | v3-fos-license | Conservation Units and Sustainable Development Goals: An Examination of the Private Natural Heritage Dona Benta e Seu Caboclo in Brazil
The 2030 Agenda is a global action plan presented by the United Nations (UN) that establishes the Sustainable Development Goals (SDGs). Conservation Units (UCs) are an important element of the strategy towards nature conservation. Starting from a local approach to critically analyze these issues of global relevance, the focus of the investigation is the Dona Benta e Seu Caboclo Private Natural Heritage Reserve (RPPN), a private conservation unit located in the municipality of Pirambu in the state of Sergipe, and the community surrounding the Lagoa Redonda settlement. The study aimed to analyze the perception of the owner and the community regarding the environment, the RPPN, and the SDGs in order to build a critical approach to the issues that interconnect nature conservation and sustainable development. The methodology is based on an interview with the owner and a focus group with the community, carried out between January and March 2020. Following these interviews, it was ascertained that there is a divergence in how public and private lands are understood by locals: private lands are exclusively associated with production, whilst public land is associated with conservation. Community representatives do not recognize the RPPN as a conservation area with those associated objectives. Yet, debates on the environment and sustainable development intertwined with nature conservation are recognized by everyone as a priority. In the end, it is possible to recognize the importance of strengthening a space for coexistence between the local population and the RPPN in order to implement common and transformative actions in favor of conservation and sustainable development.
Introduction
In 2015 the United Nations (UN) presented the 2030 Agenda and the Sustainable Development Goals (SDGs) as a global action plan that proposes to support governments in designing public policies for development, combining economic, social and environmental concerns. The 2030 Agenda is composed of 17 SDGs, interconnected among themselves and understood through the lens of five principles: people, prosperity, peace, partnerships, and the planet, promoting environmental management that integrates natural resources and ecosystems (United Nations, 2015).
The set of SDGs in the 2030 Agenda established to guide actions in favor of maintaining natural resources and ecosystems is focused on achieving water security on the planet, adopting clean and accessible energy, sustainably conserving the oceans, seas, and marine resources, as well as protecting terrestrial ecosystems and combating global climate change. Since the adoption of the 2030 Agenda, it is possible to recognize efforts in environmental governance to bring local realities closer to the goals proposed by the SDGs that guide actions for the integrated and sustainable management of natural resources and ecosystems. These efforts also extend to the local realities of the Conservation Units.
UCs are territorial spaces for environmental protection; they possess relevant natural characteristics and are recognized as effective instruments for the preservation and conservation of nature (Newsome and Hughes, 2018). UCs are also territorial spaces that support biodiversity conservation and socioeconomic development (Bhammar et al., 2021).
In Brazil, they were legally instituted and regulated in 2000 by the SNUC -National System of Conservation Units. The SNUC not only unified the legal treatment of the units but also characterized them according to their management objectives and types of use for the purpose of nature conservation (Brasil, 2000).
Recognizing the importance of reflecting on how these themes interact with each other, this article is part of broader research developed in the Health and Environment Program (PSA) of Tiradentes University (UNIT), entitled "Sustainable Development Goals (ODS) and Private Natural Heritage Reserves (RPPN) in Brazil". It proposes to begin from a local approach to the Private Natural Heritage Reserve (RPPN) Dona Benta e Seu Caboclo and the surrounding community in Povoado Lagoa Redonda in order to critically analyze this global reality.
Tracing the Path of the 2030 Agenda
The United Nations (UN) in 1972 promoted the Conference on the Human Environment, also known as the Stockholm Conference, with the objective of debating themes related to the importance of the preservation of nature and the role of society in doing so.
The Stockholm Conference was the first major global conference focused on environmental degradation and policies of human development. The United Nations Environment Programme (UNEP) was created as a result of this conference and has since been working on issues related to the environment in conjunction with governments and other organizations (Seyfang, 2003).
According to Corea do Lago (2007) one of the great merits of the Stockholm Conference was to bring to the world agenda discussions such as the repercussion of pollution on people's quality of life, previously only privately debated by national diplomats. Thus, it is necessary to recognize that other major environmental issues, such as sustainable development or the difficulties in implementing the recommendations focused on environmental preservation are only debated today, driven by the global meetings that have taken place since then.
In 1992, twenty years after the Stockholm Conference, Rio de Janeiro hosted the United Nations Conference on Environment and Development (UNCED), also known as Eco-92 or Earth Summit. The Earth Summit not only produced important documents, such as the Rio Declaration on Environment and Development and the Declaration of Principles on the Use of Forests, bringing debates on development, environmental protection, and social justice to the world agenda but also culminated in the approval of "Agenda 21" (Kumar,2020).
According to Malheiros et al. (2007), Agenda 21 not only proposed to the signatory countries of the UN a new model of sustainable development, but also, put pressure on States to develop strategies and plans adapted to local realities in favor of sustainable development.
In 2000, at the 55th Session of the UN General Assembly in New York, Heads of State and Government of 191 nations, considering the themes previously proposed in Agenda 21, signed a commitment approving the "Millennium Declaration", a common document synthesized into eight Millennium Development Goals (MDGs), with the objective of enshrining into governmental strategies goals such as combating extreme poverty, promoting gender equality and ensuring environmental sustainability (United Nations, 2000).
Continuing this series of world meetings on the environment proposed by the UN, in 2002 the World Summit on Sustainable Development, also known as Rio + 10, was held in Johannesburg, South Africa. In 2012, the debates returned to Rio de Janeiro at the United Nations Conference on Sustainable Development, also known as Rio + 20, when the document entitled "The Future We Want" was presented, proposing new challenges to the signatory governments and addressing topics such as the green economy and sustainable development (United Nations, 2012).
During Rio + 20, the "General Assembly Open Working Group on Sustainable Development Goals" was created, and is made up of representatives of governments and social, technical, and civil society organizations with the purpose of preparing a new document that could synthesize the new set of actions and priority goals aimed at a new world development agenda (Unite Nations, 2014).
In 2015, representatives of the 193 UN member states, meeting at the General Assembly in New York, approved this new document, now entitled "Transforming Our World: The 2030 Agenda for Sustainable Development", composed of 17 Sustainable Development Goals (SDGs) and 169 targets that brought to the agenda topics such as: Eradication of poverty; Sustainable agriculture and zero hunger; Health and well-being; Quality education; Gender equality; Sanitation and drinking water for all; Clean and affordable energy; Decent work and economic growth; Industry, innovation and infrastructure; Reduction of inequalities; Sustainable cities and communities; Responsible consumption and production; Action against global climate change; Life below water; Life on land; Peace, justice and effective institutions; and, finally, the importance of institutional partnerships and means of implementation (United Nations, 2015).
Conservation Unit (UC): Private Natural Protection Reserve
Conservation Units (UCs) are natural areas recognized as effective instruments for the preservation and conservation of nature (Correa, 2021; Gatti, 2017; Brasil, 2006). In Brazil, UCs were instituted with the enactment of Law No. 9.985/2000, when the National System of Conservation Units (SNUC) was created, which unified the legal treatment of the units and characterized them into two large groups according to their management objectives and types of use: the Integral Protection UCs, in which only the indirect use of their natural attributes is allowed, and the Sustainable Use UCs, in which it is possible to make nature conservation compatible with the use of part of their natural resources. Created at the federal, state or municipal level, they are subdivided into the categories: Ecological Station; Biological Reserve; National Park; Natural Monument; Wildlife Refuge; Environmental Protection Area; Area of Relevant Ecological Interest; National Forest; Extractive Reserve; Wildlife Reserve; Sustainable Development Reserve; and Private Natural Heritage Reserve (Brasil, 2000).
Methodological Procedures
In this study, a case study approach was adopted with the owner of the RPPN and a focus group with members of the rural Povoado Lagoa Redonda community. The research project upon which this article is based was submitted to and approved by the UNIT Research Ethics Committee in November 2019 under CAAE 0047519.5.000.5371.
The criteria established for choosing RPPN Dona Benta e Seu Caboclo were as follows: being an RPPN located in the State of Sergipe, thus meeting the requirements of the research program to which it is linked, which establishes Sergipe as a priority territory for studies; having its management plan approved; and being a UC open to scientific research and to visitation for tourism, recreational and educational purposes. For data collection with the owner of the RPPN, an interview script was prepared, composed of clear and unambiguous questions, with the purpose of knowing the motivation for the creation of this RPPN and the activities allowed in the area, as well as understanding, from the perspective of the RPPN owner, the relations between the universe of the RPPN and the SDGs and which public policies would be fundamental for strengthening RPPNs in Brazil and specifically in Sergipe. The interview was conducted in January 2020, and the data collected were treated through Discourse Analysis (AD), which according to Gatti (2005) allows interpreting and recognizing language not only as a linguistic form but also as a repository for understanding the social values and cultural narratives by which its speakers live.
In a second stage, to collect data from the community of Povoado Lagoa Redonda, the focus group technique was adopted, which according to Gatti (2005, p.11) allows us to "... understand everyday practices, actions, and reactions to facts and events, behaviors and attitudes, building an important technique for the knowledge of representations, perceptions, beliefs, habits, values, restrictions, prejudices, languages and symbols."
The discussion process with members of the community took place in March 2020 in a communal area of the village, lasted four continuous hours, and was attended by ten residents of the community. The criteria adopted for choosing participants in the focus group were: being over 18; being interested in the proposed themes; and being willing to participate in discussion with other members of the community.
For the treatment of the data, Content Analysis (CA) was carried out, which according to Bardin (1977) is "a set of techniques of analysis of communications aiming to obtain by systematic procedures the description of the content of the messages." The Iramuteq software (Interface de R pour les Analyses Multidimensionnelles de Textes et de Questionnaires) was used; it allows different forms of analysis, among them the similarity analyses adopted in this study for the graphical grouping and organization of words (Loubère and Ratinaud 2014).
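As a rough illustration of the kind of similarity (co-occurrence) analysis performed by such software, a minimal sketch of counting word co-occurrences within answers; this is not Iramuteq's actual algorithm, and the example answers are invented:

from collections import Counter
from itertools import combinations

def cooccurrence_counts(answers):
    """Count how often pairs of words appear together in the same answer."""
    pairs = Counter()
    for text in answers:
        words = sorted(set(text.lower().split()))
        pairs.update(combinations(words, 2))
    return pairs

# invented focus-group answers, for illustration only
answers = [
    "nature preservation is important for the future",
    "the reserve could bring tourism and jobs",
    "preservation of nature matters for our children and the future",
]
for pair, count in cooccurrence_counts(answers).most_common(5):
    print(pair, count)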
Results and Discussion
Historically, the entire northern coastal strip of the State of Sergipe has suffered from the depredation of natural resources and the almost total extinction of the Atlantic Forest. The conflicts that permeate this coastal ecosystem constitute a series of socio-environmental impacts that point to losses of natural and cultural resources. From the perspective of territorial planning, the North Coast of Sergipe is perhaps the most conserved area in the state, thanks to its preserved natural areas that house three of the state's UCs: the Santa Isabel Biological Reserve (REBIO), an important federal area of integral protection established in 1988, in addition to the RPPN Morro da Lucrécia and the RPPN Dona Benta e Seu Caboclo, constituted in 2012 and 2010, respectively (Barreto, 2019). It is in this context that the Cordeiro de Jesus Farm, which houses the RPPN Dona Benta e Seu Caboclo and was formerly known as the "Lagoa do Avô" Farm and focused on coconut production, was acquired by the current owner, changed its name, created the RPPN and became the headquarters of the nature preservation, leisure, and tourism complex Ecomuseo do Roceiro. According to the interviewed owner, there are many motivations for a rural landowner to create an RPPN, but what motivated him personally was his ideal of keeping the local ecosystem intact, together with the encouragement of local technicians from environmental agencies. "...So, I can say that respect for nature in my case comes from birth. But I wanted to go beyond respect, I wanted to have a farm, produce organics, preserve the forests and animals until the opportunity arose to acquire a property with all the natural elements I was looking for (water, forest, animals, dunes, communities, etc.) to start my project." "...But I wanted to go beyond respect, I wanted to have a farm, produce organics, preserve the bush and animals until the opportunity arose to acquire a property with all the natural elements I was looking for." Reflecting on the role of RPPN Dona Benta e Seu Caboclo as a driving tool for the sustainable development of the territory, the search for partnerships with educational institutions and the like, the relationship with the community, and ecotourism programs are the guiding axes of the unit, according to reports by the RPPN manager. "In the Research and Monitoring Program, several research projects have already been carried out, covering fauna, flora, tourism and the communities surrounding the RPPN, through educational institutions: UFS, IFS, UNIT, Faculdade Jardins, UFPE, UFRJ and schools of the county." Regarding the perception of the owner of the RPPN about the 2030 Agenda and the SDGs, he reaffirmed the integration of all objectives with each other and emphasized that the 2030 Agenda is a tool for bringing the community closer together, as it encourages interaction with nature in a more harmonious and sustainable way (see Table 1). The approximately 350 residents (2010) make their income from extraction, followed by fishing activities, subsistence agriculture and tourism (Sergipe, 2020). According to Braghini & Vilaret (2013), since the creation of REBIO Santa Isabel, discontent has arisen in the region among the population of the village, because at the same time that this unit brought national prominence to the locality, it also established restrictive norms of access to the resources existing in that territory. Regarding the perception of local communities about the RPPNs:
It is observed in the similarity analysis graph that a significant number of participants do not know about the existence of the RPPN nor its objectives, but they know the rural property Cordeiro de Jesus farm where the RPPN Dona Benta and Seu Caboclo is located as well as the owner, see Table 2. Reaffirming occurrences between words indicating convexity between them. 2. It is observed that the words that stand out most in the speeches reaffirm the local narrative of ignorance of the RPPN as a preservation area.
3. From the prominent word, other words that have a significant expression are branched out, reaffirming the narrative of knowing REBIO Santa Isabel as a conservation unit.
4. At the end of the ramifications, there is the relationship between private rural property with an "owner" and not as a conservation unit.
Source: Prepared by the authors (2020) The possibility of having a natural preservation area on private land was a topic that caused strangeness and aroused the curiosity of the participants who refer to conservation units as public areas, alluding to REBIO Santa Isabel as being "that government area closed to the population." Linked to the local understanding that conservation areas only exist on public lands and that private lands are associated exclusively with production, community representatives do not recognize the RPPN as a nature conservation area, and neither its objectives, they do however identify that being a preservation area, with tourism potential, could generate more jobs for the local population. Regarding environmental issues, there is a local narrative valuing the importance of debates on the theme, however, such statements are related exclusively to nature preservation. They reaffirm the importance of debates on the environment and highlight the importance of the participation of children in the community in these debates. But they also demonstrate a careful look for survival, bringing in the discourse not only the values focused on the need to preserve the place where they live but also the village's potential for tourism and the promotion of sustainable agriculture and fishing by proposing partnerships with RPPN Dona Benta and Seu Caboclo focusing on tourism development actions, see Table 3. Table 3. Similitude Analysis: Answers to the guiding question "Do you know the RPPN Dona Benta e Seu Caboclo and know that it is a Private Natural Heritage ?‖ 1.It is observed that there are words that stand out in the speech reaffirming the occurrences between the words and the indications of the convexity between them. It is observed that there are words that stand out most in the speeches reaffirming the local narrative of the importance on the environment, but directly related to the values intrinsic to nature. 2.From the prominent word, other words branch out that present a significant expression of the future narrative interconnected with the role of children in the community. 3.At the end of the ramifications, the relationship between nature and the future is contemplated.
Source: Prepared by the authors (2020) In the presentation of the guiding questions in the focus group on the 2030 Agenda and SDGs. Although, according to the UN, the document "Transforming Our World: The 2030 Agenda for Sustainable Development‖, composed of 17 Sustainable Development Goals (SDGs) integrated among themselves, establishes that the involvement of different segments of the population is as important as the participation of companies and governments.
The distance between the population and the 2030 Agenda, and its proposed objectives, is clear when the theme arouses the curiosity of the participants, and almost all respondents stated that they do not know the SDGs or their purposes.
It is observed that a single negative word stands out in the speech, reaffirming the few occurrences between the words and the indications of convexity. It is also observed that no discourse has formed about the SDGs (see Table 4). The discussion nevertheless allowed the participants of the focus group to correlate some realities experienced in the village with the objectives set out in the 2030 Agenda (see Table 5).
Conclusion
Starting from a local approach to critically analyze a topic of global relevance was a challenging proposal, albeit one that allowed a critical approach towards the issues that interconnect nature conservation and local, sustainable development.
It is believed that one of the contributions of this article is, in addition to reflecting how themes such as the SDGs and UCs interconnect with each other in Brazil, bringing more critical and reflective participation from the community.
Reconciling the protection of natural resources with practices that enable sustainable development, prosperity and the appreciation of people is increasingly required and demanded by local society.
RPPN Dona Benta e Seu Caboclo is a UC category with ample conditions to assist in this dialogue, because its main characteristic is that the owner voluntarily adheres to rules designed for the conservation of nature while also using the area for tourism, leisure and environmental education. However, the local understanding that conservation units exist only on public lands, while private lands are associated exclusively with the extraction and use of raw materials and not with preservation, compels us to reflect on the need to strengthen a space of integration and coexistence between the local population and the RPPN in order to establish these links between the actors.
The importance of analyzing the perception of the surrounding community about the RPPN is emphasized in order to know the meanings and attitudes that govern such local relations. Alongside this, the study examines the possibility of adopting the SDG framework proposed by the 2030 Agenda as a tool for recognizing the local reality, so that it can support municipal and state governments in the design of public policies and partnerships between local agents. There are also some final reflections on why environmental debates are important in rural communities close to preservation areas and on how these paths can be built, recognizing in dialogue and environmental education paths that can facilitate and strengthen the space of coexistence between community universes; between rural productive areas and natural preservation areas. | 2021-10-19T15:16:56.139Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "858587383fd652a2624338e88c69e0272889f923",
"oa_license": null,
"oa_url": "https://redfame.com/journal/index.php/ijsss/article/download/5322/5581",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4aa53a5ddd74bb929808e104dc4b5cad1f144f0a",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
255834382 | pes2o/s2orc | v3-fos-license | NUT midline carcinoma mimicking a germ cell tumor: a case report
NUT midline carcinoma (NMC) is a rare and highly aggressive malignancy. Although more information on NMC has been recently accumulating in the literature, most oncologists and pathologists remain unfamiliar with the clinical and pathologic features of this disease. The clinical features of NMC sometimes mimic those of other malignancies, and NMC can therefore be overlooked if the diagnosis is not suspected. We present the case of a young male with NMC arising in the mediastinum with elevated serum alpha-fetoprotein levels suggestive of an extragonadal nonseminomatous germ-cell tumor. A 28-year-old Japanese male presented with cough and left-sided chest pain for 6 weeks. The patient had a mediastinal tumor with metastases to the right lung, lymph nodes, and bones at initial presentation. Nonseminomatous germ cell tumor was suspected due to the young age, location of the tumors, and elevated serum alpha-fetoprotein. However, biopsy confirmed the diagnosis of NMC with immunohistochemistry. The tumor briefly responded to cytotoxic chemotherapy but subsequently progressed and became refractory to the chemotherapy regimen. External beam radiotherapy was administered with dramatic shrinkage of the tumor and a metabolic response on 18-fluoro-2-deoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scan. However, the patient died 4.5 months after the diagnosis of NMC. Serum levels of alpha-fetoprotein may be elevated in patients with NMC. Regardless of the level of tumor markers, immunohistochemistry for NUT should be performed in cases of poorly differentiated carcinomas without glandular differentiation arising in the midline structures. 18F-FDG PET/CT is useful for staging and assessing responses to therapy.
Background
NUT midline carcinoma (NMC) is a highly aggressive subset of squamous cell carcinomas, affecting both children and adults [1]. The genetic hallmark is a rearrangement of the NUT gene, located on chromosome 15 [2]. The rearrangement commonly occurs between the NUT gene and BET family genes BRD4 and BRD3 [1], although other rare fusion partners of the NUT gene have also been recently reported [3].
Because of the poor prognosis (median survival 6.7 months) [2] and poor response to conventional cytotoxic chemotherapy, new drugs such as BET inhibitor (BETi) and histone deacetylase inhibitor (HDACi) are now in clinical trials for patients with NMC [3]. Because of the availability of these potentially promising new investigational drugs, prompt diagnosis of NMC is even more important to plan appropriate treatment and to encourage patients to consider participating in clinical trials. Most oncologists and pathologists are not familiar with NMC owing to its rarity. The clinical features of NMC sometimes mimic those of other malignancies. For these reasons, NMC may often be misdiagnosed if it is not suspected and specifically looked for. In one study, 114 cases of poorly differentiated carcinomas or unclassified mediastinal malignancies were pathologically reexamined using immunohistochemistry for NUT and fluorescence in situ hybridization (FISH), leading to the diagnosis of NMC in 4 (3.5%) cases [4]. Here we report the case of a young male with NMC arising in the mediastinum with elevated serum alpha-fetoprotein (AFP) levels, suggestive of an extra-gonadal nonseminomatous germ cell tumor (NSGCT).
Case presentation
A 28-year-old Japanese male presented with cough and left-sided chest pain for 6 weeks. The medical, surgical, and family histories were unremarkable. He smoked approximately 20 cigarettes per day for 6 years and infrequently consumed small amounts of alcohol. Physical examination was unremarkable; the lungs were clear to auscultation. Chest X-ray revealed an enlarged mediastinum. A full-body CT scan showed a bulky mediastinal mass with right bronchial stenosis, lymphadenopathy in the right side of the hilum and supraclavicular region, and a mass in the right middle lobe measuring 4.4 × 3.0 cm (Fig. 1). 18 F-FDG PET/CT showed the involvement of multiple bones, including spine, scapula, ribs, sternum, pelvis, and femur (Fig. 2a).
The clinical course and patient background suggested a differential diagnosis that included lung cancer, lymphoma, and a mediastinal germ cell tumor (GCT). Pathology examination of tissue from an endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) biopsy of a mediastinal lymph node revealed a loosely cohesive growth pattern with prominent necrosis and degeneration and no clear pattern of differentiation (Fig. 3a). The tumor was composed of ovoid and spindle-shaped cells with anisocytosis, scanty cytoplasm, and irregular ovoid hyperchromatic nuclei (Fig. 3b).
Only a minor proportion (<5%) of the cells were positive for AE1/AE3. Squamous cell markers (p40 and p63) were weakly positive in 10-20% of the cells. Neuroendocrine markers (chromogranin and synaptophysin), leukocyte common antigen, myogenic markers (MyoD1 and myogenin), germ cell markers (placental alkaline phosphatase and hCG), c-kit, TTF-1, CA19-9, and CD30 were all negative. The surface markers of tumor cells obtained with flow cytometry were not compatible with lymphoma. A bone marrow aspiration and biopsy revealed infiltration of cells with pathologic features similar to those of the EBUS-TBNA biopsy specimen.
The clinical presentation of a mediastinal tumor in a young male with an elevated serum AFP suggested NSGCT, but findings of immunohistochemistry of tumor sections were not consistent with that diagnosis. An outside pathology consultation was obtained, and the diagnosis of NMC was suggested due to the pathologic and clinical characteristics. Thus, immunohistochemistry using NUT (C52B1) rabbit monoclonal antibody was performed.
While waiting for results of the immunohistochemical examination for NUT, serum levels of LDH increased to 993 IU/L on day 11, suggesting rapid disease progression. Because of the poorly differentiated pattern on histologic analysis, elevated serum AFP, and case reports indicating that cisplatin-based treatment showed some efficacy for treating NMC [5], we decided to start chemotherapy with the BEP regimen (bleomycin 30 kU on days 1, 8, and 15; etoposide, 100 mg/m² on days 1-5; and cisplatin, 20 mg/m² on days 1-5; given every 21 days). On day 8 of the first cycle, the immunohistochemistry result revealed that most of the neoplastic cell nuclei were strongly positive for NUT in a speckled pattern (Fig. 3c). The diagnosis of NMC was thus confirmed. After two cycles of chemotherapy, CT showed tumor regression, and serum levels of LDH declined. Therefore, the BEP regimen was continued. However, after three cycles, CT showed tumor progression.
Because the performance status of the patient had declined, we considered single-agent chemotherapy to be most appropriate and started doxorubicin (75 mg/m² every 21 days). Despite the change in chemotherapy regimen, serum levels of LDH continued to increase. Therefore, we judged doxorubicin to be ineffective, although a CT scan after the first cycle of doxorubicin showed no change in tumor volume. We believed that local control of the mediastinal mass was most important for the patient at that point to prevent airway obstruction as the tumor progressed. Some authors have reported on the effectiveness of radiotherapy and chemoradiotherapy for NMC [2,6]; we therefore administered mediastinal radiotherapy with concomitant weekly docetaxel (30 mg/m²). Radiotherapy was planned with conventional fractionation, 60 Gy/30 fractions (fr).
CT after 16 Gy had been administered showed an apparent decrease in tumor bulk in the irradiated area, although it had increased in other areas. Docetaxel did not seem to be beneficial for systemic tumor control, and platelet counts had decreased by 2.9 × 10³/μL; thus, docetaxel was discontinued after four cycles and radiotherapy alone was continued. The patient started to complain of pain in the lower back and right femur; MRI confirmed the presence of osteolytic bone metastases. Palliative radiotherapy (30 Gy/10 fr) for metastases in vertebrae L3-S1 and the right femur was started concurrently with irradiation to the mediastinum. Although the pain in the lower back and right femur was relieved, the patient developed painless proptosis in the right eye. While considering additional radiotherapy to prevent pain, we performed 18F-FDG PET/CT to evaluate the extent of metastases to the bones and other organs. It showed that radiotherapy had achieved good local control in the mediastinum, vertebrae L3-S1, and right femur, but there were many new sites of abnormal FDG accumulation (Fig. 2b). Moreover, there was an abnormal FDG uptake in a mass in the right orbital soft tissue (Fig. 2c), suggestive of orbital metastasis. Since then, the patient's condition gradually deteriorated, and only palliative care was given. He died 4.5 months after the initial diagnosis of NMC.
Discussion
Except for the inconsistency with histologic results, the characteristics of the present case for the most part resembled those of extra-gonadal NSGCT: 1) occurrence in young adults, mostly males; 2) midline location; 3) metastases to the lungs, liver, and bones; and 4) elevated serum tumor markers (AFP and hCG) [7].
In an international analysis of mediastinal nonseminomas, an elevated serum AFP was present in 74% (211/ 287) and β-hCG in 38% (110/287) of the cases [8]. The serum AFP level of the present case was compatible with those results. Aside from the present case, there are only three case reports of NMC with elevated serum AFP levels. In one case, the AFP level was 326 μg/L and β-hCG was <1 IU/L [5]; in another case, they were 62 ng/ mL and N/A [9]; in the other, they were 1742 ng/mL and <2 IU/L, respectively [10]. If immunohistochemistry for NUT had not been performed, these cases may have been classified as poorly differentiated carcinoma with midline distribution (extra-gonadal germ cell syndrome), one of the groups of carcinoma of unknown primary with favorable prognosis [11].
French et al. [11] recommended that immunohistochemistry for NUT should be performed in all poorly differentiated carcinomas without glandular differentiation arising in the chest, head, and neck. This means that even if serum tumor markers for GCT are elevated in cases of a tumor arising from such sites, NMC should be suspected.
Definitive diagnosis can be made only by demonstration of nuclear staining using the rabbit monoclonal antibody (clone C52B1) for the NUT protein, even without confirmation of the fusion oncogene with FISH. The antibody is highly sensitive (87%) and specific (100%) in non-GCTs [12]. Because we did not suspect NMC initially and did not cryopreserve the pathology specimen, we could not perform additional molecular analyses such as FISH. It is noteworthy that GCTs can also display nuclear NUT reactivity, but the staining is very focal (<5% of tumor cells) and faint, and it lacks the speckled pattern [13]. In one report, CD34 immunoreactivity was positive in 7 of 11 NMC cases even though they were epithelial tumors [14]. We performed additional immunohistochemical staining for CD34 in the present case after the patient's death; however, no significant CD34 staining was observed.
Another remarkable feature in the present case was the metastasis to the orbital soft tissues. Only four patients with NMC who had orbital involvement have been reported [2,15-17]. Although involvement of the mediastinum, paranasal sinuses, nasal cavity, intrathoracic organs, bone, and lymph nodes has been commonly reported, intra-abdominal organs [5,18] and cutaneous tissue [19] are rare metastatic sites. To date, there has been no report of brain involvement [20].
The usefulness of 18F-FDG PET/CT has been demonstrated in staging and assessing the response to treatment of NMC [21,22]. In our patient, 18F-FDG PET/CT was performed before and after treatment, showing a good response to radiation therapy by the tumor. Metastasis to the bone marrow was shown with 18F-FDG PET/CT, but not with CT alone. We therefore recommend performing 18F-FDG PET/CT before treatment for accurate staging of NMC.
As in other reports, the tumor in the present case briefly responded to cytotoxic chemotherapy but became refractory to the treatment soon after. There are no specific effective chemotherapeutic regimens [2] because even dose-dense chemotherapy was not found to control the tumor for long [23]. It is probably impossible to cure NMC with cytotoxic chemotherapy alone. Thus, novel targeted therapeutic approaches such as BETi (direct acting inhibitors of the BRD3 and BRD4 bromodomains) and HDACi are highly anticipated [3]. Several phase I clinical trials of BETi (GSK-525762A and OTX015) and HDACi (CUDC-907) are available to patients with NMC. Because of rapid tumor progression and only a short-term response to cytotoxic chemotherapy, we should encourage patients diagnosed with NMC to consider participating in clinical trials as early as possible.
Conclusions
In summary, we present the case of a patient who had clinical features similar to those of extra-gonadal NSGCT. As serum levels of AFP can be elevated in NMC, immunohistochemistry for NUT should be considered in all poorly differentiated carcinomas arising in midline structures without glandular differentiation, regardless of the levels of tumor markers. 18F-FDG PET/CT is useful for staging and assessing the response to therapy. It is expected that novel targeted therapies may change the poor prognosis of NMC in the near future. | 2023-01-16T14:15:36.068Z | 2016-11-17T00:00:00.000 | {
"year": 2016,
"sha1": "117cbc1c9c61110bf2cc89686497b26f895d8326",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-016-2944-3",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "117cbc1c9c61110bf2cc89686497b26f895d8326",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
12510588 | pes2o/s2orc | v3-fos-license | Piezoresistive Membrane Surface Stress Sensors for Characterization of Breath Samples of Head and Neck Cancer Patients
For many diseases, where a particular organ is affected, chemical by-products can be found in the patient’s exhaled breath. Breath analysis is often done using gas chromatography and mass spectrometry, but interpretation of results is difficult and time-consuming. We performed characterization of patients’ exhaled breath samples by an electronic nose technique based on an array of nanomechanical membrane sensors. Each membrane is coated with a different thin polymer layer. By pumping the exhaled breath into a measurement chamber, volatile organic compounds present in patients’ breath diffuse into the polymer layers and deform the membranes by changes in surface stress. The bending of the membranes is measured piezoresistively and the signals are converted into voltages. The sensor deflection pattern allows one to characterize the condition of the patient. In a clinical pilot study, we investigated breath samples from head and neck cancer patients and healthy control persons. Evaluation using principal component analysis (PCA) allowed a clear distinction between the two groups. As head and neck cancer can be completely removed by surgery, the breath of cured patients was investigated after surgery again and the results were similar to those of the healthy control group, indicating that surgery was successful.
Introduction
More than a century ago, medical practitioners asked patients to exhale in order to figure out whether their breath contained specific smells possibly related to a particular disease. This old idea is here adopted to investigate breath samples of cancer patients using a nanomechanical electronic nose device. Specific chemical tracer substances or chemical by-products of metabolic processes are often found in the patient's breath for many diseases of the respiratory tract system. Conventionally, breath samples are analyzed using gas chromatography and mass spectrometry methods, but interpretation of results is difficult and time-consuming. Here, an electronic nose technique is presented to characterize patients' exhaled breath samples in a non-invasive way which allows a simpler analysis than with the abovementioned classical standard analytical procedures. Cancer is a disease where cells are growing in an uncontrolled way forming a tumor, invading and destroying adjacent healthy tissues and organs. Cancerous cells can spread to other locations in the body via lymph or blood vessels to form metastases, the most common cause of cancer-related death in patients with solid tumors.
Head and neck squamous cell carcinoma (HNSCC) is the fifth most important cancer type worldwide. HNSCC is highly curable if detected early. However, second primary tumors and local recurrences are a major challenge, the latter being the most common cause of treatment failure and disease-related death. Early detection of HNSCC and identification of residual or recurrent disease in treated patients allow early therapeutic intervention and may result in a survival advantage. Diagnosis is normally performed by endoscopy and taking a biopsy of suspect lesions. We propose here a non-invasive diagnostic technique based on detection of volatile organic compounds (VOCs) in exhaled breath using an electronic nose technique.
Detection of head and neck cancer using patients' exhaled breath is a well-established technique [1,2]. In cancer progression, the squamous cells of the head or neck provoke cellular oxidative stress [3], leading to the emission of cancer-specific VOCs into the blood [4]. A part of the VOC biomarkers in the blood is transmitted to the alveolar exhaled breath through exchange via the lung. The presence of such VOCs (particularly straight and monomethylated alkanes and benzene derivatives) in breath has been documented by gas chromatography/mass spectrometry measurements [5,6]. These types of VOCs also occur in the breath of healthy subjects, but in a different composition ratio than in cancer patients [4]. Numerous successful applications of electronic noses for breath testing have been reported in the literature over many years [7-15].
In recent years, mechanics has experienced a revival, as microfabrication technologies and nanotechnology are applied to produce tiny structures. The development started with a novel imaging technique named atomic force microscopy [16], which provides ultrahigh resolution of a surface on the atomic scale. This technology is based on raster-scanning a surface with a microfabricated cantilever beam that has a tiny tip at its free end. While keeping the distance constant between tip and surface by controlling their interaction force, a topography map of the surface is produced, revealing details on the atomic scale. Most frequently, a laser is used to determine the tiny deflection of the cantilever in the nanometer range by reflecting the laser beam at the apex of the cantilever and measuring the position with a lateral photodiode. Although the cantilever is very small, the readout still requires tabletop-sized equipment.
A cantilever beam is an excellent force sensor for ultra-small forces in the nano-Newton range. The high sensitivity can not only be used for atomic force microscopy, but also allows one to apply the cantilever beam for measuring surface forces during molecular adsorption processes on the cantilever surface, thus enabling cantilevers as chemical sensors [17].
Over the years, cantilever sensors have turned out to be very useful for detecting DNA hybridization with single point mutation sensitivity [18], protein and antibody recognition [19], and even for assessing patient eligibility for cancer treatment [20]. The only drawback is that the equipment for optical cantilever deflection readout is still quite bulky. This disadvantage can be overcome by employing a different method for deflection detection, namely the use of piezoresistor elements to determine bending. The required readout electronics then fits in a portable box of 10 cm × 10 cm × 16 cm, including data acquisition and gas handling. For detection of head and neck cancer, we take advantage here of the bending responses of an array of piezoresistive polymer-coated membranes upon exposure to VOCs.
Microfabrication
Medical applications favor the routine use of a compact, small-sized, portable and non-invasive device. A prototype was used to examine patients' exhaled breath samples in search of VOC patterns associated with head and neck cancer. Membrane-type surface stress sensors (MSS) were first described by Yoshikawa et al. [21]. Their application to medical sensing has been reported by Loizeau et al. [22,23]. MSS arranged in arrays for molecular detection in the gaseous phase have been microfabricated from silicon-on-insulator substrates and structured by deep reactive ion etching. The round membranes have a diameter of 500 µm and a thickness of 2.5 µm and are suspended by four sensing beams with integrated p-type piezoresistors, representing a full Wheatstone bridge (Figure 1). The p-doped piezoresistors have been fabricated using two distinct doping processes (ion diffusion through boron silica glass and implantation). The latter method features shallow resistors, which are very sensitive to surface stress changes.
Figure 1. The actual diameter of the round membrane (shown in blue) is 500 µm and its thickness is 2.5 µm. The membrane is suspended by four sensing beams with integrated p-type piezoresistors (shown in red), representing a full Wheatstone bridge. A solid supporting frame (green) holds the sensor.
Membrane Functionalization
The membranes have been coated with a thin (<1 µm) polymer layer using inkjet spotting (Figure 2). VOCs present in the breath sample diffuse into the polymer layer in a way characteristic for each polymer, resulting in swelling [14] and producing a bending of the membrane.
Clinical Pilot Study
In a clinical pilot study, we investigated breath samples from head and neck cancer patients and healthy donors (smokers) as control persons in a double-blind trial. The exclusion criteria included, among others, gastrointestinal bleeding and central nervous system disorders, as well as: 5. History of immunodeficiency or autoimmune pathologies. 6. Metastases in the central nervous system, not treated and progressing. 7. Chemotherapy, radiotherapy or immunotherapy less than 4 weeks prior to entry into the study (6 weeks for nitrosoureas). 8. Concomitant treatment with steroids and antihistamines; topical or inhaled steroid application is allowed. 9. Psychiatric disorders or dependencies that could prevent informed consent. 10. Kidney dysfunction with creatinine >2× the upper limit of the normal value. 11. Diabetes.
The patients were selected from the same age groups (between 60 and 85 years old, male and female). Unfortunately, the clinical pilot study comprised only a few usable patient samples. Originally there were four healthy donors and nine head and neck squamous cell carcinoma patients. Of the nine head and neck squamous cell carcinoma patients, only three provided usable breath samples both before and after surgery. For the other six, either only a breath sample before surgery was available or the volume of the sample was too small to allow reliable measurements (in some cases only 0.2 L was provided, which does not allow one to reliably follow the measurement protocol). Patients and donors were asked to breathe into a 1-liter Tedlar bag. Breath samples were collected before surgery and after surgery, typically 2 weeks after the operation. The bags were then stored at 4 °C until analysis. Each breath sample was measured seven times, whereby the first injection-purge cycle was discarded to avoid influence of previous measurements.
Results and Discussion
Breath samples from head and neck cancer patients and healthy donors (smokers) were characterized by the MSS electronic nose technique. Gaseous samples from the Tedlar sample bags were transported into the measurement chamber by pumping at a rate of 15 mL/min using micropumps (Bartels Mikrotechnik GmbH, Dortmund, Germany). After exposure of the polymer-coated MSS sensors to the sample for 30 s, the chamber was purged with dry nitrogen gas, also for 30 s. Several injection/purge cycles were performed, resulting in a chart as shown in Figure 3. Polymer-coated membranes can be reproducibly regenerated by purging them with dry nitrogen gas.
The deflection values within an injection/purge cycle at 10, 15, 20 and 25 s after the beginning of each injection were subtracted from the value at the beginning of the injection (0 s) to reduce the influence of possible drifts in the measurement. These four differential values obtained for each membrane sensor were processed using principal component analysis (PCA) and represented in a two-dimensional plot showing one dot for each breath sample measurement, i.e., one injection-purging cycle (Figure 4).
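To make the feature-extraction and PCA step concrete, here is a minimal Python sketch (our illustration only; the array layout, the 1 Hz sampling assumption and the use of NumPy/scikit-learn are ours, not part of the original analysis pipeline):

import numpy as np
from sklearn.decomposition import PCA

def differential_features(deflections, offsets=(10, 15, 20, 25)):
    # deflections: array of shape (n_cycles, n_membranes, n_samples),
    # assumed sampled at 1 Hz so that index t corresponds to t seconds
    base = deflections[:, :, 0:1]                    # value at the start of the injection (0 s)
    diffs = base - deflections[:, :, list(offsets)]  # 0 s value minus values at 10, 15, 20, 25 s
    return diffs.reshape(deflections.shape[0], -1)   # one feature vector per injection/purge cycle

def pca_scores(deflections, n_components=2):
    features = differential_features(deflections)
    return PCA(n_components=n_components).fit_transform(features)

# synthetic example: 6 cycles, 8 membranes, 31 samples (0..30 s)
rng = np.random.default_rng(0)
scores = pca_scores(rng.normal(size=(6, 8, 31)))
print(scores.shape)  # (6, 2): one 2-D point per cycle, as in Figure 4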
To emphasize the distinction capability of the method, hierarchical tree analysis (unweighted pair group method with arithmetic mean, UPGMA) was performed. In this method, the data are analyzed by calculating the Euclidean distance between vectors consisting of data points and their closest neighbors. Figure 5 shows the UPGMA diagram of the data. Breath measurements of HNSCC patients before surgery are clearly different from measurements of healthy control persons and of cured HNSCC patients after surgery, demonstrating the success of surgery. Other evidence that VOC profiles in exhaled breath can be used to detect diseases has been shown by Phillips et al. for lung and breast cancer [24]. A pilot study of analysis of air exhaled by HNSCC patients using an array of five gold nanoparticle sensors and gas chromatography has shown promising results [1,2]. VOCs related to diseases like diabetes mellitus and uraemia in breath were reported to be easily detectable using polymer-coated nanomechanical cantilever arrays [25], allowing different VOCs to be distinguished.
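UPGMA is equivalent to average-linkage clustering of the pairwise Euclidean distances between feature vectors, so the tree can be sketched with SciPy as follows (again an illustration under the same assumed feature matrix, not the software actually used in the study):

from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

def upgma_tree(features):
    # pairwise Euclidean distances between feature vectors,
    # then average linkage, which is exactly UPGMA
    return linkage(pdist(features, metric="euclidean"), method="average")

# dendrogram(upgma_tree(features)) would reproduce a Figure-5-style tree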
Screening for HNSCC with a non-invasive method based on exhaled breath represents a major advantage for possible detection of tumors at an early stage. Identification of subjects showing indications of VOCs related to HNSCC allows subsequent closer examination using traditional invasive techniques such as extraction of a tissue sample (biopsy). The detection of tumors based on testing the exhaled breath of patients is especially promising for tumors of the upper aerodigestive tract, as the probability is high that VOCs originating from different metabolic pathways of the cell, in particular from tumors at an early stage, can be detected.
Conclusions
Early detection of primary tumors and of recurrences after surgical removal of the primary tumor is crucial for patients with HNSCC. Invasive analyses, e.g., endoscopies, give a clear indication of treatment success, but are a burden for the patient. Detecting VOCs in exhaled breath represents a non-invasive method to follow the success of a treatment or surgery. We have shown that MSS are capable of distinguishing HNSCC patients before surgery from healthy control persons and from HNSCC patients after surgery by monitoring VOCs in patients' breath samples. The measurement device used is portable and powered by a laptop computer's universal serial bus port.
Detecting VOCs associated with cancer growth will ultimately lead to a simple, easily performable and non-invasive screening technique that can be used in conjunction with, or as an alternative to, standard more invasive techniques. The technique could eventually be adapted to other pathologies affecting the respiratory tract.
| 2016-07-25T08:52:20.182Z | 2015-11-10T00:00:00.000 | {
"year": 2016,
"sha1": "82a56794180765d29cf9c38dd0d1ced029e247e0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/16/7/1149/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f08620531f3e0e007d83248963a4c2b9fcf6a58",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Computer Science",
"Medicine"
]
} |
257764789 | pes2o/s2orc | v3-fos-license | Safety and Feasibility of Skin-to-Skin Contact in the Delivery Room for High-Risk Cardiac Neonates
Early skin-to-skin contact (SSC), beginning in the delivery room, provides myriad health benefits for mother and baby. Early SSC in the delivery room is the standard of care for healthy neonates following both vaginal and cesarean delivery. However, there is little published evidence on the safety of this practice in infants with congenital anomalies requiring immediate postnatal evaluation, including critical congenital heart disease (CCHD). Currently, the standard practice following delivery of infants with CCHD in many delivery centers has been immediate separation of mother and baby for neonatal stabilization and transfer to a different hospital unit or a different hospital altogether. However, most neonates with prenatally diagnosed congenital heart disease, even those with ductal-dependent lesions, are clinically stable in the immediate newborn period. Therefore, we sought to increase the percentage of newborns with prenatally diagnosed CCHD who are born in our regional level II–III delivery hospitals who receive mother-baby SSC in the delivery room. Using quality improvement methodology, through a series of Plan-Do-Study-Act cycles we successfully increased mother-baby skin-to-skin contact in the delivery room for eligible cardiac patients born across our city-wide delivery hospitals from a baseline 15% to greater than 50%. Supplementary Information The online version contains supplementary material available at 10.1007/s00246-023-03149-2.
Introduction
One important means of supporting mother-baby bonding is skin-to-skin contact (SSC), the placement of a neonate onto the mother's chest or abdomen to permit direct ventral-to-ventral skin-to-skin contact [1]. SSC is known to confer health benefits for both mother and baby, improving maternal stress and anxiety, promoting healthy mother-child interactions, and increasing maternal breastmilk intention rates [2][3][4][5][6][7][8][9]. For baby, SSC supports autonomic regulation, weight gain, and improved immunological functioning, as well as promoting attachment and healthy emotional and cognitive development [4,5,[10][11][12]. "Early SSC" refers to the initiation of SSC shortly after birth, ideally immediately after delivery and lasting through an initial breastfeeding attempt [1]; however, a consensus definition is lacking regarding the exact time frame within which SSC must be initiated to qualify as "early".
Specific benefits of early SSC are well-established and include improved fetal-to-neonatal transition and cardiorespiratory stability with enhanced autonomic regulation [2,13,14]. Published studies support that early SSC in healthy term and preterm neonates following both vaginal and cesarean delivery is both safe and effective [15][16][17][18][19], and early SSC is recommended by the American Academy of Pediatrics and the World Health Organization [20]. Further, researchers have hypothesized that this immediate postnatal period represents a critical window for neuronal programming, somatosensory system development, parent-infant bonding, social development, and reward processing and learning, suggesting the potential for longer-term neurodevelopmental benefits of early SSC [5]. For mothers, early SSC decreases analgesia requirements and the risk of peripartum hemorrhage [13,21]. Additionally, early SSC confers positive birthing experiences, decreased maternal stress, anxiety and pain, and increased rates of breastfeeding exclusivity and duration [7,22,23]. While data detailing contemporary early SSC rates in United States hospitals are inadequate at best, in Pennsylvania's experience of deliveries at 34-42 weeks gestation, 56% of mothers reported being able to hold their baby within five minutes of delivery [23].
Neonates and infants with critical congenital heart disease are at risk for a number of postnatal complications, including growth failure, feeding and breastfeeding challenges, and necrotizing enterocolitis [24][25][26][27][28]. They are also known to be at risk for neurodevelopmental impairments [29,30]. Ongoing SSC in an intensive care setting can be challenging, and many parents have expressed fears around holding their infant in the ICU as well as significant stress throughout the ICU stay [31,32]. However, among neonates with CCHD, SSC around the time of cardiac surgery improved neonatal comfort and decreased maternal stress and anxiety [33][34][35].
Despite the strong evidence demonstrating beneficial effects of early SSC, the separation of mother and baby following delivery remains routine in many circumstances, particularly for neonates with prenatally diagnosed congenital anomalies requiring timely postnatal evaluation [36][37][38]. Historically, standard practice following delivery of neonates with critical cardiac anomalies has been immediate separation of mother and baby to permit perinatal assessment and intervention by a neonatal resuscitation team, followed by transfer to a specialized unit or separate children's hospital, while the mother remains in the delivery center post-partum unit to recover. Uniquely among congenital anomalies, most neonates with prenatally diagnosed CCHD, even those with ductal-dependent lesions, are clinically stable in the immediate newborn period, and urgent postnatal interventions are not typically required [39]. Donofrio and colleagues have published "Levels of Care" guidelines for the resuscitation of infants by cardiac lesion, supporting routine delivery care for many infants with CHD [40]. While SSC is not specifically addressed in these guidelines, in any other context routine DR care would include early SSC. Despite the overwhelming evidence of the benefits of early SSC, we are aware of only one center that has ventured to explore SSC in the delivery room (DR) for neonates with CHD [41], and we are not aware of any published guidelines specifically addressing early SSC for cardiac neonates. Utilizing quality improvement (QI) methodology, we sought to increase the percentage of newborns prenatally diagnosed with eligible CCHD lesions who receive SSC in the DR from our baseline of 15% to greater than or equal to 50% within 18 months of project initiation, and to sustain this for 12 months.
Context
This QI project was implemented across eight regional delivery hospitals within the neonatal service line at Nationwide Children's Hospital (NCH) in Columbus, Ohio from 2018 through 2021. NCH itself is a large, not-for-profit, quaternary care, freestanding pediatric teaching hospital located in the Midwest with affiliations to regional delivery hospitals. The NCH Fetal Center performs regional multidisciplinary prenatal consultation for all families with prenatally diagnosed significant CHD through collaboration with fetal cardiology, neonatology, high-risk maternal fetal medicine, and other specialists. At our centers, a modified version of the Donofrio et al. Level of Care designation is utilized following a fetal cardiac diagnosis to develop an individualized delivery and perinatal care plan, with regional delivery hospital sites representing a spectrum of volume and acuity [42]. Across our service line, delivery hospitals range from Level I to III in neonatal intensive care capacity and vary significantly with regard to delivery volume and acuity. Generally, high-risk fetal deliveries are cohorted for delivery at one of the two highest-level delivery hospitals.
Medical Ethics Approval
The Institutional Review Board (IRB) at NCH determined that this project was QI and not human subjects research. Therefore, IRB review and approval was not required per institutional policy.
Preintervention
In January 2018 we assembled a multidisciplinary working group composed of key stakeholders: physicians trained in neonatology and pediatric cardiology; Fetal Center nurse coordinator; at least one neonatal nurse or nurse practitioner to champion the project at each delivery hospital ("project champion"). The team utilized strategies from the Institute for Healthcare Improvement (IHI) model for improvement, including Aim statement, Key Driver Diagram (KDD), and Plan-Do-Study-Act (PDSA) cycles to achieve project aims. After a review of baseline data, the project team developed a KDD ( Fig. 1) with identification of several key drivers for success including identification of appropriate candidates, communication, and buy-in from regional delivery hospital staff to support project engagement. These drivers informed development of specific interventions which were implemented sequentially through PDSA cycles.
Interventions
Our workgroup focused initial efforts on development of an eligible cardiac diagnosis list. We included all mother-baby dyads referred for prenatal consultation with a prenatal diagnosis of CHD determined to require postnatal admission to a neonatal or cardiac intensive care unit (NICU, CTICU). We incorporated recommendations from relevant subspecialists to define eligibility criteria based on both cardiac diagnosis and factors at delivery defining safety in the perinatal transition, and created standardized cardiac-diagnosis-specific guidelines for perinatal management (such as target saturation goals, postnatal hospital unit and monitoring requirements, and empiric prostaglandin need) (Table 1). Inclusion criteria supporting an expected normal perinatal transition required successfully meeting each of the following: delivery at full term gestational age (defined as a minimum of 37 weeks gestational age), a minimum Apgar score of eight at both one and five minutes, and absence of a positive pressure ventilation requirement during delivery resuscitation.
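Purely as an illustration of these transition criteria (the function and argument names below are ours and are not part of the project documentation), the check could be written as:

def meets_transition_criteria(gestational_age_weeks: float,
                              apgar_1_min: int,
                              apgar_5_min: int,
                              received_ppv: bool) -> bool:
    """Inclusion criteria for an expected normal perinatal transition:
    full term (>= 37 weeks), Apgar >= 8 at both 1 and 5 minutes, and no
    positive pressure ventilation during delivery resuscitation."""
    return (gestational_age_weeks >= 37
            and apgar_1_min >= 8
            and apgar_5_min >= 8
            and not received_ppv)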
Next, we defined contraindications in the DR. Exclusion criteria were identified to prioritize safety in the DR. CHD diagnoses with a high chance of early clinical instability, such as transposition of the great arteries with an intact ventricular septum, were excluded. Diagnoses not requiring intensive care admission were likewise outside the scope of the project, given that this situation would not be expected to result in mother-baby separation for delivery room stabilization and ICU admission and management.
Our multidisciplinary team developed a generalized algorithm for SSC that allowed for center-specific individualizations based on workflow, personnel and other unit factors. We additionally created guidelines for supervision of SSC, under which all eligible patients receiving DR SSC would be directly monitored by a NICU nurse, nurse practitioner or physician who would remain physically present with the baby throughout the SSC period, and cardiopulmonary stability would be evaluated via direct visual assessment and continuous pulse oximetry monitoring. Finally, standardized documentation (Data Collection Sheet) was developed to support delivery teams' recording of key elements of the SSC experience (Supplemental Fig. 3).
Initial project education included written study materials distributed across delivery centers and to members of the Fetal Center, as well as monthly virtual project meetings with open question and answer sessions. For our Fetal Center families, education regarding DR SSC was incorporated into their multidisciplinary fetal counseling session. When we evaluated barriers to SSC adoption across the first two data collection quarters, we discovered varying degrees of provider comfort in caring for neonates with CHD across delivery centers. Therefore, in January of 2019 we created targeted education for DR staff across all eight regional delivery hospitals in the form of PowerPoint slides focused on perinatal instability risk by cardiac diagnosis as well as the expected course of postnatal closure of the patent ductus arteriosus (PDA). For each delivery center, the project champion was tasked with dissemination and assessment of comprehension, although no formal measure was assessed.
Multiple iterations of communication pathways to effectively identify eligible patients and disseminate perinatal SSC plans across the neonatal service line were ultimately required, given our network of delivery hospitals. At the time of project go-live in 2018, we implemented a first formal process to track eligible fetal patients for DR SSC, as well as an initial process for dissemination to all delivery hospital personnel via a biweekly email listserv. Beginning in July of 2019, we updated this documentation process and our fetal team began to document individual patient eligibility in the electronic medical record (EMR) at the time of prenatal consultation, which was then standardized through an EMR "SmartPhrase" template in 2020. This allowed for eligible patients to be added to a DR SSC specific fetal list, which was maintained by the Fetal Center nurse coordinator for tracking purposes.
Measures and Definitions
Outcome Measure: 1. Percentage of eligible neonates successfully receiving SSC in the delivery room: Calculated quarterly by taking all eligible patients who received SSC in the DR and dividing by the total number of eligible patients. We defined "early SSC" as direct mother-baby contact in the delivery room but did not formally designate a required time to SSC initiation or a required duration.
Process Measure: 1. Percentage of eligible patients with a prenatally identified SSC plan: This process measure was chosen as we believed that effective development and dissemination of an SSC plan for all fetal patients would be one critical input required for delivery teams to achieve our desired outcome.
Balancing Measures: 1. Adverse events in the DR: Defined as provider concern for clinical decompensation, need for escalation of cardiorespiratory support, or early termination of SSC attributable to any provider-perceived concerns related to DR SSC. 2. Percent of neonates receiving DR SSC with hypothermia upon arrival to the NICU: Hypothermia was defined as a temperature at or below 36.0 °C, while hyperthermia was defined as a temperature greater than or equal to 38 °C.
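As an illustrative helper only (the names and structure are ours), the temperature thresholds above translate directly into code:

def classify_admission_temperature(temp_celsius: float) -> str:
    """Hypothermia: <= 36.0 C; hyperthermia: >= 38.0 C; otherwise normothermia."""
    if temp_celsius <= 36.0:
        return "hypothermia"
    if temp_celsius >= 38.0:
        return "hyperthermia"
    return "normothermia"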
Data Analysis and Study of Interventions
Data for analysis were obtained from July 2018 through March 2021. Our primary outcome measure was tracked on a p-chart, a type of Statistical Process Control (SPC) chart, with application of established rules to identify signals of special cause variation, or nonrandom change [43]. Given relatively small denominators, we plotted data quarterly. Our SPC chart was generated using QI Macros SPC Software Version 2020.10, an add-in to Microsoft Excel. Observed improvements were felt to be directly related to the interventions implemented as described, given the temporal relationship of interventions to improvements and the absence of other known changes to the system. Compliance with our process measure was gathered by performing chart review on all eligible patients during the study period, with documentation of a DR SSC plan required for compliance. Adverse events were assessed by documentation within the EMR DR summary or as reported in the Data Collection Sheet. Hypothermia and hyperthermia were assessed by EMR review to determine the initial recorded temperature upon NICU arrival. Both our process and balancing measures were analyzed utilizing descriptive statistics.
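For readers unfamiliar with p-charts, the center line and 3-sigma control limits are computed from the pooled proportion and the per-subgroup sample size; a minimal sketch of that calculation is given below (our illustration; the project itself used QI Macros rather than this code):

import numpy as np

def p_chart_limits(successes, totals):
    """Center line and 3-sigma control limits for a p-chart.
    successes, totals: per-quarter counts of SSC achieved and eligible patients."""
    successes = np.asarray(successes, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p_bar = successes.sum() / totals.sum()             # center line (pooled proportion)
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / totals)    # varies with subgroup size
    ucl = np.clip(p_bar + 3.0 * sigma, 0.0, 1.0)
    lcl = np.clip(p_bar - 3.0 * sigma, 0.0, 1.0)
    return p_bar, lcl, ucl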
Results
Our baseline percentage of eligible patients who received DR SSC was 15%. Following project initiation, 124 total patients met eligibility criteria based on cardiac diagnosis alone. Two of these patients were excluded as they were delivered outside our network of delivery hospitals, and another 2 had incomplete DR information to assess eligibility and were therefore excluded. Of the remaining 120 patients, 60% (72/120) met the DR transition criteria necessary for safe SSC. Of patients not meeting transition inclusion criteria (N = 48, 40%), the most common reasons were premature delivery (< 37 weeks gestational age) and Apgar score below threshold. Our target SSC success rate greater than 50% was achieved around 9 months into the intervention period, with the ultimate SSC success rate for our target population stabilizing around 70%. This SSC success rate was sustained for our QI project goal of 12 months. Figure 2 details the percentage of eligible newborns successfully receiving SSC in the DR. Data points are presented by quarter. The isolated drop below 30% once target SSC had been established represents months during the emergence of the SARS-CoV-2 pandemic, which across our delivery centers undoubtedly impacted factors responsible for successful SSC performance.
Regarding our process measure, of the 72 eligible patients, 49 (68%) had documentation of a prenatal plan for SSC, while 23 (32%) did not. This was noted as a large barrier to successful SSC early in the project; 87% (20/23) of patients with absent plan occurred in the first twelve months of study implementation, and none of these patients achieved successful SSC. PDSA cycles targeting barriers for this process measure proved critical for success.
Next, we explored SSC success by individual factors including delivery hospital and delivery modality, presented in Table 2. While the delivery hospital within our regional neonatal network did not appear to significantly impact the likelihood of success, delivery via cesarean section was associated with an overall SSC success rate of only 28%. Even when considering SSC success during the second half of the project, assuming a now-established culture of DR SSC, SSC was achieved only 25% of the time following cesarean delivery but 72% of the time following vaginal birth.
Regarding study balancing measures, no deaths occurred in the DR during the study period, and every patient was admitted to the NICU or CTICU as per their prenatal care plan. We did not identify any adverse event in the delivery room reportedly attributable to DR SSC. Evaluation of NICU admission temperature identified an overall low frequency of admission temperatures outside of the physiologic range. Admission hypothermia was reported in 2 neonates following SSC (3%, 2/70, with 2 patients missing data; lowest admission temperature 35.9 °C) and in zero neonates for whom DR SSC was not performed. Admission hyperthermia was reported in only 1 neonate (1%, 1/70), who was not a neonate completing DR SSC.
Discussion
Our QI project successfully developed and implemented a plan to increase DR SSC for mothers and infants with a prenatal diagnosis of CCHD. This initiative was very well received by our fetal families. Although a single study, our results are important in that they demonstrate both feasibility and safety for this notable change in peripartum culture. Additionally, this multi-site project involved eight delivery sites across our regional neonatal network, encompassing a spectrum of NICU levels, resources, and healthcare provider teams. Out of an abundance of caution, our eligibility criteria were intentionally stringent, with the clear goal of limiting SSC to the most stable neonates with CHD. We had no reported adverse events. The dialog created by our project improved the peripartum culture in our delivery hospitals and has allowed us to consider expanding our eligibility to permit inclusion of additional populations. Specifically, we propose that moving forward we may be able to expand eligibility to consider additional diagnoses and more lenient gestational age and one-minute Apgar score criteria, such as considering CHD neonates born at 36 weeks gestational age or with a lower one-minute Apgar score, provided they meet the five-minute Apgar score threshold. Table 3 reports our group's identified sources of challenge. Observed early barriers to successful implementation included perceptions around inadequate staffing, insufficient project awareness, and lack of DR team education. Consistent with our experience, lack of project awareness has been cited by other groups as a key barrier to successful implementation [21,37]. Thus, early PDSA cycles targeted family and delivery provider engagement through enhanced education to improve comfort with CHD physiology, collaborative development of CCHD care and SSC supervision guidelines, and development of a process for tracking eligible fetal patients/families with effective information distribution to key individuals across delivery centers. Achieving comfort, proficiency, and buy-in across our diverse regional delivery hospitals required an individualized approach at each hospital, which was most effectively achieved by partnering with the institutionally based project champion. Without question, individual delivery hospitals each identified unique site-specific barriers to implementation, including staffing model concerns, time limitations, comfort level with culture change, and multiprovider and multisubspecialty communication challenges. Additionally, we believe that family engagement through SSC education provided prenatally contributed to successful SSC; families were encouraged and empowered to request skin-to-skin contact following delivery.
As we overcame early barriers, cesarean delivery emerged as the most persistent and significant barrier to DR SSC success. Similarly, cesarean delivery has been a well-documented barrier to early SSC across diverse neonatal populations [11,36,38,44]. Recently, Crenshaw et al. reported that, despite positive core beliefs by health care providers regarding neonatal and maternal benefits of SSC, cesarean-section-related concerns limiting early SSC included the risk of hypothermia due to a cold operating room, contamination of the sterile field, and the inability to assess the newborn, with potential for maternal and/or neonatal instability [44]. However, clinical studies specifically evaluating the safety of early SSC following cesarean delivery suggest this intervention not only appears safe, but also decreases maternal pain, improves the maternal birth experience, and supports fetal-to-neonatal transition [14,21,45,46]. Moving forward, partnering with Labor and Delivery front line providers to improve engagement and support will undoubtedly be critical for sustained SSC improvement and evolution of the perinatal culture.
For our initial QI project, we prioritized achievement of any DR SSC as our first outcome, regardless of duration, as this felt most realistically attainable within our service line culture. In a large database review, no outcome differences were observed when SSC duration was stratified into less than 60 min versus more than one hour [1]. While this suggests that any period of early SSC may be beneficial, a more complete understanding of the effects of early SSC duration on desired mother-baby outcomes would be clinically important. Additionally, incorporating routine DR breastfeeding will be an important target outcome as we aim to improve mother-baby bonding and optimize longer-term maternal and neonatal health. Important for both families and caregivers, prioritizing mother-baby bonding and breastmilk use may be a key contributor to optimizing care and outcomes specific to the CHD population. Additionally, fathers and partners undoubtedly represent yet another important contributor to early bonding and family health. While our study was limited in scope to birth mothers, moving forward a more expansive consideration exploring the potential benefits of SSC between the neonate and an additional parent/caregiver would undoubtedly prove valuable for optimizing family-centered care and family bonding.
Considering the big picture, early skin-to-skin care following delivery represents one easily achievable piece of a much larger puzzle mapping opportunities to optimize family-centered care, family bonding, and health and wellness outcomes for both families and babies. This may be particularly important when considering challenges of families affected by high-risk congenital anomalies such as CCHD. Our strong belief is that this comprehensive, collaborative, and family-centered approach must start prenatally, continue perinatally, and extend postnatally throughout hospitalization and homegoing.
Limitations of our study include relatively small patient numbers, owing to piloting exclusively within our Nationwide Children's Hospital delivery network. Additionally, we acknowledge that assessment of measures and outcomes was dependent upon their inclusion within documentation in the EMR or data sheet. Further, our endpoint of SSC success was believed to be a first important outcome; however, additional details around the SSC experience, such as age at initiation, SSC duration, and the opportunity to attempt an initial breastfeed, will be important to explore moving forward.
Conclusion
We demonstrated safety and efficacy of early mother-baby skin-to-skin contact following delivery of neonates prenatally diagnosed with high-risk cardiac anomalies for which postnatal care required separation of mother-baby for neonatal intensive care. Overwhelmingly, delivery via cesarean section represents the most persistent and significant barrier to SSC success. We are hopeful that evolution of DR culture supporting SSC will have "trickle down" effects on pediatric health and maternal and family wellbeing; this may be particularly impactful for families already facing a potentially life-threatening condition for their child.
Larger-scale studies and exploration of early SSC effect on expanded clinical and longitudinal outcomes, such as feeding, growth and necrotizing enterocolitis, or duration of maternal breastfeeding and breastmilk use, as well as maternal wellbeing and mother-baby bonding would be important next steps. Additionally, incorporating early direct breastfeeding efforts into this SSC experience could prove beneficial for both mother and baby. | 2023-03-28T06:15:56.546Z | 2023-03-27T00:00:00.000 | {
"year": 2023,
"sha1": "eee8ad504bd138c8c21455415ac84ac8798d8f04",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00246-023-03149-2.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4884df96eb3148a077256815861ad0d47f4d38a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18888592 | pes2o/s2orc | v3-fos-license | Primitive elements in $p$-divisible groups
We introduce the notion of primitive elements in arbitrary truncated $p$-divisible groups. By design, the scheme of primitive elements is finite and locally free over the base. Primitive elements generalize the "points of exact order $N$," developed by Drinfeld and Katz-Mazur for elliptic curves.
Introduction
In this paper, we observe that Raynaud's theory of Haar measures on finite flat group schemes [Ray74] may be used to define a "non-triviality" condition on sections, which we call non-nullity. For groups of order p, we show that non-null sections are "generators" in the sense of Oort-Tate theory [TO70]. For truncated p-divisible groups, we use a non-nullity condition to define the notion of primitivity, generalizing the "points of exact order N" of Drinfeld [Dri74] and Katz-Mazur [KM85].
In the case of elliptic curves, Drinfeld and Katz-Mazur go further and define full level structures. This allows them to construct and prove nice properties of integral models of modular curves at arbitrary levels in a very elegant fashion. We believe that our definition of primitive elements may be a first step toward defining full level structures in certain cases, as it was in previous work by one of us in the case µ p × µ p [Wak16]. However, for general p-divisible groups, we believe that new ideas are needed, and we hope that this work will lead to a better understanding of the issues involved in defining full level structures.
1.1. The problem of full level structures. To understand the problem of finding level structures, consider the following setup. Let S be a Noetherian scheme that is flat over Z (p) , and let G be a finite flat group scheme such that G[1/p] := G × S S[1/p] is étale-locally isomorphic to (Z/p r Z) g (for instance, S could be a Shimura variety classifying g/2-dimensional abelian varieties with additional structure, and G could be the p r -torsion of the universal abelian variety). A level structure on G is a map (Z/p r Z) g → G that is like an isomorphism. The desired properties of level structures are best described scheme-theoretically. The set of full level structures F G should be a closed subscheme of Hom S ((Z/p r Z) g , G) satisfying:
• F G is flat over S
• F G × S S[1/p] = Isom S[1/p] ((Z/p r Z) g , G[1/p]).
Since Hom S ((Z/p r Z) g , G) is flat over Z (p) , these conditions determine F G uniquely. However, in practice, it may be difficult to tell if a given homomorphism is full. For many purposes, F G is only useful if there is an explicit description of the ideal defining it.
1.2. Previous results.
In the case where G embeds into a smooth curve C over S (for example if G = E[p r ] for an elliptic curve E), a satisfactory theory of full level structures has been built out of the ideas of Drinfeld [Dri74]. However, Drinfeld's definition crucially uses the fact that G is a Cartier divisor in C. Katz and Mazur developed a notion of "full set of sections," which they show is equivalent to the Drinfeld level structure in the case that G ⊂ C [KM85, §1.10]. However, as Chai and Norman pointed out [CN90, Appendix], the Katz-Mazur definition does not give a flat space in general - it fails even for the relatively simple example of G = µ p × µ p .
More recently, one of the present authors developed a notion of full homomorphisms in the specific case G = µ p × µ p [Wak16].
1.3. Primitive elements. The first step in finding a basis for a free module is to find a primitive vector - that is, an element that can be extended to a basis. Analogously, a first step towards defining a notion of full level structure might be to define a notion of primitive element for group schemes. In addition, the notion of primitive element is needed to define the correct notion of "linear independence," which is a key part of the method in [Wak16] for G = µ p × µ p . In this paper we develop a formal theory of primitive elements, generalizing the ad hoc notion defined in [Wak16].
1.4. Primitive elements and full homomorphisms. One may suggest defining a homomorphism ϕ : (Z/p r Z) g → G to be "full" if it sends primitive vectors to primitive vectors. Indeed, if G is constant, then this corresponds to the condition that the matrix of ϕ has linearly independent columns. However, the example of µ p × µ p studied in [Wak16] shows why this definition does not give a flat space of full homomorphisms. In that case, one may think of ϕ as a "2 × 2-matrix with coefficients in µ p ." If ϕ sends primitive vectors to primitive vectors, then the columns are "linearly independent," but the rows may not be - hence the elements cutting out the condition that the rows be "linearly independent" are p-torsion elements in the coordinate ring of the space of full homomorphisms. On the other hand, the main theorem of [Wak16] implies that the column conditions together with the row conditions give a flat space.
For a general group G, there is no obvious analog of the row conditions, so it is not clear how to generalize from primitive vectors to full homomorphisms. A new idea is needed.
1.5. Summary. Let S be a scheme, and let G be a finite locally free (commutative) group scheme over S. Let |G| denote the rank of G. We define a closed subscheme G × ⊂ G, which we call the non-null subscheme. The ideal cutting out G × consists of invariant measures, as in Raynaud's theory [Ray74], on the Cartier dual of G. As a consequence of Raynaud's results, G × is finite and locally free over S of rank |G| − 1. We think of G × as the group-scheme version of the set of non-zero elements of G.
There does not seem to be any completely satisfactory word to use here. Since the identity element in G(S) can perfectly well lie in G × (S) (as happens in the second example below when S is a scheme over F p ), it would be extremely confusing to say that elements in G × (S) are non-zero. Instead we have chosen to say that they are "non-null." As evidence that the notion of non-nullity is reasonable, we mention the following examples:
• If G = Γ S , the constant group-scheme associated to a finite abelian group Γ, then G × = (Γ \ {0}) S , the scheme of non-identity sections.
• If G = µ p , then G × = µ × p , the scheme of primitive roots of unity.
• If G is an Oort-Tate group [TO70] (i.e. |G| = p), then G × coincides with the scheme of generators defined by Haines and Rapoport [HR12].
• If G is a Raynaud group [Ray74] (i.e. G has an action of F q and |G| = q for some power q of p), then G × coincides with the scheme of F q -generators defined by Katz-Mazur (cf. [Pap95]).
We define primitive elements using non-nullity as follows.
Definition. Suppose that G is a truncated p-divisible group of level r and height h over S. We define the subscheme of primitive elements G prim ⊂ G to be the preimage of the non-null subscheme G[p] × under the map G → G[p] that is given by multiplication by p r−1 . It follows that the subscheme G prim ⊂ G is locally free over S of rank (p h − 1)p h(r−1) .
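As a quick check of this rank count (our verification, assuming the definition above and the standard fact that multiplication by $p^{r-1}$ maps $G$ onto $G[p]$ as a finite locally free morphism of degree $p^{h(r-1)}$):
\[
\operatorname{rank}\big(G^{\mathrm{prim}}\big)
=\deg\!\big(p^{\,r-1}\colon G\to G[p]\big)\cdot\operatorname{rank}\big(G[p]^{\times}\big)
=p^{h(r-1)}\,\big(p^{h}-1\big).
\]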
In specific examples, we can identify G prim :
• If V = (Q p /Z p ) h and G = V S is a constant p-divisible group, then G prim is the scheme associated to the set of primitive vectors in the free Z/p r Z-module V [p r ].
• If G = µ p r , then G prim is the subscheme of primitive roots of unity.
• If G = E[p r ] for an elliptic curve E, then G prim is the scheme of sections "of exact order p r " defined by Drinfeld-Katz-Mazur [Dri74, KM85].
This justifies the notation G prim - it is meant to evoke both the notion of primitive vector in a free module and primitive root of unity.
1.6. Applications to Shimura varieties. Let X be a Shimura variety over Q that has a universal abelian variety A over it, and suppose 𝒳 and 𝒜 are models for X and A that are flat over Z (p) . Then, for each r > 1, there is an interesting cover X 1 (p r ) of X given by adding the additional data of a point of order p r in A.
The scheme 𝒳 1 (p r ) := 𝒜[p r ] prim is an integral model for X 1 (p r ) that is finite and flat over 𝒳. Since 𝒳 is flat over Z (p) , this implies that 𝒳 1 (p r ) is the Zariski-closure of X 1 (p r ) in Hom 𝒳 (Z/p r Z, 𝒜[p r ]). In particular, this "flat-closure" model, which is a priori only flat over Z (p) , is actually flat over 𝒳.
On the other hand, one can show that, except for modular curves (or the Drinfeld case), the scheme 𝒳 1 (p r ) is not normal. In particular, 𝒳 1 (p r ) is not the normalization of 𝒳 in X 1 (p r ), and this gives an example where the "normalization" and "flat closure" models differ.
This issue of non-normality makes us doubtful that these models will have direct application to the Langlands program. Instead, we view the theory of primitive elements as an interesting tool to use in the future study of integral models. For example, it would be interesting to consider combining the notion of primitive element with parahoric models of Shimura varieties, in analogy with the work of Pappas on Hilbert modular varieties [Pap95]. Using the theory of Raynaud group schemes, Pappas produces a model for Γ 1 (p)-type level that is normal (but not finite over the base). 1.7. Acknowledgements. We thank G. Boxer, B. Levin and K. Madapusi Pera for interesting conversations about integral models. We are grateful to T. Haines, G. Pappas, and M. Rapoport for helpful comments on a preliminary version of this paper. We thank the referees for comments and suggestions.
Review of Raynaud's Haar measures for finite flat group schemes
In this section we work over an affine base scheme S = Spec(k), and G denotes a commutative group scheme over S that is finite, flat and finitely presented. So G = Spec(A) with A locally free of finite rank as k-module. This rank is a locally constant function, denoted |G|, on S.
We write G ′ = Spec(A ′ ) for the Cartier dual of G; it is another object of the same kind as G, and |G ′ | = |G|. Recall that A and A ′ are the k-duals of each other.
As Raynaud [Ray74] points out, it is helpful to think about f ∈ A as a function on G and µ ∈ A ′ as a measure on G, and then to write ⟨µ, f⟩ ∈ k for the natural pairing of µ with f . Closely following Raynaud's notation and conventions, we
• write ⋆ for the multiplication law on A ′ (intuitively, convolution of measures),
• write 1 for the unit element in the ring A (intuitively, the constant function with value 1),
• write δ for the unit element in the ring A ′ , i.e. the counit A → k for the coalgebra A (intuitively, evaluation of functions at the identity element in the group G),
• denote the natural A-module structure on A ′ by f µ (intuitively, pointwise multiplication of a measure by a function), and
• denote the natural A ′ -module structure on A by µ ⋆ f (intuitively, the convolution of a function by a measure).
By definition these actions are given by the formulas ⟨f µ, g⟩ = ⟨µ, f g⟩ for all g ∈ A, and ⟨ν, µ ⋆ f⟩ = ⟨ν ⋆ µ, f⟩ for all ν ∈ A ′ . A G-module is by definition a comodule for the coalgebra A, but, because A is locally free of finite rank as k-module, giving a G-module is the same as giving an A ′ -module M .
Given a G-module M , its submodule M G of G-invariants consists of all elements in M annihilated by the augmentation ideal I ′ in A ′ . For any k-algebra R there is a natural map
(2.1) R ⊗ M^G → (R ⊗ M)^{G_R}.
(We are abbreviating ⊗ k to ⊗.) When (2.1) is an isomorphism for every k-algebra, one says that forming G-invariants in M commutes with extension of scalars. Bear in mind that M need not have this property, even when it is locally free of finite rank as k-module.
For the G-module A one has A G = k. So, in this example, it is evident that forming invariants does commute with extension of scalars. Now A ′ is of course an A ′ -module, i.e. a G-module, so we can form its submodule of G-invariants D_G := (A ′ )^G. We will refer to elements of D G as G-invariant measures on G. From the decomposition A ′ = k ⊕ I ′ it follows immediately that D G can also be described as
(2.2) D_G = { µ ∈ A ′ : ν ⋆ µ = ⟨ν, 1⟩ µ for all ν ∈ A ′ }.
Raynaud proves (in the discussion on page 277 of [Ray74]) that (A ′ ) G is a direct summand of A ′ , locally free of rank 1 as k-module. In other words, G-invariant measures on G form a line bundle over S. When D G is free of rank 1 (not just locally so), a basis element µ for it is called a Haar measure on G. A G-invariant measure µ : A → k is a Haar measure if and only if it is surjective.
The line bundle of G ′ -invariant measures on G ′ is then a direct summand of A that is locally free of rank 1 as a k-module; we denote it by J_G := D_{G ′} ⊂ A. Note that since A = I ⊕ k, where I = ker(δ) is the augmentation ideal, (2.2) implies that J G is the annihilator of I in A. Raynaud also proves that (1) The natural pairing A ′ ⊗ A → k restricts to a perfect pairing between D G and J G . So the line bundles D G and J G on S are canonically dual to each other.
(2) The map A ⊗ D G → A ′ sending f ⊗ µ to f µ is an isomorphism of G-modules. It follows from (2) that, Zariski locally on S, the G-module A ′ is isomorphic to the G-module A. So forming G-invariants in A ′ (i.e. forming D G ) commutes with extension of scalars. (This useful fact is brought out by Moret-Bailly in the section of [MB85] in which he summarizes Raynaud's work.) 2.1. Integration in stages. We need one more fact about Haar measures, namely an analog of the "integration in stages formula" in the theory of Haar measures on locally compact groups. It seems plausible that this is well-known, but since we do not know a reference, we will provide a proof.
Consider a short exact sequence
0 → H → G → K → 0.
Here H = Spec(B), K = Spec(C) are objects of the same type as G, so B and C are locally free k-modules. Moreover A is faithfully flat over its subalgebra C, and B is the quotient of A by the ideal generated by the augmentation ideal I C in C.
Our goal is to understand invariant measures on G in terms of invariant measures on H and K. Dual to C ⊂ A and A ։ B are the algebra homomorphisms A ′ ։ C ′ and B ′ ⊂ A ′ . Observe that the kernel of A ′ ։ C ′ is the ideal in A ′ generated by the augmentation ideal I B ′ in B ′ .
Lemma. Let µ H ∈ D H and µ K ∈ D K , and let μ̃ K ∈ A ′ be any lifting of µ K along A ′ ։ C ′ . Then µ G := μ̃ K ⋆ µ H lies in D G and is independent of the choice of lifting; the resulting map D K ⊗ D H → D G , µ K ⊗ µ H → µ G , is an isomorphism, and µ G is a Haar measure whenever µ H and µ K are.
Proof. It is evident that µ G is independent of the choice of the lifting μ̃ K , because this lifting is well-defined modulo (I B ′ )A ′ , and I B ′ annihilates µ H . The rest of the lemma is most easily understood in terms of integration in stages, as we will now see.
The map I H : A → C is equivariant with respect to G ։ K (and the natural actions of G on A and K on C), and the composition µ := µ K • I H : A → k is G-equivariant, i.e. µ ∈ D G . Unwinding the definitions, one sees that µ = µ G . The work we did shows that µ G is surjective when both µ H , µ K are surjective, and hence that µ K ⊗ µ H → µ G is an isomorphism from D K ⊗ D H to D G .
Non-null elements in G
In this section we continue with k and G as in the previous section.
3.1. Definition of non-nullity of elements in G. The explicit description (2.2) of J G shows that it is an ideal in A. We will refer to J G as the non-nullity ideal. We denote by G × ↪ G the closed subscheme of G cut out by the ideal J G . Observe that G × = Spec(A/J G ) is locally free of rank |G| − 1 over S.
For every k-algebra R, G × (R) is a subset of G(R). We say that an element g ∈ G(R) is non-null when it lies in the subset G × (R). In the next subsections we will investigate this notion.
3.3. Testing non-nullity using an overring R ′ ⊃ R. An R-valued point of G is given by a k-algebra homomorphism g : A → R. The element g ∈ G(R) is non-null if and only if the ring homomorphism g : A → R kills the ideal J G . This condition can be tested after composing with any injection R → R ′ , so g is non-null if and only if its image in G(R ′ ) is non-null, for any overring R ′ ⊃ R. If R ′ /R is faithfully flat, then R → R ′ is injective. So the notion of non-nullity is fpqc local, and therefore continues to make sense for any base scheme (or even algebraic space) S.
3.4. Non-nullity in the étale case. Assume that G/S is étale. Then, locally in the étale topology, G is constant. It follows from the calculation in the previous subsection that A is the direct sum of the ideals J G and I G . In other words, A is the cartesian product of the k-algebras A/J G and A/I G . Therefore G decomposes as the disjoint union of G × = Spec(A/J G ) and the unit section Spec(A/I G ); that is, G × is the open and closed complement of the unit section in G.
3.5. Behavior under base change. Consider a k-algebra R. For any scheme X/k we denote by X R its base change to R. In particular we may base change G to R, obtaining a group scheme G R = Spec(R ⊗ A) over R. In our review of Haar measures, we mentioned that the natural map R ⊗ J G → J G R is an isomorphism, which tells us that the natural morphism (G R ) × → (G × ) R is an isomorphism. In other words, forming G × from G commutes with extension of scalars.
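For a constant group $\Gamma_S$ this can be made completely explicit; the following short computation is our own illustration (the indicator-function notation $e_\gamma$ is ours):
\[
A=\operatorname{Maps}(\Gamma,k)=\bigoplus_{\gamma\in\Gamma}k\,e_\gamma,\qquad
I_G=\bigoplus_{\gamma\neq 0}k\,e_\gamma,\qquad
J_G=\operatorname{Ann}_A(I_G)=k\,e_0,
\]
so that $A/J_G=\operatorname{Maps}(\Gamma\setminus\{0\},k)$ and hence $G^{\times}=(\Gamma\setminus\{0\})_S$, while $A/I_G=k$ recovers the unit section.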
3.6. Non-nullity for Oort-Tate groups. Now let us examine the notion of non-nullity in the case of Oort-Tate groups [TO70]. Our notion of non-nullity applies to all groups of order p over any base ring k, but in order to compare it to the notion of Oort-Tate generator we need to restrict attention to Λ-algebras, where Λ is the base ring considered in [TO70]; it is described there explicitly in terms of a primitive (p − 1)-st root of unity ζ ∈ Z p , as an intersection taking place in Q p . Let k be a Λ-algebra (e.g., a Z p -algebra). Then, given suitable a, b ∈ k, Oort and Tate construct a group G a,b of order p over k, but we will fix a, b and just call the group G. The corresponding k-algebra is A = k[x]/(x p − ax), and its augmentation ideal is generated by x. So the ideal J G consists of all elements in A that are annihilated by x, and a short computation reveals that J G is the k-submodule of A generated by x p−1 − a. This shows that an element g ∈ G(k) is non-null in our sense if and only if g is a generator of G in the sense of Haines-Rapoport [HR12] (this notion of generator was first used by Deligne-Rapoport in [DR73, Section V.2.6, pg. 106]). Moreover, as Haines-Rapoport show [HR12, Remark 3.3.2], this is also equivalent to g having "exact order p" in the sense of Drinfeld-Katz-Mazur. This agreement suggests that the notion of non-nullity is a natural one.
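The short computation alluded to above can be written out explicitly, using only the presentation A = k[x]/(x p − ax) and the fact that the augmentation ideal is generated by x:
\[
x\Bigl(\sum_{i=0}^{p-1} c_i x^i\Bigr)
  \;=\; (c_0 + a\,c_{p-1})\,x + c_1 x^2 + \cdots + c_{p-2} x^{p-1}
  \qquad (\text{since } x^p = ax),
\]
so an element f = \sum_i c_i x^i is annihilated by x precisely when c_1 = \cdots = c_{p-2} = 0 and c_0 = -a\,c_{p-1}, i.e. when f is a k-multiple of x^{p-1} - a. This recovers the description J_G = k\cdot(x^{p-1}-a) used above.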
Example 3.1. The above discussion applies to the group µ p of p-th roots of unity. The result is that a section ζ ∈ µ p (k) lies in µ × p if and only if Φ p (ζ) = 0 (where Φ p (T ) = 1 + T + · · · + T p−1 is the cyclotomic polynomial). In other words, µ × p is the subscheme of primitive p-th roots of unity.
3.7. Non-nullity for Raynaud groups. Raynaud groups are a natural generalization of Oort-Tate groups, and in this case, again, the notion of non-nullity agrees with a well-studied notion. We thank G. Pappas for communicating this generalization to us.
Let q = p n be a power of p and let D be the ring defined analogously to Λ, but with q in place of p (see [Ray74, Section 1.1]), and let k be a D-algebra. Given a suitable 2n-tuple (δ 1 , . . . , δ n , γ 1 , . . . , γ n ) ∈ k 2n , Raynaud, in [Ray74, Corollaire 1.5.1], defines a group scheme G over k with |G| = q together with an action of F q on G, that is, G is an F q -vector space scheme of dimension 1. The corresponding k-algebra A is a quotient of k[x 1 , . . . , x n ] by n relations, one for each index i, where i ranges over {1, . . . , n} and x n+1 := x 1 . Then the augmentation ideal is generated by (x 1 , . . . , x n ), and using [dSRS97, Proposition 2.1], for example, one can see that J G is the k-submodule of A generated by (x 1 · · · x n ) p−1 − δ 1 · · · δ n . By [Pap95, Proposition 5.1.5], G × is the scheme of "F q -generators of G", in the sense of Katz-Mazur.
3.8. Products. Consider groups G 1 , G 2 over k. The corresponding k-algebras, augmentation ideals, and non-nullity ideals will be denoted A i , I i , J i (for i = 1, 2). The ring of regular functions for the group G = G 1 × G 2 is A = A 1 ⊗ A 2 , and its augmentation ideal I G is (I 1 ⊗ A 2 ) + (A 1 ⊗ I 2 ). Therefore the ideal J G in A annihilated by I is the intersection of the ideal annihilated by I 1 , namely J 1 ⊗ A 2 , and the one annihilated by I 2 , namely A 1 ⊗ J 2 . (It follows that J G = J 1 ⊗ J 2 , and so J G is also the product of the ideals J 1 ⊗ A 2 and A 1 ⊗ J 2 .) In more geometrical language, we just verified that G × is the "union" of the closed subschemes G 1 ×G × 2 and G × 1 ×G 2 of G (i.e. it is the smallest closed subscheme containing the two given closed subschemes).
Some care is required in this situation. Consider an R-valued point g = (g 1 , g 2 ) of G. If g 1 is non-null or g 2 is non-null, then g is non-null. However, the converse is false, as is illustrated by the next example (when considering points with values in a ring that is not an integral domain).
Example 3.2. Let (x, y) ∈ µ p ×µ p . Then (x, y) is non-null if and only if Φ p (x)Φ p (y) vanishes. So, for this group, non-nullity coincides with the notion of primitivity introduced in [Wak16].
3.9. Extensions. Consider a short exact sequence as in Section 2.1. We use the same system of notation: to G, H, K correspond k-algebras A, B, C respectively. Their augmentation ideals will be denoted I A , I B , I C , and their non-nullity ideals will be denoted J A , J B , J C . Applying Lemma 2.1 to the Cartier dual of G, we see that J B ⊗ J C ≃ J A , just as in the special case when G = H × K. In fact Lemma 2.1 says more. It tells us that where i * denotes the surjection A ։ B = A/I C A obtained from i : H ֒→ G. From this we obtain the following lemma.
Lemma 3.3. The closed subscheme G × of G contains both of the following closed subschemes of G:
• the closed subscheme H × ֒→ H ֒→ G,
• the closed subscheme π −1 (K × ) of G obtained from K × ֒→ K by base change along π : G ։ K.
The first item tells us that there exists an arrow j making the square (3.2) commute. This arrow is unique, and it is a closed immersion. If K is étale over S, then the first item can be strengthened to the statement that the square (3.2) is cartesian.
Proof. We begin with the first item. The first item is true if and only if there exists an arrow j making the square (3.2) commute. This is the condition that i * (J A ) ⊂ J B . That this condition holds follows from (3.1), which shows that i * (J A ) is the product of the ideals J B and (i * )(J C ) in B.
The first item can be strengthened to the statement that the square (3.2) is cartesian if and only if the inclusion i * (J A ) ⊂ J B is an equality. This is certainly the case when i * (J C ) is the unit ideal in B.
When K is étale over S, we have seen that C = J C ⊕ I C . Therefore there exists f ∈ J C such that 1 − f ∈ I C . The image of f under i * is equal to 1, showing that i * (J C ) is indeed the unit ideal in B.
Finally, the second item is true if and only if the ideal AJ C contains the ideal J A . The truth of this is obvious from (3.1).
Remark 3.4. Let h ∈ H(R). The lemma implies that, if h is non-null for H, then it is non-null for G. It also implies that the converse is true provided that K is étale over S. In general the converse is false. For example, (1, y) ∈ µ p (R) × µ p (R) is non-null if and only if pΦ p (y) = 0, while y ∈ µ p (R) is non-null if and only if Φ p (y) = 0. These are equivalent conditions when p is invertible in R, but not in general.
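The numerology in this example can be checked directly from Example 3.1 and the product formula of Section 3.8: since J for µ p is spanned by Φ p , one has
\[
J_{\mu_p\times\mu_p} \;=\; J_{\mu_p}\otimes J_{\mu_p} \;=\; k\cdot \Phi_p(x)\,\Phi_p(y),
\]
so a point (g_1, g_2) is non-null exactly when \Phi_p(g_1)\,\Phi_p(g_2) = 0 in R; taking g_1 = 1 and using \Phi_p(1) = p gives the condition p\,\Phi_p(y) = 0 quoted above.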
Primitivity of points in truncated p-divisible groups
In this section we fix a prime number p.
4.1. Definition of primitivity. Now we consider a p-divisible group G of height h over any base scheme S. For any positive integer i we are interested in the p i -torsion G[p i ] in G, but henceforth we abbreviate G[p i ] to G i . For any pair i, j of positive integers there is then a short exact (in the fppf sense) sequence 1 → G i → G i+j → G j → 1. The arrow G i+j ։ G j (strictly speaking, its composition with G j ֒→ G i+j ) is given by raising to the power p i , and it is finite locally free of rank p hi . Let R be a k-algebra, let x be an R-valued point of G i , and write x̄ for the image of x under the canonical homomorphism G i ։ G 1 (raising to the power p i−1 ). We say that x is primitive if x̄ is non-null in G 1 (R).
In other words, if we define G prim i as the fiber product of G i ։ G 1 and G × 1 ֒→ G 1 , then an R-valued point x of G i is primitive if and only if it lies in the image of the R-points of G prim i . Because the square is cartesian, the projection G prim i → G × 1 is, like G i ։ G 1 , finite, locally free of rank p h(i−1) . Now G × 1 is finite, locally free of rank p h − 1 over S, so we conclude that G prim i is finite, locally free of rank (p h − 1)p h(i−1) over S.
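As an arithmetic sanity check on these ranks (in the notation above),
\[
\operatorname{rk} G_i^{\mathrm{prim}}
  \;=\; \operatorname{rk} G_1^{\times}\cdot \operatorname{rk}\bigl(G_i \to G_1\bigr)
  \;=\; (p^h - 1)\,p^{h(i-1)} ;
\]
for instance, for p = 2, h = 2, i = 2 this gives \operatorname{rk} G_2^{\mathrm{prim}} = 3\cdot 4 = 12, while \operatorname{rk} G_2 = p^{hi} = 16. This is consistent with the fact that the preimage of the unit section of G_1 has rank p^{h(i-1)}, and ranks add: (p^h-1)\,p^{h(i-1)} + p^{h(i-1)} = p^{hi}.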
4.2. Comparison with points of exact order N on elliptic curves. In this subsection we fix i and put N = p i . Consider an elliptic curve E over S. Let E N denote its N -torsion points. Then consider the following two closed subschemes of E N , namely
• the closed subscheme E prim N defined above, and
• the closed subscheme, call it E ♯ N , of points of exact order N in the sense of Drinfeld and Katz-Mazur (see [Dri74, KM85]).
We claim that (⋆) E prim N coincides with E ♯ N . We need to prove that (⋆) holds for every elliptic curve E/S. We cannot see a priori a natural morphism between these two objects, so we proceed in the same way that similar problems are treated in [KM85].
Step 1 It is evident that (⋆) holds when p is invertible on S, because E N /S is then étale.
Step 2 Next we check that (⋆) holds for E/S whenever S is flat over Z. In this situation E N , E prim N , and E ♯ N are flat over Z, so (⋆) follows from Step 1 by passing to the locus where p is invertible, which is schematically dense.
Step 3 Let E denote the moduli stack (over Z) of elliptic curves, and choose a presentation (see [LMB00]) f : M ։ E for it. Here f is étale and surjective, and M is a smooth scheme of finite type over Z. Pulling back the universal elliptic curve on E, we obtain an elliptic curve E on the scheme M. In the terminology of [KM85], E/M is a "modular family." Now consider an elliptic curve E over an arbitrary base scheme S. We consider the product M × S and write p 1 , p 2 for the two projections. We then have two elliptic curves over M × S, namely p * 1 E and p * 2 E, and we form the M × S-scheme T of isomorphisms between p * 1 E and p * 2 E. Over T the elliptic curves E and E become tautologically isomorphic; the resulting elliptic curve on T will be denoted Ẽ.
At this point we have a commutative diagram | 2016-04-04T08:54:49.290Z | 2015-10-09T00:00:00.000 | {
"year": 2017,
"sha1": "1974ee7df03f93e3fc9d2ecc9b2c11f945ab63d9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1007/s40993-017-0084-8",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "603f2479ca415fd5aae5129b61ceb6614d73e435",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
221339044 | pes2o/s2orc | v3-fos-license | Natural compounds as potential inhibitors of novel coronavirus (COVID-19) main protease: An in silico study
The COVID-19 pandemic has now expanded to over 213 nations across the world. To date, there is no specific medication available for SARS-CoV-2 infection. The main protease (M pro ) of SARS-CoV-2 plays a crucial role in viral replication and transcription and is thereby considered an attractive drug target for the inhibition of COVID-19. Natural compounds are widely recognised as a valuable source of antiviral drugs due to their structural diversity and safety. In the current study, we screened twenty natural compounds with antiviral properties to discover potential inhibitor molecules against the M pro of COVID-19. Systematic molecular docking analysis was conducted using AutoDock 4.2 to determine the binding affinities and interactions between the natural compounds and the M pro . Out of twenty molecules, four natural metabolites, namely amentoflavone, guggulsterone, puerarin, and piperine, were found to have strong interactions with the M pro of COVID-19 based on the docking analysis. These selected natural compounds were further validated using a combination of molecular dynamics simulations and molecular mechanics generalized Born/Poisson-Boltzmann surface area (MM/G/P/BSA) free energy calculations. During the MD simulations, all four natural compounds remained bound to the M pro over 50 ns, and the MM/G/P/BSA free energy calculations showed that all four shortlisted ligands have stable and favourable energies, causing strong binding to the binding site of the M pro protein. These four natural compounds passed the Absorption, Distribution, Metabolism, and Excretion (ADME) assessment as well as Lipinski's rule of five. Our promising findings based on in-silico studies warrant further clinical trials in order to use these natural compounds as potential inhibitors of the M pro protein of COVID-19.
Introduction
Coronavirus disease is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which primarily affects the lungs and produces certain types of pneumonia-like symptoms (Huang et al., 2020; Kumar et al., 2020). SARS-CoV-2 is a novel strain of coronavirus that first emerged in December 2019, during an outbreak in Wuhan, China, and subsequently spread all over the world in a very short period of time (World Health Organization, 2020; Hendaus, 2020). The outbreak was declared a Public Health Emergency of International Concern by the WHO on 30 January 2020 (WHO, 2020). As of June 02, 2020, this contagious disease had led to over 6,140,934 confirmed cases and 373,548 fatalities (https://covid19.who.int/). To date, there is no specific treatment for this ongoing COVID-19 pandemic. Some preliminary studies investigated the potential drug combination of Lopinavir and Ritonavir to treat COVID-19 infected patients, which was earlier used in human immunodeficiency virus (HIV) and SARS-CoV or Middle East respiratory syndrome (MERS) coronavirus patients (Lu, 2020; Chu et al., 2004). Drugs that can specifically target the virus replication cycle and subsequent infection are urgently required to develop effective antiviral therapies as early as possible. Natural compounds, owing to their enormous structural and chemical diversity, the availability of more chiral centers, and relative biosafety, are considered an excellent source of drugs in several diseases including viral infections. This is further strengthened by the fact that around 45% of today's bestselling drugs have either originated from natural products or their derivatives (Lahlou, 2013). Natural compounds possessing antiviral properties could become a valuable resource in this regard. The COVID-19 main protease (M pro ) has recently been crystallized, and its structure has been deposited in the Protein Data Bank (PDB) and is publicly accessible (Jin et al., 2020).
The SARS-CoV-2 main protease (M pro ) is reported to play an indispensable role in virus replication and transcription, suggesting it to be a promising target for inhibition of the SARS-CoV-2 replication cycle (Boopathi et al., 2020; Lahlou, 2013; Xu et al., 2002). Keeping this in mind, in this study we selected several natural compounds based on an extensive literature survey (Zakaryan et al., 2017; Thayil et al., 2016; Jo et al., 2020).
In the present study, we screened and explored the potential of selected natural compounds to inhibit the M pro of COVID-19 using molecular docking, followed by MD simulations, MM/G/P/BSA free energy calculations, and validation of ADME properties, drug-likeness, target-specific binding, and toxicity.
Material And Methods
A flow chart of the pipeline used in the present study is summarized in Figure 1.
Literature survey and ligands selection
An extensive literature survey was conducted to select natural compounds having antiviral properties from different medicinal plants, using the PubMed and Google Scholar platforms. Based on the literature survey, a total of twenty natural compounds were selected, and their chemical structures were retrieved from the PubChem repository (Kim et al., 2016) in SDF format. A list of all selected natural compounds along with their corresponding chemical IDs and 2D and 3D structures is presented in Table 1. To prepare the ligands for molecular docking, hydrogen atoms were added, followed by PDB structure generation with the OpenBabel program (O'Boyle et al., 2008). All molecules were then energy-minimized and optimized using the universal force field with 200 steps of the steepest descent algorithm in OpenBabel, as available in PyRx (https://pyrx.sourceforge.io/), and converted to pdbqt format.
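A minimal sketch of this ligand-preparation step is given below, assuming Open Babel is installed and one SDF file per compound has been downloaded from PubChem; all file names are placeholders, and PyRx performs the same conversion (plus the universal force field minimization) through its graphical interface.

import subprocess

# Placeholder file names; in practice one SDF per selected compound.
ligands = ["amentoflavone.sdf", "guggulsterone.sdf", "puerarin.sdf", "piperine.sdf"]

for sdf in ligands:
    pdbqt = sdf.replace(".sdf", ".pdbqt")
    # Add hydrogens, generate 3D coordinates and write an AutoDock-ready PDBQT file.
    subprocess.run(["obabel", sdf, "-O", pdbqt, "-h", "--gen3d"], check=True)

The 200-step steepest-descent minimization can then be run either inside PyRx or with Open Babel's own minimization utilities before docking.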
Preparation of protease
The 3D coordinates of the main protease (M pro ) of SARS-CoV-2 were obtained from the RCSB-PDB repository with PDB ID 6LU7 (Jin et al., 2020). To prepare the macromolecule for docking, water and other nonspecific molecules were removed using UCSF CHIMERA (Pettersen et al., 2004). To protonate the protein at cellular pH, polar hydrogen atoms were added to the 3D structure model of M pro . Structure optimization and energy minimization were performed using the SPDB viewer (Guex & Peitsch, 1997), while the clean geometry module embedded in the Discovery Studio package was utilized for side-chain angle correction.
Molecular docking
To identify new potential inhibitors against the M pro of SARS-CoV-2, site-specific docking screening of all selected natural compounds was carried out with AutoDock 4.2 (Forli et al., 2012). The box dimensions were kept as 70 Å × 70 Å × 70 Å with a total of 50 genetic algorithm runs; other docking parameters were set to their defaults. During molecular docking, the amino acid residues Thr25, Thr26, Gly143, Ser144, His163, His164, and Glu166 were used to define the binding pocket. The protein-ligand interactions were further rendered with the Maestro and Discovery Studio programs.
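A hedged sketch of how one such AutoDock 4.2 run can be scripted is shown below. The receptor/ligand file names are placeholders, the MGLTools helper scripts are usually invoked through the pythonsh interpreter that ships with MGLTools, and receptor.gpf / dock.dpf are assumed to have been generated beforehand in AutoDockTools with a 70 × 70 × 70 grid centred on the binding-pocket residues listed above and ga_run set to 50.

import subprocess

# Convert the cleaned receptor and one prepared ligand to PDBQT (MGLTools helper scripts).
subprocess.run(["pythonsh", "prepare_receptor4.py", "-r", "6lu7_clean.pdb", "-o", "receptor.pdbqt"], check=True)
subprocess.run(["pythonsh", "prepare_ligand4.py", "-l", "ligand.pdb", "-o", "ligand.pdbqt"], check=True)

# Grid maps and docking; receptor.gpf and dock.dpf were written in AutoDockTools beforehand.
subprocess.run(["autogrid4", "-p", "receptor.gpf", "-l", "receptor.glg"], check=True)
subprocess.run(["autodock4", "-p", "dock.dpf", "-l", "dock.dlg"], check=True)
# The lowest-energy pose and the estimated binding energy are then read from dock.dlg.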
ADME compound screening
An in-silico tool for the analysis of absorption, distribution, metabolism, and excretion (ADME) was used to screen the above-mentioned compounds for those likely to be bioactive via oral administration. Drug-like properties were evaluated against Lipinski's rule of five using the SwissADME prediction server (http://www.swissadme.ch/) (Lipinski et al., 2012; Giménez et al., 2010).
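The same rule-of-five screen can also be reproduced locally. The sketch below uses RDKit rather than the SwissADME web server actually used here, and assumes a hypothetical compounds.smi file holding one "SMILES name" pair per line exported from PubChem.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def lipinski_violations(smiles):
    """Count violations of Lipinski's rule of five for one SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES: " + smiles)
    return sum([
        Descriptors.MolWt(mol) > 500,      # molecular weight <= 500 Da
        Descriptors.MolLogP(mol) > 5,      # cLogP <= 5
        Lipinski.NumHDonors(mol) > 5,      # <= 5 hydrogen-bond donors
        Lipinski.NumHAcceptors(mol) > 10,  # <= 10 hydrogen-bond acceptors
    ])

with open("compounds.smi") as handle:
    for line in handle:
        smiles, name = line.split()
        kept = lipinski_violations(smiles) <= 2   # same cut-off as used in the text
        print(name, "kept" if kept else "rejected")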
Target prediction
Molecular target studies are important to find the macromolecular targets of bioactive small molecules. This is useful to understand the molecular mechanisms underlying a given phenotype or bioactivity, to rationalize possible side-effects, and to predict off-targets (Enmozhi et al., 2020). For this purpose, the SwissTargetPrediction tool (https://www.swisstargetprediction.ch) was used (Daina et al., 2019). The canonical SMILES for amentoflavone and guggulsterone were entered and analyzed.
Molecular dynamics (MD) simulations
The four representative docking complexes of ligands with M pro , namely those of amentoflavone, guggulsterone, puerarin, and piperine, were further refined using MD simulation analysis. MD simulation studies were carried out to determine the stability and flexibility of the natural compound-M pro complexes over 50 ns. The method used for the MD simulations of the natural compound-M pro complexes remains the same as described in recent studies (Gajula et al., 2016; Jee et al., 2018; Kumar et al., 2020). All simulations of the representative natural compound-M pro complexes were conducted using the GROMOS96 43a1 force field available in the GROMACS 5.1.4 suite (Van Der Spoel et al., 2005). Topology files for the ligand molecules were created using the PRODRG server (Schüttelkopf & Van Aalten, 2004). The prepared protein complexes were solvated in a cubic box of edge length 10 nm with SPC water molecules. To maintain system neutrality, an adequate number of ions was added. To remove clashes between atoms, energy minimization was applied with a convergence criterion of 1000 kJ/mol/nm. Long-range electrostatic interactions were handled using the particle mesh Ewald (PME) method (Abraham & Gready, 2011). For both van der Waals and Coulombic interactions, a cut-off radius of 9 Å was used.
Equilibration was completed in two different phases. The solvent and ion molecules were kept unrestrained in the first stage, while in the second stage the restraint weight on the protein and protein-ligand complexes was gradually decreased, in the NPT ensemble. LINCS constraints were applied to all bonds involving hydrogen atoms (Hess et al., 1997). The temperature and pressure of the system were kept at 300 K and 1 atm using Berendsen temperature coupling and Parrinello-Rahman pressure coupling, respectively (Berendsen et al., 1995). The production simulation was initiated from the velocities and coordinates obtained in the last step of equilibration. All systems were simulated for 200 ns and snapshots were taken at every 2 ps interval.
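A compressed sketch of the corresponding GROMACS 5.1.x command sequence is given below. All .mdp files, the merged PRODRG ligand topology, and every file name are placeholders; the options shown follow the description above rather than a tested input deck.

import subprocess

def gmx(*args, stdin=None):
    """Run one gmx command, optionally answering its interactive prompt on stdin."""
    subprocess.run(["gmx", *args], input=stdin, text=True, check=True)

# Protein topology; the ligand topology from PRODRG is merged into topol.top by hand.
gmx("pdb2gmx", "-f", "protein.pdb", "-o", "processed.gro", "-p", "topol.top",
    "-ff", "gromos43a1", "-water", "spc")
gmx("editconf", "-f", "processed.gro", "-o", "boxed.gro", "-c", "-bt", "cubic", "-d", "1.0")
gmx("solvate", "-cp", "boxed.gro", "-cs", "spc216.gro", "-o", "solvated.gro", "-p", "topol.top")
gmx("grompp", "-f", "ions.mdp", "-c", "solvated.gro", "-p", "topol.top", "-o", "ions.tpr")
gmx("genion", "-s", "ions.tpr", "-o", "neutral.gro", "-p", "topol.top", "-neutral", stdin="SOL\n")
gmx("grompp", "-f", "minim.mdp", "-c", "neutral.gro", "-p", "topol.top", "-o", "em.tpr")
gmx("mdrun", "-deffnm", "em")
# NVT/NPT equilibration and the production run repeat the grompp/mdrun pair with the
# corresponding .mdp files (LINCS constraints, PME, Berendsen / Parrinello-Rahman coupling).
gmx("grompp", "-f", "md.mdp", "-c", "em.gro", "-p", "topol.top", "-o", "md.tpr")
gmx("mdrun", "-deffnm", "md")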
MM/PBSA free energy calculations
The binding energies of the M pro -ligand complexes were calculated using the MM/PBSA (Molecular Mechanics Poisson-Boltzmann Surface Area) method. In these calculations the polar part of the solvation energy was obtained by solving the Poisson-Boltzmann equation, while the non-polar part was estimated from a linear relation to the solvent accessible surface area. The g_mmpbsa module available for GROMACS was applied to determine the different components of the binding free energy of the complexes (Kumari and Kumar, 2014). Considering the convergence issues associated with the calculations, only the last 10 ns of data were utilized for the MM-PBSA analysis. In the present study, entropy contributions were not calculated, as they may change the numerical values of the binding free energy reported for the molecules. In the MM-PBSA approach, the binding free energy between M pro and a ligand is obtained from the difference between the free energy of the complex and the sum of the free energies of the isolated protein and ligand, as summarized below.
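In the standard notation of the g_mmpbsa implementation cited above (Kumari and Kumar, 2014), and with the entropic term omitted as stated, the quantities involved are:
\[
\Delta G_{\text{bind}} \;=\; G_{\text{complex}} - \bigl(G_{\text{protein}} + G_{\text{ligand}}\bigr),
\qquad
G_{x} \;=\; \langle E_{\text{MM}}\rangle + \langle G_{\text{solvation}}\rangle ,
\]
\[
E_{\text{MM}} \;=\; E_{\text{bonded}} + E_{\text{vdW}} + E_{\text{elec}},
\qquad
G_{\text{solvation}} \;=\; G_{\text{polar}} + G_{\text{nonpolar}},
\qquad
G_{\text{nonpolar}} \;=\; \gamma\,\mathrm{SASA} + b .
\]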
Toxicity analysis
Toxicity analysis of selected natural compounds was done by the ProTox-II web server (Banerjee et al., 2018). ProTox-II is a kind of virtual lab which integrates several parameters like molecular similarity, fragment propensities and most frequent features. It predicts various toxicity endpoints and incorporates a total of 33 models for the prediction of various toxicity aspects of small molecules.
Similar FDA approved drug compound search with SWISS similarity
The compounds which were giving the best binding energy among the selected natural compounds were checked for similarity, if any, with FDA approved drugs using SWISS similarity tool (http://www.swisssimilarity.ch) (Zoete et al., 2016).
Computational facility details
The MD simulations and corresponding energy calculations were carried out on HP Gen7 server with 48 Core AMD processors and 32GB of RAM.
Results
3.1 Determination of Active Sites: Table 1 shows the structure and amino acids found in the active site pockets of 6LU7. 6LU7 is the main protease (Mpro) found in COVID-19, which has been structured and repositioned in PDB and can be accessed by the public, as of early February 2020.
3.2 ADME (Absorption, distribution, metabolism, and excretion): ADME properties were determined by obtaining the canonical SMILES from PubChem. These SMILES were used to derive ADME properties using SwissADME. The compounds were then analyzed on various parameters such as lipophilicity, molecular weight, hydrogen-bond donors, hydrogen-bond acceptors, cLogP value, Ghose violations, and Lipinski violations. Ligands/natural compounds were selected based on adherence to the soft or classical Lipinski's rule of five. The selected ligands that did not incur more than 2 violations of Lipinski's rule were further used in molecular docking experiments with the target protein. The drug scanning results (Table 2) show that most of the tested compounds in this study were accepted by Lipinski's rule of five. These compounds were selected for docking to find their binding affinity with the COVID-19 main protease M pro .
A list of compounds with suitable ADME properties is given below (Table 2). The target prediction analysis was performed for our two best compounds, amentoflavone and guggulsterone, and the top 15 results were displayed as pie charts (Figure 2). The pie chart for amentoflavone predicts 20% Family A G protein-coupled receptors, 13.3% kinases, 13.3% enzymes, 13.3% unclassified proteins, 6.7% phosphatases, 6.7% proteases, 6.7% oxidoreductases, 6.7% primary active transporters, 6.7% secreted proteins, and 6.7% ligand-gated ion channels. The pie chart for guggulsterone predicts 40% nuclear receptors, 13.3% cytochrome P450, 13.3% secreted proteins, 13.3% oxidoreductases, 6.7% membrane receptors, 6.7% fatty acid-binding protein family, and 6.7% enzymes. The output table consisting of Target, Common Name, UniProt ID, ChEMBL ID, Target Class, Probability, and Known actives in 2D/3D is given in the Supplementary material. The probability scores obtained for amentoflavone and guggulsterone range from 1.0 to 0.0868 and from 1.0 to 0.101672, respectively, suggesting that these small compounds may have a high affinity towards the specific binding sites to which they are directed.
Molecular docking
Molecular docking is an extensively used in-silico approach to predict protein-ligand interactions. The docking analysis was performed using the structure and the amino acids found in the active site pocket of 6LU7. 6LU7 is the main protease (M pro ) of COVID-19, whose structure has been solved and deposited in the PDB databank.
Thereafter, ligand-protein docking was performed, and the interactions were determined based on the binding affinity of our compounds. Each individual analysis gave positive results, suggesting that the selected natural compounds may directly inhibit the COVID-19 main protease M pro . The 14 selected natural compounds were docked with the COVID-19 main protease M pro along with the standards ritonavir and lopinavir to compare the results. Further, like previous findings, our results also indicated a good binding affinity of ritonavir and lopinavir to the COVID-19 main protease M pro . The results obtained are as follows: Due to technical limitations, Table 2 cannot be displayed in the text. Please find Table 2 in the supplemental file section. Table 2 shows the molecular docking analysis results for selected natural compounds against the COVID-19 main protease M pro (PDB-6LU7). Figure 3 and Figure 4 can be found in the figures section.
Due to technical limitations, Table 3 cannot be displayed in the text. Please find Table 3 in the supplemental file section. The binding residues and their chains were identified from the protein-ligand complexes as shown in the above images.
Molecular dynamics (MD) simulations
To further investigate the molecular docking results, the top four natural compound complexes, namely those of amentoflavone, guggulsterone, puerarin, and piperine, were subjected to 50 ns MD simulations. The conformational stability and flexibility of the complexes were analyzed using various parameters such as root mean square deviation (RMSD), root mean square fluctuation (RMSF), solvent accessible surface area (SASA), radius of gyration (Rg), hydrogen bond formation, and the binding affinity of the phytomolecule complexes obtained with mmpbsa. The RMSD is a commonly used measure of conformational perturbation during the simulation of macromolecular structures; the RMSD of the Cα atoms relates to the stability of the complexes. The time-dependent RMSD was followed from the initial stage of the simulation up to 50 ns. The RMSD of the backbone of these four complexes lies between 0.231 and 0.50 nm and stabilizes at 35 ns, whereas the RMSD of the ligands ranged from 0.35 to 0.96 nm (Figure 5a). The RMSD of the protein backbone of all systems was small and comparable, which suggests that the binding of the ligands does not lead to conformational perturbation during the simulation (Figure 5b). During MD simulations, the RMSF defines the residual flexibility relative to the average position. The RMSF of the protein ranged from 0.2 to 0.4 nm for all systems (Figure 5c). Some amino acids show high-intensity peaks, which may represent loop regions. The presence of low-intensity peaks revealed that binding of the phytomolecules does not affect the stability of the structured regions of the enzyme.
In MD simulations, the Rg describes the compactness of the protein as influenced by the movement of a ligand. Lower fluctuation of the Rg during the simulation is associated with greater structural stability of the protein.
The Rg values of all phytochemical complexes lie between 2.05 and 2.20 nm (Figure 5d). The Rg values of all four phytochemical complexes support their consensus architecture as well as size. The SASA is associated with the exposure of hydrophobic residues during the simulation and plays a principal role in van der Waals interactions. The SASA values of all systems lie between 125 and 150 nm 2 . The SASA profiles showed that the binding of the ligand molecules does not affect the overall folding of the protein (Figure 6a).
In a protein-ligand complex, hydrogen bonding plays a critical role in determining the strength of the interaction. During the simulation time, several hydrogen bonds formed between the donor and acceptor groups (Figure 6c). Two hydrogen bonds were consistently maintained throughout the simulation (Figure 6b). Overall, these observations indicated that all four complexes are stable during the simulation.
Table 4. Binding free energy calculation of four stable complexes during simulation
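The trajectory analyses described above can be scripted against the production run. The sketch below assumes the gmx tools from the same GROMACS installation; the index-group names ("Backbone", "Protein", and the ligand group "UNK") are placeholders that depend on the index file actually used.

import subprocess

def gmx_analyze(tool, extra, groups):
    """Run one gmx analysis tool, feeding the index-group selection on stdin."""
    subprocess.run(["gmx", tool, "-s", "md.tpr", "-f", "md.xtc", *extra],
                   input=groups, text=True, check=True)

gmx_analyze("rms",    ["-o", "rmsd.xvg"],         "Backbone\nBackbone\n")  # backbone RMSD
gmx_analyze("rmsf",   ["-o", "rmsf.xvg", "-res"],  "Protein\n")             # per-residue RMSF
gmx_analyze("gyrate", ["-o", "gyrate.xvg"],        "Protein\n")             # radius of gyration
gmx_analyze("sasa",   ["-o", "sasa.xvg"],          "Protein\n")             # solvent accessible surface area
gmx_analyze("hbond",  ["-num", "hbnum.xvg"],       "Protein\nUNK\n")        # protein-ligand hydrogen bonds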
Toxicity analysis
In-silico toxicities of selected natural compounds were predicted by using ProTox-II. As shown in Table 4, ProTox-II toxicity prediction was done to check the safety of the compounds based on two major toxicity end points, hepatotoxicity & cytotoxicity. According to the toxicity analysis, none of the selected natural compounds showed potential hepatotoxicity or cytotoxicity except Pectolinarin which showed potential cytotoxicity.
Due to technical limitations, Table 5 cannot be displayed in the text. Please find Table 5 in the supplemental file section. The similar compounds of guggulsterone predicted by the software have probability scores ranging from 0.995 to 0.009. This suggests that these compounds could be very important and unique from a pharmaceutical perspective and need to be explored in vitro and in subsequent pre-clinical and clinical trials.
Due to technical limitations, Table 6 cannot be displayed in the text. Please find Table 6 in the supplemental file section. This protease is considered an attractive target as it is essential for virus functionality, replication, and entry competence (Jin et al., 2020). The main protease M pro has also been investigated as a potential target to inhibit previous coronavirus infections such as SARS and MERS (Jo et al., 2020). This study aimed to screen the natural compounds based on their pharmacokinetic properties, drug-likeness, and ability to specifically bind to the active sites of the SARS-CoV-2 main protease so that these leads can be proposed as potential inhibitors to check the virus replication cycle. Lopinavir and Ritonavir are well-known protease inhibitors of HIV (Israr et al., 2011). Both drugs were also recommended as repurposed drugs in the treatment of SARS and Middle East respiratory syndrome (MERS) (Chu et al., 2004). Therefore, in this study we have taken these drugs as standard reference drugs to compare the efficacy of the binding of our selected compounds. In our in-silico prediction experiment, none of the selected compounds showed hepatotoxicity or cytotoxicity except Pectolinarin, which showed potential cytotoxicity. The compounds which were found to potentially inhibit the viral protease based on the binding energy were Amentoflavone, Guggulsterone, Puerarin, Piperine, Maslinic acid, Apigenin, Epigallocatechin, Daidzein, Xanthohumol, Resveratrol, Luteolin, Cyanidin-3-O-galactoside, Pectolinarin, Herbacetin, Rhoifolin, Ganomycin B, Phloretin, and Crocetin. Among these, Amentoflavone and Guggulsterone were the top two leads, showing the lowest binding energy and satisfying our studied parameters.
Therefore, we propose that these natural compounds may further be validated as potential inhibitors of the COVID-19 main protease M pro . Our promising findings, based on a preliminary in-silico analysis, could become a basis for further studies at the in-vitro and in-vivo levels in order to use these compounds as potential inhibitors of the SARS-CoV-2 protease.
Declarations
Conflict of interest
Figure 3. Histogram showing molecular docking results between COVID-19 main protease Mpro (PDB-6LU7) and selected natural compounds (the binding energy value ΔG is shown in minus kcal/mol). *Reference compounds.
Supplementary Files
This is a list of supplementary files associated with this preprint: table6.pdf, table5.pdf, table3.pdf, table2.pdf.
"year": 2020,
"sha1": "3775a0f5165316cb7b23d8f43c6f0a58cd2bfc9b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-22839/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4a52294931d46d1d15b5a958dbd55694ccf96d85",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Biology"
]
} |
244890135 | pes2o/s2orc | v3-fos-license | Emergency Laparoscopic Management of Perforative Peritonitis: A Retrospective Study
Background Peritonitis was previously considered a contraindication for minimally invasive surgery due to the risk of malignant hypercapnia (raised partial pressure of carbon dioxide, PCO2) and toxic shock syndrome. The objective of this retrospective study was to evaluate the role of laparoscopic surgery (LS) in selected patients with perforative peritonitis and to study its feasibility, safety, and outcomes. Patients and methods This was a retrospective study of 25 patients spanning five years, from 2015 to 2020. The study comprised all patients who were diagnosed with perforative peritonitis on preoperative physical/clinical examination and radiological evaluation and who were stable enough to withstand pneumoperitoneum. Patients were evaluated for causes, operative time, duration of hospital stay, intra- and postoperative complications, time taken to resume normal activity, and conversion to open surgery. Data were extracted from the hospital electronic medical records for the above-mentioned parameters. Results Twenty-five patients with perforative peritonitis underwent diagnostic and therapeutic LS in our institute. The mean age was 46 years (35-79 years). Ten patients (40%) were diagnosed with gastro-duodenal perforation. Out of these ten patients, nine patients (90%) were managed totally laparoscopically, while one patient (10%) required conversion to open surgery. There were 15 patients (60%) with small bowel perforation. Thirteen of the 15 patients were managed laparoscopically, with the remaining two requiring conversion to open surgery. The average time taken for the procedure was 90 minutes. The mean time to initiate the postoperative peroral liquid diet was 3.4 days. The mean postoperative stay was 6.9 days. The time taken to resume normal activity was 10-12 days. Conclusions Laparoscopic management is feasible and safe for patients with perforative peritonitis. Careful patient selection and the surgeon's experience with the procedure are critical determinants of success.
Introduction
Perforative peritonitis is one of the most common causes of abdominal surgical emergency and intervention. A meticulous surgical technique with systematic evaluation of the peritoneal cavity, complete evacuation of purulent fluid, optimum closure technique, and thorough peritoneal toilet are mandatory for good outcomes.
Historically, peritonitis was considered an absolute or relative contraindication for laparoscopic surgery (LS) due to multiple factors and arguments [1,2]. Firstly, the theoretical risk of hypercapnia due to increased absorption of carbon dioxide is directly related to increased intraabdominal pressure (IAP), infection, and inflammation. Secondly, the risk of toxic shock syndrome due to increased IAP results in the passage of toxins and bacteria into the general circulation. Lastly, the surgeons opted not to use laparoscopic therapy for perforative peritonitis due to inflamed and friable bowel, limited working space, and difficulty manipulating the bowel [3,4].
However, greater acceptance of laparoscopy in recent years has encouraged surgeons to use it due to its proven benefits of less pain, short hospital stays, faster recuperation, and decreased morbidity [4][5][6]. Performing diagnostic laparoscopy in cases of suspected viscus perforation or peritonitis has the advantage of identifying an occasionally unexpected pathology. If favorable abdominal pathology is discovered, it can be managed and repaired laparoscopically. If conversion is required, laparoscopy still has the advantage of allowing a more selective and shorter laparotomy incision. According to the European Association for Endoscopic Surgery (EAES) guidelines, laparoscopy is no longer an absolute contraindication in cases of the peritonitic abdomen [7,8,9].
Encouraged by multiple studies and their findings, we extended the many benefits of LS to patients with localized or generalized peritonitis and conducted this retrospective study to analyze the outcomes.
Materials And Methods
We performed a retrospective review of patients diagnosed with peritonitis, who underwent LS at our institution, from 2015 to 2020. Our study included all patients who were diagnosed with perforative peritonitis based on imaging investigations. Intravenous fluid resuscitation and the prevention of secondary organ dysfunction were of the utmost importance for treating these patients. Empirical broad-spectrum systemic antibiotic therapy was initiated at admission in line with the hospital's antibiotic policy (injection ceftriaxone 1 gm intravenous and then BID for 5-7 days, injection metronidazole 500 mg intravenous and then TID for five days), and the further course was tailored according to the antibiotic culture and sensitivity report of the infected peritoneal fluid. Nasogastric tube for decompression, per urethral catheter, and adequate analgesia was instituted immediately on admission. Before the surgical intervention, dyselectrolytemia and coagulation abnormalities, if present, were corrected to the maximum extent possible.
In our institute, only those patients of perforative peritonitis who satisfy the following criteria are subjected to LS: all patients with hollow visceral perforation, as confirmed by a plain X-ray chest with domes showing free gas under the right diaphragm dome ( Figure 1A) or plain computed tomography of the abdomen revealing free gas in the peritoneal cavity; patients who present with peritonitis early, i.e., within the first 24 hours of the onset of acute symptoms. Our exclusion criteria were: hemodynamically compromised and unstable patients (blood pressure less than 90/60 mmHg, pulse more than 110 beats per minute); patients with irreversible coagulopathy or hypercapnia greater than 50 torrs; patients with previous extensive open abdominal surgeries (two or more); patients with compromised cardiovascular or respiratory systems; patients with known cases of underlying conditions, such as Crohn's disease, ulcerative colitis, diverticulosis and diverticulitis.
Data for the following parameters were extracted from the hospital's electronic medical records: sex, age, surgical result (time/procedure/result), the reason for the conversion (where applicable), etiology of the perforation, perioperative and postoperative complications, duration of hospital stay, and mobility. We monitored the patients for two weeks after surgery.
Preoperative Preparation
Preoperative preparation is in the form of adequate intravenous fluid replenishment, antibiotic initiation, and appropriate antithrombotic prophylaxis. Nasogastric decompression should always be conducted in each patient. It facilitates decompression of the stomach and the rest of the proximal bowel, which is usually in the ileus due to peritoneal contamination. This minimizes the risk of aspiration while under anesthesia, and deflation of the distended bowel enables relatively easier and safer handling during surgery.
Patient Position and Surgical Technique
The patient is placed in a supine position with legs straight and split. He/she is firmly strapped-fixed to the table at the lower chest level, allowing for steep Trendelenburg, reverse Trendelenburg, and right and left lateral positions. The pressure points and contact areas are adequately padded. In patients without scars from previous surgery on the abdomen, the umbilical area is our preferred point of entry (supra or infra umbilical, depending on the adequacy of the umbilicus to pubis distance). If the bowels are extremely distended and the abdomen too tense, we prefer to perform a direct 10-mm blunt trocar insertion by the open technique. In other situations, where the nasogastric tube has aspirated copious amounts and the abdomen is relatively soft, we prefer to institute pneumoperitoneum through the targeted site using conventional Verress's needle technique.
In patients who have scars from previous abdominal surgery, we prefer to institute pneumoperitoneum through the Verres needle at Palmer's point (a relatively safe point for entry, on the left midclavicular line two finger breadths below the costal margin). Then, we insert a 5-mm trocar at the same point and a peripheral "bird's eye view" of the abdomen was obtained using a 5-mm telescope inserted through this trocar. Central trocars are then inserted, carefully avoiding any adhesions (if present), under the vision provided by this 5-mm telescope. Dense adhesions, if present, are first lysed through additional peripheral trocars inserted in "safe areas," before inserting the central trocars. After inserting the central 10-mm trocar, we switched to a 10-mm telescope. The operating surgeon stands in between the patient's legs while the camera assistant surgeon stands on the patient's right side and the second assistant surgeon stands on the patient's left side. Then, we insert the right-and left-hand 5-mm working trocars on either side of the umbilical area 10-mm optic trocar. We then examine the upper abdomen and the gastro-duodenal area for obvious perforations. Also, a careful inspection of the nature of the peritoneal contaminating fluid provides a reliable clue as to the site of perforation.
A bilious or non-bilious contamination fluid with or without food particles but without feculent odor ( Figure 1B) indicates upper gastrointestinal perforation. In these cases, an additional 5-mm trocar is inserted in the left lateral abdomen. Through this, the assistant's atraumatic grasper is inserted. This instrument grasps the anterior wall of the stomach and retracts it laterally to adequately expose the prepyloric or duodenal perforation. The gastro-duodenal perforations were sutured closed using the two working trocars with 2-0 or 3-0 silk, with simple interrupted sutures ( Figures 1C, 1D). The individual stitch ends were kept long (Figures 2A, 2B). After optimum suture closure of the peptic perforation, an omental patch was mobilized and placed over the suture line ( Figures 2C, 2D, 3A). The long ends of the sutures were then tied around this omental patch, to maintain its position ( Figure 3B). Before attempting suture closure in suspected malignant gastric perforations, we performed an edge biopsy. If the sutures cut through the indurated suspected malignant tissue, or otherwise through the edges of a perforated chronic peptic ulcer, just an onlay omental patch is fixed in place without prior suture closure of the perforation, with sutures taken farther away from the edges of the perforation. After sucking out all the peritoneal contamination, a thorough peritoneal toilet was administered with normal saline. This is where laparoscopy can be inferior to open surgery on two counts: (i) thick pus and pus flakes are difficult to suck out into a 5-mm suction cannula, and (ii) the optimum visualization of all the recesses, nooks, and corners of the peritoneal cavity is not always guaranteed. Hence, there is a genuine risk of missing collections and formation of postoperative intraperitoneal abscesses. The first issue, thick pus and pus flakes not getting sucked into a 5-mm suction cannula are solved by careful and judicious use of a 10-mm suction cannula. A 10-mm cannula sucks out the difficult pus flakes but poses a real risk of causing "suction injury" to the vulnerable bowel. We avoided this by utilizing the 10-mm cannula only when its tip is visible and in relatively open spaces. To solve the second issue, our unit has implemented the "as many trocars as it takes" policy. This means that we will never hesitate to insert 1-2-3 additional trocars at appropriate places to retract dilated bowel obstructing the view, suck out residual collections, and provide a toilet; even after the main phase of the surgery -the suture closure of the perforation is completed. This enabled us to provide significant and comprehensive peritoneal toilets to all of our cases, and we have not had a single postoperative infected intraperitoneal collection in any of our patients in this series. At the end of the operation, a 32 Fr tube drain was inserted through the right lateral working trocar site, placed in situ in Morrison's pouch, and suture-fixed to the skin. For patients with massive peritoneal contamination, we inserted an additional drain in the pelvis through the left-hand working trocar site.
When the peritoneal contamination is feculent (color and/or odor), we suspected a lower gastrointestinal, i.e., small bowel perforation. We inserted the suprapubic 10-mm trocar after evaluating and ruling out an upper gastrointestinal perforation, which then became our primary optic trocar. An additional 5-mm trocar was inserted in the right iliac fossa (RIF). Once the telescope is shifted to the suprapubic trocar, RIF and umbilical trocars become the surgeon's left and right-hand working ports, respectively. A systematic "bowel walk" was now initiated, starting from the ileocecal junction to the duodenojejunal flexure. We believe that the suprapubic optic trocar provides an optimum view of the central abdomen, thus enabling accurate identification and localization of the pathology and eventual therapy. We believe that while dealing with dilated bowels with a suboptimal view, one should not hesitate to insert an extra trocar or two at optimum locations to enable insertion of extra instruments, such as the fan retractor, for better atraumatic retraction of dilated bowel and safer surgery.
Once the small bowel perforation had been located, we conducted a careful inspection to determine the size of the perforation and any accompanying findings, such as a stricture, adhesive band, etc. In such scenarios, segmental resection and stapled cum sutured end-to-end small bowel anastomoses are preferable. In standalone perforations, the edge was freshened by excising a thin sliver of tissue circumferentially along the edge of the perforation. The same specimen was also sent for histopathological examination. Then, the perforation was suture closed using 3-0 silk in two layers, using inner continuous and outer simple interrupted sutures. An omental wrap was performed around the suture line if the bowel is extremely inflamed and friable. The remaining steps of peritoneal toilet and drainage tube(s) insertion remain the same. However, only the site of the right drain differs. It was placed near the anastomosis.
Results
This study comprised 25 individuals who had perforative peritonitis and underwent LS up until March 2020. There were 15 males (60%) and 10 females (40%) with a mean age of 46 years (35-79 years). Ten patients (40%) were diagnosed with gastric and duodenal perforations. Out of these 10 patients, nine (90%) were managed laparoscopically, while one patient (10%) required conversion to open surgery due to massive peritoneal contamination and unclear anatomy. Peptic ulcer disease was the etiology of all our gastroduodenal perforations. All 10 patients with peptic gastro-duodenal perforations gave a history of taking intermittent empirical treatment with proton pump inhibitors (PPI) for acute peptic gastritis, on the advice of the medical gastroenterologist. However, none of them were actively on PPI therapy just prior to the occurrence of the surgical emergency. Out of the 10 patients with gastro-duodenal perforations, nine had duodenal (anterior wall of the first part of the duodenum-D1) perforations (Figure 3C), while one had a prepyloric perforation anteriorly on the stomach wall (Figure 3D). We performed an edge biopsy on this one patient before suture closure. There were 15 patients (60%) with small bowel-ileal perforations. Out of these 15 patients, 13 (87%) were managed laparoscopically, while two (13%) required conversions due to unclear anatomy and technical difficulty. Out of the two patients who were converted to a laparotomy, one had a tuberculous stricture-perforation complex and could not be managed laparoscopically due to technical difficulty caused by the surrounding massively dilated small bowel.
The second converted patient had an isolated terminal ileal perforation due to acute abdominal trauma caused by a road traffic accident. This patient also had a large small bowel mesenteric tear that caused hemoperitoneum, which compromised optimum visualization. Seven of the 13 patients with small bowel perforation managed successfully by laparoscopy underwent suturing of isolated small perforations ( Figures 3E, 3F), while six required segmental resections with end-to-end small bowel anastomosis. Histopathology revealed that all seven of these patients had intestinal (typhoid) perforations. All the six patients who needed a small bowel resection anastomosis had a stricture-perforation complex. Histopathology revealed that all of these were tuberculous in origin. In the six patients who required a small bowel resection anastomosis, the same was performed totally laparoscopically using blue cartridges loaded on an Endo GIA (Medtronic, Dublin, Ireland) surgical stapler.
We divided the mesentery using a harmonic scalpel. At the end of the anastomosis, the mesenteric defect was suture closed. A systematic and thorough "bowel walk" was also performed in these six patients to identify/rule out additional concurrent strictures, both proximal and distal to the perforation. Furthermore, the ileocaecal junction was visualized and palpated with a "soft" atraumatic bowel grasper to identify concurrent ileocaecal tuberculosis. None of these six patients had any concurrent additional small bowel strictures or ileocecal tuberculosis. In our study, only three patients (12%) required conversion, while the other 22 cases (88%) were effectively managed laparoscopically. The operation took an average of 90 minutes to complete. The mean time to start an oral liquid diet was three to four days. The mean postoperative stay was 6.9 days. The time taken to resume normal activity was 10-12 days. Mild complications were observed in four (16%) patients during the immediate postoperative period and two weeks after surgery. Three patients were treated conservatively for minor trocar site wound infections (two with grade IIA and one with grade IIB according to the Southampton wound grading system). All these three patients were cases of solitary typhoid perforations of the terminal ileum. The remaining patient experienced mild paralytic ileus, which got resolved post electrolyte supplementation. None of the patients in this series had a postoperative leak through the suture/staple line. None of the 14 patients (one prepyloric ulcer perforation and 13 small bowel perforations) who underwent a histopathological study of their operative specimens, had a malignancy. Table 1 shows the patient demographics, perioperative data, and etiological information of all the patients of this series.
Discussion
Open surgery, i.e., laparotomy, has conventionally been the standard of therapy for patients with perforative peritonitis all over the world. The feasibility of laparoscopy in the acute abdomen is reported to be approximately 90%, and as high as 98% in some series, as reported by Kirshtein [10]. In the management of abdominal emergencies, the absolute and relative contraindications are essentially the same for laparoscopic and open procedures [11][12]. However, for peritonitis, there is a concern that pneumoperitoneum (and the associated increased CO2 absorption) may enhance bacteremia and endotoxemia due to increased IAP [7,13,14]. In their study investigating the influence of laparotomy and laparoscopy on local and systemic inflammation, C.A. Jacobi et al. concluded that the inflammatory response was significantly higher after laparotomy [15]. Acute phase reaction markers, e.g., ceruloplasmin, C-reactive protein (CRP), fibrinogen, haptoglobin, and serum lactate, were lower after laparoscopy than after laparotomy. Over the last few years, there has been an increase in the number of studies supporting laparoscopy for peritonitis. We also agree with the EAES clinical guidelines in favor of LS and the creation of pneumoperitoneum in the peritonitic abdomen [7]. The diagnostic accuracy of laparoscopy was 100% in our study, compared with 89%-100% reported in the international literature [13].
Laparoscopic Surgery's high and specific diagnostic yield is critical, especially in patients with suspected gastrointestinal perforation (peritonitis), since it allows for a better and thorough examination of the peritoneal cavity and detection of concomitant diseases. In cases of unclear preoperative diagnosis, laparoscopy can shorten the observation period and avoid the need for exorbitant haematological and radiological investigations [14,15].
Laparoscopy allows us to perform the same surgical procedure as open surgery. Many patients with peritonitis do not have an evident perforation, but rather an inflammatory necrotic zone (e.g., edema/abscess formation). Such patients can be safely treated with peritoneal lavage and broad-spectrum antibiotic therapy. This may allow us to arrange a second LS for the underlying pathology if necessary, such as an elective sigmoid resection in patients with diverticular disease [16,17]. A surgeon should never contemplate conversion to open surgery as a defeat. Instead, he/she should constantly keep in mind that by adopting the LS method, he/she can select the most appropriate incision for the patient if the decision to convert to open surgery is made. The results of our study indicated the compatibility and feasibility of LS in the management of selected cases of peritonitis.
The complications can undoubtedly reduce with careful patient selection, increased skill, and confidence with the surgical technique. Although the exact economic benefits of LS are difficult to quantify, it significantly reduces wound infection rates. More importantly, it completely negates the possibility of major wound complications, such as incisional hernias and burst abdomen, both of which would require additional surgical correction, thereby increasing patient suffering and the financial burden on the healthcare infrastructure. Therefore, we believe that the total prevention/avoidance of major wound complications is the most significant advantage of laparoscopic management of selected cases of perforative peritonitis. Also, it leads to faster recovery and return to work [18]. Lastly, a small high-pressure operating room with well-trained and experienced surgeons working with a well-trained team is necessary for the procedure's success. In our study, all the procedures were performed by a single surgeon with extensive experience and expertise in advanced laparoscopic surgeries. | 2021-12-05T16:11:10.477Z | 2022-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "c1fe82559a8e2ad9f687cc5cc8c9c23977a53401",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/77937-emergency-laparoscopic-management-of-perforative-peritonitis-a-retrospective-study.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1fe82559a8e2ad9f687cc5cc8c9c23977a53401",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211465633 | pes2o/s2orc | v3-fos-license | Industry-Based Popular Music Education: India, College Rock Festivals, and Real-World Learning
Until recently, opportunities for formal music education in India were few. Music education at large universities concentrated exclusively on Indian classical music. Western popular music was largely the domain of Bollywood. With the rise of the Indian middle-classes in the 1990s, more Indian families began sending their children to school to study a range of disciplines. These students joined rock bands and major Indian colleges began to host rock festivals for student rock bands. Today, nearly every significant rock band in India originated in these festivals. Our research investigates the development, cultural significance, and educational importance of college rock festivals. Interviews were undertaken with established and emerging independent musicians, educators, and music industry professionals. Given the importance of learning within the informal communities of universities and college rock festivals, we adopted a communities of practice theoretical framework informed by grounded theory methodology. We find that, despite the emergence of popular music education in India, college rock festivals continue to educate young Indian musicians on technique, performance, songwriting, and music business.
is to work in. Educational institutions have responded by increasing offerings in popular music performance, songwriting, and music industry. The costs of this music education are borne by governments or by relatively wealthy parents. However, the picture is rather different in developing countries. The music industry of economically emergent nations is more fraught, less developed, and more insecure than in the West (Dumlavwalla 2019;Olugbenga 2017;Fink et al. 2016;Arli et al. 2015;Mascus 2001). Given this, popular music education is often regarded as a risky investment by parents of aspiring musicians. They may steer them towards engineering, business, and other more sustainable and lucrative career choices.
The Indian popular music scene falls into this tradition. Music in India, like the land itself, is a complex, teeming, vast salad bowl of different traditions and influences. Bonnie Wade (1999) once said that India is such a vast and teeming country that, for any statement one makes of it, the opposite will be true in another part of the country. The music of India includes such disparate traditions as the Carnatic and Hindustani classical traditions, the wildly popular music of Bollywood, wedding music, brass band music, Hindi EDM, Sufi rock, and Western-influenced rock, to name only a few. The music industry, such as it is, focuses primarily on the music of Bollywood and music in the Indian classical traditions. The Indian singer/songwriter rock tradition, the sector most analogous to the Western music industry, is perhaps only thirty years old. At the time that this tradition was emerging, India did not possess the same industry scaffolding. What record labels there were focused on the music of Bollywood. Other traditions were underground "cassette cultures" (Manuel 2001). At the start of the 1990s, there were few established popular music record labels, no industry press, and no music colleges that catered to the Western rock traditions. And yet an underground scene started to gain traction in the first years of the decade. Small venues began to open. Festivals both large and small began to evolve. Bands began to play and record. However, with no established music industry or a resultant popular music education sector, musicians were often left to work out the industry on their own.
Further challenges for this emerging industry arise from a culture that has not developed a bar or nightclub tradition. By comparison, in the West, live music has largely been associated with alcohol and nightlife. Indeed, venues and those using music often lack a basic understanding of the music industry. An Indian restaurateur said: Good Lord. It is my job to sell food! Am I going to spend my time negotiating music license fees, finding out which track belongs to whom and checking the legality of these people asking for money? I do not give a rat's ass about who is supposed to be paid, whether performers, composers, authors, publishers or music labels. It is beyond my comprehension and hey, when I bought your CDs and cassettes, I made a one-time payment that covered everyone, isn't it? I would rather not play music than deal with this nonsense! (Churamani 2019) A music professional/educator based in Tamil Nadu (in the South of India) explained this lack of support many musicians in the West take for granted: A large part stems from a social stigma against the club scene in India, it is viewed as something unacceptable for many, especially youth, to attend. Due to this there are great restrictions in many states on alcohol licenses and late licenses for nightlife. Also, many college age students would not think of "going for a night out" as something to do. Many colleges have strict curfews and monitoring of students' activities. The laws on these things do differ state to state. For example, Maharashtra (Mumbai) just raised its legal drinking age to 25, matching Punjab and Delhi.
This paper considers how musicians develop their music industry knowledge and performance skills without the scaffolding of either formal popular music education or an established popular music industry, locating the college festival as a significant part of the independent music scene. Our informant continues to explain: The college fest is seen as a safe environment that occurs within a learning environment that is trusted by Indian families, and so these events have grown majorly into large scale festivals.
We find that people learn about music industry through direct engagement with it, particularly through performance at commercial and college rock festivals.
Authors Kelman and Cashman have been studying the popular music scene of India since 2013. On our travels we have interviewed dozens of music educators, industry personnel, and practitioners. These interviews form the data for our continuing research project into the Indian popular music scene. Although the focus of this paper is Indian college festivals, this research has implications for informal popular music festivals in other geographic locations where learners eschew the traditional university-based music education system and learn from each other through a community of practice.
College Rock and Indian Education
India is a land of universities. In the 2018 All India Survey On Higher Education (AISHE) report, there were 903 accredited universities throughout India, a figure dwarfed by the number of colleges (39,050) and standalone institutions (10,011) (AISHE 2018). Many of these institutions offer programs in traditional Indian music, from bachelor's degrees right through to doctoral degrees. Despite these offerings, the fine arts (in which music resides) graduate relatively small numbers of students (8,926 in 2018) in contrast with social science (172,921), information technology (158,108), management (123,189), and law (72,486) (AISHE 2018). Further, no universities or colleges offer programs in contemporary music, leaving this discipline to standalone institutions. These music colleges are a fairly recent phenomenon. Some, such as the Global Music Institute in Delhi (founded in 2011), The True School of Music in Mumbai (founded 2013), and Swarnabhoomi Academy of Music (SAM) in Tamil Nadu (founded 2010), are affiliated with Western institutions and offer some form of accreditation. Others, such as the One World College of Music in Delhi, remain unaccredited except for affiliations with Western examining bodies such as Trinity College and AZCAM and classical Indian music boards such as the AMEC and PRSSV. Throughout urban India, individual teachers maintain teaching practices, sometimes addressing contemporary music as well as rock. Sometimes entrepreneurs will open teaching practices and employ teachers to teach on their behalf.
There are some common features and challenges in these early stages of Indian popular music education, mostly due to the nascent stage of the sector and the difficulty of operating a Western music college in a developing economy. Three common areas of difficulty are balancing manageable fees with operational viability, the lack of tertiary-qualified music professionals in India, and the accreditation of degree programs. These colleges have not been in existence long enough to have a major impact upon the Indian music industry. Most emerging and professional Indian pop and rock musicians that we interviewed have not studied music formally; they have learned through a more informal, peer-learning approach during their time at college while studying other disciplines. Despite their lack of formal offerings in rock music, Indian tertiary institutions, like their counterparts in the West, have always been hotbeds of student rock bands and amateur performance.
Indian universities have long held arts festivals that have a rock music component. In 1971, a rock festival modeled on Woodstock, Sneha Yatra (Love Journey), was held in Maharashtra. The festival: …had around 4,000 attendees and featured rock bands alongside Indian classical musicians, including Amjad Ali Khan, all of whom performed for free. A journalist for the Junior Statesmen named Mirra wrote of the crowd present, "They came mainly for the atmosphere-three days to be just what you feel like with thousands of others like yourself." At that time, anyone who listened to rock or psychedelic music was still looked at as an anti-social element. (Mint 2015) A band competition hosted by Simla cigarettes was run between 1967 and 1972. However, the market for Western bands was minimal, and the music was regarded as anti-Indian: Western-style rock fell outside socially permissible entertainment. Throughout the 1980s, universities and colleges held arts festivals with a broad focus. With Manmohan Singh's liberalizing economic reforms in the early 1990s, India commenced a sustained period of economic growth. This gave rise to an increasingly significant Western-influenced Indian middle class. Two musical phenomena emerged from the growth of the middle class. On one level, this Western-leaning group, with greater disposable income, sought Western music to consume. This middle class also began to send their children to universities in large numbers for education, swelling university numbers. This growing number of university students with exposure to Western music led to increasing numbers of bands emerging. To address the issues of limited performance opportunities and venues, from the mid-1990s universities began to organize college festivals specifically focused on Western-style rock. Bands playing in these festivals were populated by students studying parent-approved degree programs in engineering or finance or whatever. A street press arose, personified in the music magazine Rock Street Journal, which launched in 1993 and which inaugurated the commercial "Great Indian Rock Festival". Overall, this was a huge boost to Indian rock music.
The college rock festivals, in the meantime, became so successful and so overwhelmed by amateur bands that many festival organizers began to impose additional requirements. The first was that bands must have a record album, and many bands consequently began recording vanity albums of limited circulation, but enough to satisfy the requirement. Then organizers added the requirement that those albums must include original music, which contributed to the rise of an Indian songwriting tradition. The importance of these festivals cannot be overstated.
Campus festivals have been pivotal to the formation of rock bands and their survival during the nineties, when Independence Rock was the biggest event on the gig calendar and when the idea of a music festival was entirely implausible for both bands and their audience. Almost two decades on, campus festivals continue to hold a significant place in an Indian band's career graph. Says Yohan Marshall, vocalist of Mumbai-based jam band The Family Cheese, "If you want people to know you in a city, the best thing to do is to play a college festival." (Miranda 2014) It is a testament to the importance of these festivals that almost all of our informants, whether professional or developing, emerged from the college rock festival circuit rather than by attending local tertiary music institutions. These festivals are where these musicians learned about the industry, about playing in public, and about audience development. As a result, aspiring musicians learn about music performance and the music industry within several interacting and overlapping communities: individual bands, groups of friends, universities, college rock festivals, and the wider industry in general.
Conceptual Framework and Methodology
Music education in the high school years in India is not career-oriented. It exhibits low awareness of genres, a lack of facilities, a student-teacher based pedagogical approach, and a general underestimation of students' capabilities. Higher education traditional music programs are based on a guru-shishya (master-apprentice) model of teaching, quite contrary to how popular musicians have learned in the West. It is also significant to note that there are no formal music business courses within higher education in India, courses that could demonstrate to Indian families potential career pathways in music (Britto 2019). Given that the Indian popular musicians we have interviewed report learning within the informal communities of universities, college rock festivals, and fellow musicians, we have adopted a communities of practice theoretical framework informed by grounded theory data collection and analysis tools to explore how people learn about popular music practice and industry in India.
Communities of practice (CoP) as coined by Wenger (1998) are groups of individuals with shared interests who, through interacting with one another, learn how to do what it is they do in a better and more meaningful way. We find this model particularly relevant to the learning of popular music industry and practice in India given the lack of formal opportunities and that "learning is essentially a fundamentally social phenomenon, reflecting our own deeply social nature as human beings capable of knowing" (p. 5). Learning is a function of activity, context, and the culture in which learning is "situated" (Lave and Wenger 1991). Situated learning theory often refers to the idea of a "community of practice" as an informal, pervasive and integral part of our daily lives. CoP theory asserts that learning takes place when individuals within communities negotiate and renegotiate meaning. In this way, CoPs have the potential to expand and extend learning experiences and outcomes for both the individual and the community.
In order to understand college rock and the college festival as learning environments, we use Wenger's (1998) three broad stages of CoPs. These musical CoPs begin as places where people share interests and develop their competence together (engaging); they then move towards connecting to the broader social systems of which they are a part (imagining); and, eventually, they coordinate or align their achievements to apply their learning in a way that demonstrates its impacts or effects (aligning). Adopting a CoP framework acknowledges learning as emergent, heuristic, and the result of lived experience through participation in the industry and the world.
We have used a grounded theory analytical approach in order to help both the researcher and reader understand the meaning or nature of experience. Grounded theory, with its mantra of "everything is data," permits us to engage in a substantive exploration of these novel communities of practice, of which little is known and for which traditional data sources may be scanty (Strauss and Corbin 1998). Our grounded theory approach also recognizes the contextual elements that make the college festival phenomenon a learning environment different from others, and affords us the opportunity to engage with learning theories together with our shared vision for effective teaching and learning from the perspective of a professional musician. Between the two of us, we have conducted over thirty semi-structured interviews with both emerging and established musicians in India. Grounded theory analysis of this data generates greater understanding of the participants' points of view, with the opportunity for us to probe and expand the participants' responses (Hitchcock and Hughes 1989). All interviews were audio recorded and transcribed. We do not identify the participants due to ethical considerations and requests from some participants. After transcription, we applied the grounded theory approaches of Strauss and Corbin to the data set to produce a set of codes that apply to the broader CoP concepts of engaging, imagining, and aligning.
College Rock Festivals as Music Education Enablers
All three CoP concepts create relations of belonging that expand our identity through space and time in different ways. Most of what we do involves a combination of all three, though more emphasis on one or the other gives a distinct quality to our actions and their meanings. When students first encounter the college music environment, perhaps improve their instrumental technique, perhaps join a band, we consider this to be engaging with the community of practice of college rock music. When they perform in college rock festivals and begin a professional journey of performing for and engaging with audiences, dealing with fan development, learning the technical aspects of music, and encountering the judgement of festival judges, industry, and fans, we describe this as the imagining section of their industry journey. This develops further as they align their performance with the demands of the wider Indian music industry. They begin to utilize their industry connections made at college rock festivals. They develop sponsorship deals and apply branding. One of our informants described this startup process as: A bunch of friends just get together, or they get to know about each other. It's usually how this works, or at least at my college. It was like, they have auditions at the start of the year, where people come to showcase their abilities be it music, art, dance…and for me, I took part as a soloist, and a couple of my friends played guitar and bass. And that's how I got to know them. And in my final years, some juniors really caught my eye, so I asked him, "Dude do you wanna make a band, and start writing some music and start competing," and they were like, "Yeah I'm down with that." Before that, I'd met my friend Joe, through another friend, and he's from a different college, and we all met, and started hanging out together, and we put all our influences together to make a band, and out of the nineteen competitions in my final year we won seventeen.
A professional musician in Delhi started out as he: …went to Delhi University of Commerce and the Arts, and started meeting a lot of like-minded people who were into music. They were playing in local bands and circuits, so there we all formed a band, and that was a band that got me into the professional music circuit.
We do not regard the three concepts of CoP theory as a linear temporal progression, whereby one occurs when the previous is finished. Students do not "graduate" to college rock festivals from being musicians working in a band at a university. These areas are neither discrete nor progressive and can be engaged with in different orders. Many bands that have reached the stage of pursuing successful music careers in the music industry still return to play in college festivals.
Engaging: The College Music Environment as a Community of Practice
The college music environment, the bands, the collective learning, develops through the formation of the actual communities of practice, which is essentially about learning through engaging with people. Such a community of practice is comprised of a group of individuals who engage in a social network based on shared core values and knowledge in order to pursue a joint enterprise (Wenger 1998). At Indian universities, aspiring musicians create CoPs by engaging with other like-minded students. One of these enterprises, and the object of this paper, is the collective learning that takes place within the CoP. In the case of the college musical environment, community participants may learn instruments together, learn how to play in ensemble, collectively discover songwriting, and establish patterns of music industry behavior. Notably in this early stage of CoP development, people come together and establish norms and relationships. Learning is thus informal and peer-to-peer. A musician observed that: A lot of young musicians in college keep asking me, "How do you do it?" so whatever help I can offer them, I always do. All of my friends-we do that. It's like a really tight community here.
This mutual engagement defines the community as it draws on what participants do and what participants know, as well as on their ability to connect meaningfully with what they do not know and do not do; that is, to the contributions and knowledge of others (Wenger 1998). One of our informants noted that: There are some people who are self-taught, and others who had some extra-curricular lessons when they were young. I'm self-taught. I couldn't play guitar for shit, but I taught myself how to play, and learned from my friends as well. Joel taught me a lot of things. He'd gone for guitar classes and things. Then I'd teach him singing, whatever I know. And we'd grow together.
In the college musical environment, the processes of learning from the community involve such industry matters as developing repertoire, rules, tools, artifacts, documents, and identity formation. Everyone appears to learn from everyone else. One of our informants remembered: …composing songs when I was a college student, subjects [that were] quite localized to the events happening at the time, things like college life, exams, friends, relationships. 'Cause of the boredom and disinterests, I was always drifting towards learning music or trying to find an outlet of some sort of way to express myself.
The bounded character of engaging in CoPs has both strengths and weaknesses. CoPs form through mutual engagement, joint enterprise, and shared repertoire, and deep knowledge can be accumulated among the individuals and the collective. However, while a strong boundary formed around the CoP can indicate learning and cohesion and a critical competence, this can also make CoPs become hostage to their history; that is, they can become insular, defensive, closed in, and oriented only to their own focus.
CoPs cannot be considered in isolation from the wider communities in which they are located, and so our discussion moves to consider how CoPs continue to grow and evolve through the balancing act of developing deep competence at the core and straddling the risky unknown at the periphery or boundary of the CoP. It is these disturbances or discontinuities that perturb the CoP and thus spur the history of practice onward.
Imagining: The College Festival as Extending Learning
At college, once musicians have formed and engaged in CoPs, learned the basics of performing rock, their instruments, and how to play with each other, they tend to move toward popular music performance. The most obvious and accessible of these are the college rock festivals. As members interact, they negotiate new meanings and learn from one another. Learning musicians share their competence with others while, at the same time, developing their own competence.
We describe the learning experiences of the college rock festival as engaging with learning from other CoPs. Engaging is the first crucial step in the CoP stages of development as it gives students control of their own learning, which becomes the enterprise of the community. Imagining gives a sense of possible trajectories, and it is here that the college festival circuit provides a learning ground that extends beyond the boundaries of CoPs and transcends engagement. Imagination, according to Wenger (1998), enables us to recognize our own experience as reflecting broader patterns, connections, and configurations, and to push the CoP to conceive new developments, explore alternatives, and envision possible futures (178).
A college rock festival is a loose and reflective learning experience, but it also possesses a defined and organized operating procedure. One of our informants described the stages of a college rock festival: You had a band. If you wanted to play live, you had to first make a name for yourself. How you made that name for yourself was college competitions. That was sort of your stepping-stone. You had to play like three/four college festivals and make it to the finals. They have these preliminary competitions which are all-nighters where you got fifteen minutes to get on stage, set up your shit, play your set, and get off. And there'd be like twenty bands in one night on the rostrum. It would start at like eleven o'clock at night and go on till like seven am, in this one small auditorium on a college campus. By the end of the night they'd select five winners which would compete in the finals. Then there would be one final winner and the finals would be in front of an audience of say like seven thousand people, which was in the open-air theatre. That was one of the largest shows you could play. They were a large venue with like massive sound for like five or seven thousand people, it was a different experience which you never really got to do unless you were a big band or you got to play in the Great Indian Rock Festival.
Imagination is evident in the way that students recognize the transition from school to university. One informant observes, "It is college where most young people shine the most. Because there's a lot of opportunities for them, they can dream big." Another of our informants also describes possible futures and trajectories: In my first year, I was just trying to work out how this works. A lot of huge bands like F16s and all, they've made it quite big. They're signed to Universal. They have two million streams on Spotify. They're quite big in India, and a lot of huge acts in India, most of them start from college.
The process of being on stage in a performance setting encourages the development of real-life learning. However, by performing initially in heats, perhaps in the very early hours of the morning, the risks of failure are managed. These practical industry skills include:
• Rapid Setup and Sound Check ("We try to get our sound right on stage. It's very hard for bands because we get really limited time to do our sound check. It's like performance time of twenty to thirty minutes with a setup time of ten minutes. In ten minutes you can't do a lot of things, so we try to make the most of it right, like during rehearsals we'll make our own tricks, to make ourselves sound better")
• Audience Development ("No one streams our music, so our chance of getting, of building, an audience is to capitalize on our live show")
• Event Management ("Organizing committees have to start from ground zero. They're just students and they don't know how it works. They just do their own research and start from the ground up. They'll have to do everything within three to four months")
• Sponsorship and Branding ("You get to play instantly for a few hundred to even thousands of people sometimes. From a brand's perspective, investing in a college fest is going to help them reach the young audience directly and position itself as a 'cool' brand that's associated with the youth. A few popular brands that are regularly associated with college fests are Pepsi, Coke (Coke Studio), Red Bull (Red Bull tour bus), One Plus, VH1, Vodafone, Monster, etc. They keep investing on college fests year after year and that shows they are able to achieve the numbers")
Despite the small and defined nature of the college rock festival, the opportunity exists to perform to larger crowds. An informant noted that one can play instantly for hundreds and even thousands of people and earn reasonable money; he observed that the rewards can be as high as ₹2.5 lakhs (₹ is the symbol for Indian Rupees; "lakh" = 100,000; thus ₹250,000 is equal to approximately US$3,500).
I have been playing at college fests across the country right from when I was an engineering student, for about eleven years now. College fests are a great platform for young and upcoming bands to showcase their music, and performing for a larger crowd definitely helps you shape up your performance and helps you grow as a performer. Playing as a band from college is like a starting point for many full-time music professionals like me.
Even if (as most of them must) they fail to win, the community around college rock festivals is supportive. One of the informants noted: In Andhra, although we didn't win because it was a rock competition and our set was electronic, the way they accepted and encouraged us goes to show how open they are to even supporting independent acts like us.
New relationships can create a ripple of new opportunities, awaken new interests that can spark a renegotiation of enterprise, and provide an experience that opens our eyes to a new way of looking at the world. For example, one of our informants explained, "I met a few interesting people that really changed the way I was thinking." Another informant commented on how the experiences changed their songwriting, "Yes, it helps us understand genre so much more, and also understanding the point of connection between the audience and the music can help us in our music production to produce such moments in our song." Exposure to other CoPs allows members to bring that experience back into their own communities, thus changing the way their community defines competence and deepening their own experience. One informant commented generally on bands playing at the festivals: "When they go for these competitions, they sort of analyze their performance and the others and find out their weak spots, their strengths and for the next fest, work on things that are lacking." Another successful young artist recalled, "We learn from our mistakes, like every time we make a mistake on stage or offstage, we learn something like that's kind of made us better." Boundary work acknowledges that CoPs are situated within a wider social system. Being at the boundaries of our communities involves flirting with mystery and can encourage members to extend themselves beyond their own competence. They can be sources of opportunities as well as potential difficulties. For example, if the competencies of the core (old) and the boundary (unknown) match or are too close, there can be a lack of learning, and if the distance between core and boundary practices is too great, that is, if the difference between competence and experience is too disconnected, learning is also unlikely to occur. College festivals provide bands with new knowledge and perspectives, which can spur their own CoP in new directions, helping them to become less insular, defensive, and closed. One informant describes a new skill he learned and deemed to be significant: I think first of all you should be very open to ideas from others. I think that is a skill that is lacking in India in general, we have an attitude that keeps coming through a lot, you're not open to critics which I think is a skill to have. To be open to critics. You are making your own music, but only an outsider can tell you whether it's good or not, you can think it's good but at the end of the day your crowd speaks to you and if your crowd think this is right, this is wrong, if you don't take the wrongs then you'll never get everything right. So that is a very important skill to have.
Building networks is an important feature of Imagining in CoPs. Relying solely on close ties developed through Engagement in CoPs limits access to new resources, new knowledge, new perspectives, and potential opportunities. However, networking does present risks around building trust, as individuals and collectives do not know what lies beyond their boundaries or within indirect or weak ties (Granovetter 1978). For our informants, reaching out beyond their own CoPs was a significant aspect of their learning: The organizing committees, they are engineers, or arts students, or science students, just normal college-going people, whatever help they can get, they will. Like they'll ask their fellow musicians in their own cities and they'll get some help as to how to approach all these bands or acts, and they'll get it done, even if they don't know how it all works. Aspiring musicians taking part in the aligning work of the college rock festival learned to network, sometimes making long-term friends and collaborators. Another of our informants involved in event organization noted that, "The college festival is quite useful in terms of networking and just building awareness about different possibilities. It sure worked for someone like me because I come from a family where no one is into music." While bands form their own CoP, engagement with others upsets the "safety net" of the group and pushes community members forward. By encountering and considering new ideas, new ways of doing things, and new modes of practice, community members will either improve their own practice, or, by discarding irrelevant information, better understand why they do things in a certain way.
The benefits of accessing knowledge and experiences outside of their own CoP gives rise to a collective competence that begins to align with industry standards and expectations. By exposing their individual CoPs to an intense, competitive environment, by watching other bands, and by interacting with them, students generate learning about genres, the realities of live performance, and different ways of songwriting. This often gives them the impetus to try something new. By overlapping with other CoPs, they push themselves forward.
Aligning: Moving Into the Wider Music Industry
If imagining at college rock festivals allows performers to reach beyond their community boundaries, alignment within the wider music industry grounds community members and ensures that learning is effective. It aligns local activities with other processes so that they can be effective beyond their own engagement. Alignment is about CoPs connecting their efforts with the broader social system in which the music industry operates. For example, there is a focus and direction in Indian college rock bands to realize higher goals, perform more regularly, play bigger gigs, and the like, thus raising the stakes for participation and accountability. The Indian music industry with its broad systems of styles and discourses is accessible through the coordination and alignment of CoP action.
College festivals, no matter the size and scope, no matter how prestigious, are ultimately run by students. They are not taught how to stage and manage these events, but rather, they learn through a need to know. As a music business CoP, these students develop their practice over time and gradually move to align their products, tools, resources, processes, and procedures to industry standards. One of our informants who had participated in seventeen college festivals in 2017 alone observed that: Festivals like IIT Madras, Bangalore, and Bombay, these reputed universities they are reputed for a reason, because everything goes according to plan. Everything is on time. A lot of sponsors want to put in money and fund them.
By coordinating competencies and perspectives, alignment expands the scope of the community's effects on the world and gives their energy some focus and direction. A CoP can exploit this focus and direction to create unique artifacts, and to give the community a sense of what is possible and how it might realize higher goals. The college festival circuit can be lucrative, and through our interviews it was apparent that most bands were strategic about how they spent their prize money to become even more competitive within the broader industry. One opportunistic musician explained how the band redirected its practices, efforts, and energies: So after we won everything in the first year, we made a whole bunch of, well we made sufficient money. We had a lot of money because we used to save up. So we saved up like ₹2 lakhs [₹200,000/US$2,850] and then we were like, what's the next move? How are we going to progress from here? At that time in 2011 there weren't bands which were bringing out EPs and recording their material. It's like very rare, like hardly…actually…no independent band did it, so we were like one of the first bands to even come up with this concept.
In doing alignment work, CoPs engage in activities that have consequences beyond their boundaries. In this way, members learn what it takes to become effective in the world (Wenger 1998, 274). To be effective, a learning community becomes self-conscious about appropriating the styles and discourses of what Wenger describes as "constellations" of communities of practice. This type of alignment learning is described below by a professional musician who demonstrated a nuanced approach to professionalism and industry standards: Performing for college audiences required me to be a bit formal, but it definitely helped me on my stagecraft: how I dress, the way I communicate with the audience, and the kind of material I presented. I took these learning experiences and applied them to performances outside the college environment which immensely helped me out. I guess what came out of the college experience was learning to present myself as an artist, and performing in outside venues gave me the experience needed to realize the lessons.
Another informant described performing in a festival as, "a real learning experience as to how one should present himself or herself." The informant also went on to explain the importance of "gaining contacts in the industry which helps us in entering the scene more easily." Wenger emphasizes the importance of generational encounters, that is, "the mutual negotiation of identities invested in different historical moments" (1998, 275). If "old-timers" (experienced musicians and industry) and "newcomers" (inexperienced musicians and industry) are engaged solely in their own separate practices, then this is a learning opportunity missed. Unfortunately, such segregation is typical of the modern youths' lived experience. One of our successful, and young, informants described the rate of learning he has experienced as a result of the "generational encounters" college rock festivals afforded him: We won ₹40,000 (US$580) at a Loyola college competition in a single night. We started to get a few gigs after that. Eventually people started taking notice and we landed our first festival gig, out of college. People in my college recommended us to this promoter who hired us for a music festival in Tamil Nadu. It was pretty cool. Then last year, I released a song and won the Best Young Indie Award, hosted by Radio City Freedom. They flew me down and accommodated me. I was still in college then and I was like "woah". And then I went to Bombay. I got to meet a lot of famous award-winning musicians, and I got to share my experiences with them. That's where I met this guy from Bangalore who became my manager. His band won best metal act. For the last six months I've been playing shows I wouldn't have dreamt of. Everything is happening so fast. Like, this weekend we're playing in Hyderabad, as a support act for [Australian guitarist] Plini. He's one of my favorite guitarists and he's played everywhere. As a result, people started following me and taking notice.
Alignment requires generational encounters, a mixing of the experienced and the novice. However, the advantages are not one way. A fresh youthful energy and approach can push histories and practice forward. Alignment recognizes that CoPs cannot exist in isolation, but that, "They must use the world around them as a learning resource, and be a learning resource for the world" (275). One young informant discussed key learnings of bad practice that exist in the Indian Western-influenced rock industry, particularly around young musicians agreeing to work for free. This learning can be redirected in CoPs aligning their efforts towards an agreed standard of industry engagement: I didn't know the scene and the people took me for granted because I was interested and not seeing money as a first thing. But to artists who are getting into full-time music they should know that money is also important, how equally they want to take their passion to the next level, money is also important and everywhere there is money, it's up to you to take it or not OR it's up to you to ask or not. If you feel it won't be good to ask for money because he's giving an opportunity here, will it be ok if I go and ask the next time? So if people think this way then that's hard. Maybe it will take some time to change this.
Another more experienced informant discussed how independent artists can do the alignment work of overlapping practices, in this case law and music, to build a more sustainable industry infrastructure: There is talk about forming a musicians association, but there are several problems with that. There is far more supply of artists in Delhi than there is demand, which means that belonging to an association may cause problems. Once [a venue owner] finds that this artist belongs to the association, [he] moves onto another artist who does not belong to the association. Thus [he's] avoiding getting tied down by legalities, or even a community moving against you in case [he] defaults. Apart from these issues, there is definitely a way that you can form an association, backed by a few pro bono lawyers, who may be musicians themselves…singer/songwriters who have nurtured their talent although in law school. And so they are ripe for such a bond to be formed with other musicians to come together.
In this sense, CoPs have the power to align and direct their learning for change. Wenger explains this as a kind of "allegiance to a creed, or a movement" where the commitments that unite them often have little to do with personal commonalities or differences (1998, 182).
At the beginning of this discussion, the three modes of CoPs were highlighted as not operating in a linear progression; rather, a community will quite often move back and forth between the modes. In this way, learning in CoPs is most effective because it reflects a way of living in the world. Engaging is necessary for building a joint enterprise and shared vision, Imagining shakes CoPs up and keeps them moving, and Aligning ensures that the imagining is grounded and effective.
Conclusion
Fundamentally, educational opportunities in Indian college rock festivals are a form of that buzzword in modern university education: real-world learning. They are foundation stones of the popular music industry in India. Western music colleges mount performances, hold concert practices, teach performance or recording skills courses, and lecture on the music industry. Presumably, students learn from it. However, by engaging aspiring musicians in communities of practice, non-music Indian colleges prepare these learning musicians for the realities of the music industry. By being forced to do it themselves (by networking, by performing, by losing competitions, by going back to the drawing board to get better), students learn by doing. Perhaps they have not had the opportunity to hone their technique to the same standard as Western music college graduates. Perhaps they haven't been able to learn about rock history. Perhaps their equipment is not as up-to-date and their instruments as beautifully made and maintained. Perhaps they haven't had the opportunity to be in an aesthetically beautiful and cutting-edge recording studio. However, they are learning about performing music, pleasing an audience, and working in, and engaging with, their local industry. Western music colleges devote a great deal of effort to creating real-world and industry-facing learning experiences. The Indian college music festival movement has achieved similar, and potentially better, results by empowering and encouraging communities of practice to engage with each other, giving them time to come together, imagine the possibilities, and align themselves with industry standards. It was also apparent in the data that their learning not only has the power to align with the industry but also to contribute to its ongoing development for the better.
This does not mean that music colleges in the West should throw in the towel. There are some things that we do very well. However, we should take every opportunity to improve our educational methods, create better outcomes for our students, and prepare them for an increasingly competitive market. There is much we can learn from the example of the Indian college rock festivals. In many ways these festivals align more closely to an andragogical educational model than Western music conservatories sometimes employ.
The third of Knowles' (1973) adult learning principles, for example, states that adults learn by doing. This describes precisely the approach of the Indian college rock festivals. Everyone (the organizers, the techs, the promoters, the musicians) is doing this and learning how to do it at the same time. The sixth of Knowles' adult learning principles states that adults learn best in an informal situation. Learning within the college-based CoPs is entirely informal, with no classes and no curriculum, just musicians motivated to learn. Paulo Freire (1970) argues that adults learn by generating knowledge rather than through a banking model of education, in which students wait for professors to drop wisdom into their empty vessels. In the spirit of Freire, learning in CoPs is emergent and acknowledges one's own experience and interests as resources for community learning, therefore potentially avoiding a didactic, colonizing education embedded in a political agenda. This model of CoP learning is liberating, and it is precisely how Indian college rock festivals operate. | 2019-11-22T00:45:29.821Z | 2019-03-23T00:00:00.000 | {
"year": 2019,
"sha1": "6aa0608c070154b802deffa145fd2d11715879d7",
"oa_license": null,
"oa_url": "http://www.meiea.org/resources/Journal/Vol.19/MEIEA_Journal_2019_Kelman_Cashman.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e4d19e338891360d446b4c8fdf901d4e3624f0f3",
"s2fieldsofstudy": [
"Education",
"Art"
],
"extfieldsofstudy": [
"Sociology"
]
} |
263640999 | pes2o/s2orc | v3-fos-license | Clinical observation of dexmedetomidine nasal spray in the treatment of sleep disorders on the first night after undergoing maxillofacial surgery: a single-center double-blind randomized controlled study
Purpose: Dexmedetomidine exerts a sedative effect by promoting the sleep pathway endogenously and producing a state similar to N2 sleep. This study aimed to study the efficacy and safety of dexmedetomidine nasal spray in the treatment of postoperative sleep disturbance. Methods: This study enrolled 120 participants [men and women; age, 18–40 years; American Society of Anesthesiologists grade, I or II] who underwent maxillofacial surgery under general anesthesia through nasotracheal intubation. The participants were randomly divided into three groups: blank control group (BC group), 1.0 μg/kg dexmedetomidine group (1.0 Dex group), and 1.5 μg/kg dexmedetomidine group (1.5 Dex group), with 40 patients allocated to each group. At 21:30 on the night after the operation, the intervention groups were administered their corresponding doses of dexmedetomidine nasal spray. The Pittsburgh Sleep Quality Index (PSQI) scale was used to evaluate the baseline sleep status of participants 1 month preoperatively and on the night after the operation. Polysomnography (PSG) was used to record the sleep status on the night after the operation. We recorded the rescue times of sedative and analgesic drugs on the first night after surgery, adverse reactions, total hospital stay duration, and total costs. Results: Compared with patients in the BC group, those in 1.0 Dex and 1.5 Dex groups had longer N2 sleep duration, were awake for a shorter time after dose administration, woke up less often, and had significantly improved sleep efficiency (p < 0.05). Compared with the BC group, the PSQI scores of 1.0 Dex and 1.5 Dex groups were significantly lower on the night after operation, and the proportion of PSQI > 5 was significantly lower (p < 0.05). Compared with patients in the BC group and the 1.0 Dex group, those in the 1.5 Dex group had significantly prolonged N3 sleep, reduced frequency of requiring sufentanil rescue, lower incidence of sore throat after surgery, and shorter average length of hospital stay (all, p < 0.05). Conclusion: The sleep quality of participants on the night after having undergone maxillofacial surgery was safely and effectively improved by 1.0–1.5 μg/kg dexmedetomidine atomized nasal sprays. Notably, only the latter could prolong N3 sleep. Level of Evidence II: Evidence was obtained from at least one properly designed randomized controlled trial.
Introduction
Postoperative sleep disturbance (POSD) refers to the changes in sleep structure and quality of patients in the early postoperative period. POSD is mainly characterized by decreased rapid eye movement (REM) sleep, increased wake time, and fragmented sleep [1]. During hospitalization, many factors can affect patients' sleep after the operation, such as anxiety, tension, pain, postoperative weakness, medical ward rounds, and noise. Patients' sleep was disturbed the most on the first night after the operation [2]. POSD can affect patients' postoperative recovery and adversely affect cognition, mood, memory, pain perception, psychomotor function, and metabolic, inflammatory, and immune markers [3]. Improving the sleep quality of hospitalized patients can increase patient comfort and improve surgical outcomes [4]. Dexmedetomidine is a selective alpha-2 adrenergic receptor agonist with sedative, analgesic, and anxiolytic effects [5]. It can effectively alleviate postoperative pain and anxiety and improve the postoperative sleep quality of patients [6]. Dexmedetomidine administration via a nasal spray is simple and convenient and does not irritate the nasal mucosa; furthermore, it is ideal owing to its higher bioavailability [7]. This mode of administration avoids the pain and inconvenience associated with venipuncture and intramuscular injection. It has a high degree of patient acceptance and is currently the most commonly used delivery method for this drug clinically [8]. However, there are few clinical studies on the intranasal administration of dexmedetomidine for the treatment of POSD in patients having undergone maxillofacial surgery. The appropriate dose of dexmedetomidine nasal spray for the treatment of sleep disorders requires further validation in clinical trials.
In this double-blind randomized controlled study, different high doses of dexmedetomidine were administered nasally to patients having undergone maxillofacial surgery to compare their effects on the patients' sleep on the first night after surgery.
Research hypothesis
Dexmedetomidine nasal spray is safe and effective for alleviating postoperative sleep disturbance in patients undergoing maxillofacial surgery.
Study participants
This study was a single-center double-blind randomized controlled study. The protocol for this trial was approved by the Hospital Ethics Committee of Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (Z2020185). The trial was registered with the China Clinical Trial Registration Center before patient recruitment (ChiCTR2100041597, Principal investigator: YW, Date of registration: 1 January 2021). The trial was conducted at Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College in Beijing, China. All participants were informed of the purpose of this study and provided signed informed consent.
Participants undergoing maxillofacial surgery under general anesthesia and endotracheal intubation at the Hospital between 2 January 2021 and 27 January 2022 were eligible for this trial. In this study, we included patients who were men or women aged 18-40 years; had an American Society of Anesthesiologists (ASA) physical status of I or II; entered the operating room by 8:00 a.m.; had their endotracheal tube removed within 2 h of the operation; were not given patient-controlled analgesia (PCA); and stayed in the post-anesthesia care unit (PACU) on the first night after the operation. We excluded participants who had a history of other systemic diseases, such as congenital heart disease, hypertension, and epilepsy; who had obstructive sleep apnea-hypopnea syndrome (OSAHS) or depression, or were taking sedative and analgesic drugs; who had cysts, tumors, or polyps in their respiratory tract; who had a history of an upper respiratory tract infection in the past 2 weeks; who could not cooperate because of hearing or speech impairment or both; or who refused to enroll.
Randomization and blinding
In this study, we included 120 participants undergoing maxillofacial plastic surgery. The participants were randomized using a computer-generated random number table and sealed envelopes and were assigned to three groups in a ratio of 1:1:1: the blank control group (BC group, n = 40), the 1.0 µg/kg dexmedetomidine group (1.0 Dex group, n = 40), and the 1.5 µg/kg dexmedetomidine group (1.5 Dex group, n = 40). Treatment allocation was concealed from patients but not from anesthesiologists. The investigators who performed the intraoperative evaluations and postoperative follow-ups, as well as the participants, were blinded to the treatment allocation.
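The 1:1:1 allocation described above can be illustrated with a short sketch. This is a minimal, hypothetical Python example that simply shuffles a pre-specified list of group labels (the study itself used a computer-generated random number table); the group labels, fixed seed, and function name are our own assumptions and not part of the study protocol.

```python
import random

def make_allocation_list(n_per_group=40, seed=2021):
    """Generate a shuffled 1:1:1 allocation list for the three study arms."""
    groups = (["BC"] * n_per_group
              + ["1.0 Dex"] * n_per_group
              + ["1.5 Dex"] * n_per_group)
    rng = random.Random(seed)   # fixed seed only so this illustration is reproducible
    rng.shuffle(groups)         # one shuffled entry per sealed envelope
    return groups

allocation = make_allocation_list()
print(len(allocation), allocation[:6])  # 120 assignments; first six envelopes
```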
Anesthesia procedure
After 8 h of fasting, we monitored the electrocardiogram (ECG) findings, oxygen saturation (SpO2), heart rate (HR), and blood pressure (BP) of the patients. After establishing intravenous access, 0.05 mg/kg midazolam and 0.2 μg/kg sufentanil were administered. Once the patient was sedated, mask ventilation was started, and 10 mg of ephedrine was used to treat the selected nostril; ephedrine constricts the blood vessels of the nasal mucosa and reduces bleeding during intubation. Thereafter, 2.0 mg/kg propofol and 0.6 mg/kg rocuronium were intravenously injected, followed by continuous mask ventilation for 2 min. Nasotracheal intubation was started after the mandible relaxed. Then, the patients were connected to the anesthesia machine for intermittent positive-pressure ventilation. Anesthesia was maintained with 7 mg/kg/h propofol or 1-2% sevoflurane and 2 µg/kg/min remifentanil, with a tidal volume (VT) of 8-10 mL/kg, a respiratory rate of 12-15 breaths/min, a flow rate of 2.5 L/min, and an O2:air ratio of 1.0:1.5 L/min. Intraoperative controlled BP was applied to reduce blood loss.
Surgical classification
Single surgery was defined as undergoing one of the following procedures: mandibular angle and masseter resection, maxillary Lefort I osteotomy, or mandibular sagittal split osteotomy. Multiple surgeries were defined as undergoing two or more of the following procedures: maxillary Lefort I osteotomy, mandibular sagittal split osteotomy, mandibular angle/zygomatic/chin osteotomy, and masseter resection.
Procedures
Patients in the BC group were not administered any nasal spray. On the night after the operation, the 1.0 Dex group was given a 1.0 μg/kg dexmedetomidine nasal spray (100 μg/mL, Jiangsu Hengrui Medicine Co., Ltd.; lot number: 210309BP). Similarly, the 1.5 Dex group was given a 1.5 μg/kg dexmedetomidine nasal spray. First, the anesthesiologist prepared the dexmedetomidine dose based on patient weight using the oral and nasal aerosol device (2 mL*42 mm, Anhui Discovery Medical Device Technology Co., Ltd., China; Figure 1) and used the dexmedetomidine stock solution without dilution. The nurse on duty in the PACU administered the drug at 21:30 on the night of the operation by alternately spraying a small amount of the drug into the left and right nostrils to reduce swallowing. The nurse was not aware of the patient grouping.
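As a worked example of the weight-based dosing above, the sketch below computes the total dose and spray volume from the 100 μg/mL undiluted stock; the 60 kg body weight and the function name are illustrative assumptions only, not values taken from the study data.

```python
def intranasal_dex_volume(weight_kg, dose_ug_per_kg, concentration_ug_per_ml=100.0):
    """Return (total dose in ug, spray volume in mL) for the undiluted stock solution."""
    dose_ug = weight_kg * dose_ug_per_kg
    return dose_ug, dose_ug / concentration_ug_per_ml

for dose in (1.0, 1.5):
    total_ug, volume_ml = intranasal_dex_volume(weight_kg=60, dose_ug_per_kg=dose)
    print(f"{dose} ug/kg for a 60 kg patient: {total_ug:.0f} ug = {volume_ml:.2f} mL")
# 1.0 ug/kg -> 60 ug (0.60 mL); 1.5 ug/kg -> 90 ug (0.90 mL)
```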
We used polysomnography (PSG; Alice PDx, Philips Respironics Inc., Murrysville, PA, United States; Figure 2) to monitor the patients' sleep from 21:30 to 7:00 the next day, and electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG), and SpO2 findings were recorded. During the night, if the VAS pain score was 4-6 points, the patient was given oral oxycodone/paracetamol (5 mg). If the VAS pain score was 7-10 points, 0.5 μg/kg sufentanil was administered intravenously. If the patient still could not fall asleep after 0:00, 0.02 mg/kg midazolam was administered intravenously. In the event of respiratory depression, the patient was woken immediately and given oxygen with mask ventilation.
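The night-time rescue rules above follow a simple decision order, sketched below; the thresholds are those stated in the protocol, while the function and variable names are illustrative.

```python
def rescue_action(vas_pain, asleep, after_midnight):
    """Return the rescue measure implied by the protocol for a given night-time assessment."""
    if vas_pain >= 7:
        return "intravenous sufentanil 0.5 ug/kg"
    if 4 <= vas_pain <= 6:
        return "oral oxycodone/paracetamol 5 mg"
    if not asleep and after_midnight:
        return "intravenous midazolam 0.02 mg/kg"
    return "no rescue medication"

print(rescue_action(vas_pain=5, asleep=False, after_midnight=False))  # analgesic rescue
print(rescue_action(vas_pain=2, asleep=False, after_midnight=True))   # hypnotic rescue
```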
Before premedication, all participants were asked to answer the Pittsburgh Sleep Quality Index (PSQI) questionnaire, which was used to assess their sleep over the last month (baseline) and on the first night after surgery. All participants stayed in the same type of one- to two-person room in the PACU and experienced the same sleeping environment. All questionnaires were scored by the same anesthesiologist, who had 5 years of experience.
The PSQI questionnaire has seven components (18 items): A, sleep quality; B, sleep latency; C, sleep duration; D, sleep efficiency; E, sleep disorders; F, use of sleep medication; and G, daytime dysfunction.
Each component is scored separately and weighted equally on a scale of 0-3. Total scores therefore range from 0 to 21, with higher scores indicating poorer sleep quality and a score of >5 indicating the presence of a sleep disorder [9]. For questions 5-14 (which cover the various reasons that could keep patients from falling asleep) and 16-18 (which cover the frequency of trouble staying awake while driving, eating meals, or engaging in social activity, and problems keeping up enough enthusiasm to get things done), patients selected one of the following options: none, <1 time/week, 1-2 times/week, or ≥3 times/week. The sleep assessment on the night after the operation was rated as none, light, medium, or severe. Item 14 pertained to any specific cause of sleep disturbance not covered in items 5-13.
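A minimal sketch of the global scoring described above is given below; the component scores are hypothetical, and the full item-to-component mapping of the PSQI is not reproduced here.

```python
def psqi_total(component_scores):
    """Sum the seven PSQI component scores (each 0-3) into a 0-21 global score."""
    assert len(component_scores) == 7 and all(0 <= s <= 3 for s in component_scores)
    return sum(component_scores)

example = {"A": 2, "B": 1, "C": 2, "D": 1, "E": 1, "F": 0, "G": 1}   # hypothetical patient
total = psqi_total(list(example.values()))
print(total, "sleep disorder" if total > 5 else "no sleep disorder")  # 8 -> sleep disorder
```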
Primary parameters
The PSG report covered the following aspects: sleep stage, frequency and duration of awakenings from sleep, sleep efficiency, SpO2, and HR. The PSQI scores were also primary parameters.
Secondary parameters
The following adverse reactions experienced by participants on the night after the operation were recorded: tachycardia, bradycardia, pain, sore throat, nausea, and vomiting. Other secondary parameters included the number of sedative and analgesic drug rescues and the length and total cost of the hospital stay.
Sample size calculation
Following a pre-trial of dexmedetomidine treatment, the total sample size was calculated based on a previous study that compared the effects of dexmedetomidine (continuous infusion at 0.1 μg kg−1 h−1; n = 31) and placebo (n = 30) on the postoperative sleep of elderly patients in the ICU. In that study, dexmedetomidine infusion increased the percentage of stage N2 sleep from a median of 15.8% with placebo to 43.5% with dexmedetomidine (p = 0.048) [10]. We assumed that the dexmedetomidine groups would prolong N2 sleep compared with the blank control group and that the difference would be statistically significant. With α = 0.05, 1−β = 0.8, and a 10% dropout rate, the total sample size calculated using the Power Analysis and Sample Size software (version 11.0; NCSS, Kaysville, Utah, United States) was approximately 120, with 40 cases in each group.
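The calculation below is only an approximate re-derivation in Python, not the PASS v11.0 output; the standardized effect size is an assumed value chosen to yield roughly 40 participants per group under the stated α, power, and dropout allowance.

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

assumed_effect_size = 0.67   # assumed Cohen's d for the N2-sleep difference (not from the trial)
n_evaluable = TTestIndPower().solve_power(effect_size=assumed_effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
n_enrolled = ceil(n_evaluable / 0.9)        # inflate for a 10% dropout rate
print(round(n_evaluable), n_enrolled)       # roughly 36 evaluable -> about 40 enrolled per group
```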
Statistical analysis
All statistical analyses were performed using the Statistical Package for the Social Sciences software (version 26.0; SPSS Inc., Chicago, IL, United States). For the three patient groups, demographic features, namely age, weight, height, and BMI, are presented as means ± standard deviations and ranges, whereas gender, ASA class, and surgical complexity are presented as percentages. PSG sleep staging results are presented as means ± standard deviations. Sleep patterns were assessed using the PSQI questionnaire, and the scores were calculated. Categorical data are presented as frequencies and percentages, and continuous data are presented as means ± standard deviations.
The Shapiro-Wilk test was used to determine whether the data were normally distributed. One-way ANOVA was used to compare normally distributed data among the three groups, with the least significant difference test used for post hoc pairwise comparisons. The Kruskal-Wallis H test was used to compare non-normally distributed data among the three groups, and the χ2 test was used to compare proportions. p < 0.05 was considered statistically significant.
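The decision logic of this analysis plan can be sketched as follows (hypothetical data; the study itself used SPSS 26.0, and the post hoc LSD comparisons are omitted for brevity):

```python
from scipy import stats

def compare_three_groups(bc, dex10, dex15, alpha=0.05):
    """Shapiro-Wilk normality check, then ANOVA (normal) or Kruskal-Wallis (non-normal)."""
    normal = all(stats.shapiro(g)[1] > alpha for g in (bc, dex10, dex15))
    if normal:
        return "one-way ANOVA", stats.f_oneway(bc, dex10, dex15)[1]
    return "Kruskal-Wallis H", stats.kruskal(bc, dex10, dex15)[1]

def compare_proportions(table_2x3):
    """Chi-squared test on a 2 x 3 table (event yes/no by BC / 1.0 Dex / 1.5 Dex)."""
    chi2, p, _, _ = stats.chi2_contingency(table_2x3)
    return chi2, p
```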
Results
A total of 120 participants were initially included and randomly divided into three groups. Three participants were excluded because of electrode displacement or loss to follow-up. The final analysis included data collected from 39 patients in each group (Figure 3). The three groups did not significantly differ in terms of continuous variables (age, height, weight, and BMI) or categorical variables (gender, ASA grade, and surgical complexity; Table 1). The three groups also did not statistically significantly differ in terms of anesthesia time, operation time, or the doses of the major intravenous drugs used during the operation (remifentanil, sufentanil, propofol, and midazolam; Table 2).
Primary parameters
PSG reports
Compared with the BC group, the 1.0 Dex group and the 1.5 Dex group had prolonged N2 sleep (BC, 1.0 Dex, 1.5 Dex: 181.4 ± 72.2, 240.7 ± 89.4, 259.4 ± 71.5 min; Table 3).
PSQI score
The baseline PSQI scores did not statistically significantly differ among the three groups. Compared with baseline, the PSQI scores of all three groups were significantly increased on the night after the operation, and the proportion of patients with PSQI > 5 was significantly increased as well. Compared with the BC group, the PSQI scores of the 1.0 Dex and 1.5 Dex groups were significantly reduced on the night after the operation (BC, 1.0 Dex, 1.5 Dex: 7.8 ± 3.6, 4.9 ± 2.8, 4.1 ± 2.0), and the proportion of patients with PSQI > 5 was also significantly reduced (BC, 1.0 Dex, 1.5 Dex: 71.8%, 25.6%, 25.6%). However, there was no statistically significant difference between the 1.0 Dex and 1.5 Dex groups (Table 4 and Figure 4).
The baseline scores of items A-F did not significantly differ among the groups. For sleep quality (item A), sleep duration (item C), and sleep efficiency (item D), the postoperative scores of all three groups were significantly increased compared with baseline. Postoperative scores of items A, C, and D decreased significantly in the 1.5 Dex group, but there was no statistical difference between the 1.0 Dex and 1.5 Dex groups. For sleep latency (item B), the postoperative scores of the BC group and the 1.0 Dex group were significantly increased compared with baseline, and the 1.5 Dex group had the lowest score. For sleep disorders (item E) and use of sleep medication (item F), the scores on the night after the operation were significantly increased in all three groups compared with baseline; however, there was no statistically significant difference in these scores among the groups. For daytime dysfunction (item G), the 1.5 Dex group had a lower baseline score than the BC group. Compared with baseline, the scores of the BC group and the 1.0 Dex group decreased and that of the 1.5 Dex group increased; however, the scores of the three groups did not statistically significantly differ on the night after the operation (Table 4).
Secondary parameters
In terms of the supplementation of sedative and analgesic drugs on the night after the operation, the three groups did not statistically significantly differ in the administered doses of paracetamol, oxycodone, and midazolam. Sufentanil rescue was needed less often in the 1.5 Dex group (10.3%) than in the BC group (33%) (Table 5).
Regarding the adverse reactions noted on the night after the operation, the incidences of bradycardia, tachycardia, easy or early awakening, surgical wound pain, nightmares, and nausea and vomiting did not significantly differ among the three groups.
The incidence of sore throat was lower in the 1.5 Dex group (10.3%) than in the 1.0 Dex group (33.3%). The total cost of hospitalization did not significantly differ among the three groups; however, the length of hospital stay was significantly shorter in the 1.5 Dex group (5.4 ± 1.6 days) than in the BC group (7.1 ± 2.5 days) (Table 6).
Discussion
Maxillofacial plastic surgery changes the contour of the face by modifying the bone structure of the maxillofacial region. The procedure is long and traumatic and requires general anesthesia and controlled blood pressure reduction to limit blood loss during the operation. Postoperative pressure bandaging is required, and oral secretions are excessive [11,12]. Patients who have undergone maxillofacial plastic surgery are extremely uncomfortable and have high levels of anxiety on the night after the operation; these issues are severely detrimental to their sleep. If left unresolved, they can affect the prognosis and cognitive function of the patients, exacerbate postoperative pain, and even induce cardiovascular events [13]. Dexmedetomidine exerts its hypnotic action through activation of central pre- and postsynaptic α2-receptors in the locus coeruleus, thereby inducing a state of unconsciousness similar to natural sleep, with the unique aspect that patients remain easily rousable and cooperative [14].
In this study, 1.0 and 1.5 μg/kg atomized nasal sprays of dexmedetomidine stock solution were used to treat sleep disturbance on the night after the operation in participants who had undergone maxillofacial surgery. The results showed that dexmedetomidine nasal spray at both doses effectively prolonged N2 sleep (BC, 1.0 Dex, 1.5 Dex: 181.4 ± 72.2, 240.7 ± 89.4, 259.4 ± 71.5 min), shortened the waking time, reduced the number of awakenings from sleep, significantly improved sleep efficiency, reduced the PSQI score, and reduced the incidence of sleep disorders. Notably, the 1.5 μg/kg dexmedetomidine nasal spray also effectively prolonged N3 sleep. In addition, the 1.5 μg/kg dose reduced the number of times sufentanil rescue had to be used postoperatively, as well as the incidence of postoperative sore throat and the length of hospital stay. Wu XH et al. found that dexmedetomidine infusion increased the percentage of stage N2 sleep from 15.8% with placebo to 43.5% with dexmedetomidine; it also prolonged the total sleep time, decreased the percentage of stage N1 sleep, increased sleep efficiency, and improved subjective sleep quality. Dexmedetomidine increased the incidence of hypotension, but without requiring significant intervention [10]. Although the administration methods were different, these results are consistent with our study.
Sleep comprises rapid eye movement (REM) sleep and non-rapid eye movement (NREM) sleep. NREM sleep is subdivided into N1, N2, and N3 sleep, representing progressively deeper stages. In this study, PSG allowed us to objectively analyze sleep staging from the EEG, EMG, and electrooculography findings [15]. Unlike other sedative drugs, dexmedetomidine exerts its sedative effect by promoting an endogenous sleep pathway and producing a state similar to N2 sleep [16]. Xu et al. [17] showed that intravenous infusion of dexmedetomidine (average dose, 104.60 ± 27.93 μg) can induce N2 sleep in a proportion similar to natural sleep. Chamadia et al. [18] confirmed that oral dexmedetomidine solid capsules taken at night promote N2 sleep. The results of the present study revealed that dexmedetomidine nasal spray effectively prolongs N2 sleep, and increasing the dose to 1.5 μg/kg also prolonged N3 sleep.
Dexmedetomidine is convenient to administer via a nasal spray, which supports a rapid onset of action. Intranasal bioavailability has been estimated at 40.6% and 40.7% for atomization and drops, respectively, with the degree and duration of sedation similar for intravenous and intranasal administration [7]. Following intranasal administration, peak plasma concentrations of dexmedetomidine were reached in 38 min and its absolute bioavailability was 65% [19]. Yoo et al. found an intranasal bioavailability of 82% [8]. The intranasal route has an onset of action within 45 min, with a peak effect at 90-100 min. There is no difference in the pharmacokinetic profile between males and females, and both have similar protein binding [14]. Our findings also confirmed that 1.5 μg/kg dexmedetomidine administered via a nasal spray not only improves sleep quality but also reduces the frequency of sufentanil rescue, the incidence of postoperative sore throat, and the length of hospital stay. Dexmedetomidine administered via a nasal spray is a non-invasive, safe, and effective method for the treatment of postoperative sleep disorders with a wide range of applications; for example, it reduces pain and improves sleep quality after nasal endoscopic surgery and works as an effective sedative agent for pediatric examinations [20-22]. A protocol of administering 3 μg/kg dexmedetomidine injection combined with 0.3 mg/kg midazolam nasal drops has been reported to be safe, easy to use, and highly successful in pediatric patients before craniocerebral magnetic resonance imaging examinations [22]. Xu et al. [23] reported that the effective dose of dexmedetomidine nasal spray to induce sleep in 3-6-year-old children was 1.76 μg/kg. The majority of anesthesiologists use dexmedetomidine in pediatrics for premedication, procedural sedation, and in the ICU; the dosage varies widely, from 0.2 to 5 μg/kg for nasal premedication and 0.2 to 8 μg/kg for nasal procedural sedation [24]. In the present study, the use of a 1.0-1.5 μg/kg dexmedetomidine nasal spray on the night after maxillofacial plastic surgery not only improved the postoperative sleep quality of the participants but also maintained an unobstructed airway; the doses were safe and effective. Using an optimized nasal spray method can greatly improve the bioavailability of the test drug in healthy adults [25]. Intranasal administration of 1.0 μg/kg dexmedetomidine is reportedly more effective than buccal administration of the same dose for premedication in children [26]. Intranasal dexmedetomidine is a superior sedative to administer before performing electroencephalograms in children with autism spectrum disorders [27]. Dexmedetomidine effectively induces sleep when administered via a nasal spray, and continuous low-dose intravenous infusion is effective for maintaining sleep [28]. Dexmedetomidine is now being used as part of ERAS protocols to achieve a satisfactory postoperative outcome with reduced opioid consumption in the PACU [29].
This study selected patients who underwent maxillofacial surgery in the morning, so that when dexmedetomidine was administered postoperatively via a nasal spray, more than 4 h had elapsed since withdrawal of anesthesia. After the operation, additional sedative and analgesic drugs were used at the patient's request. A common complication of dexmedetomidine is a slow heart rate. In this study, bradycardia occurred on the night after the operation in 8, 14, and 18 patients in the BC, 1.0 Dex, and 1.5 Dex groups, respectively; however, the HR did not fall below 50 beats/min in any patient, and the decrease was related to the sleep state. Although the HR decreased during sleep, no heart-rate-increasing medication had to be administered.
Limitations
This study did not assess patient prognosis or the longer-term impact of dexmedetomidine nasal spray on postoperative sleep disorders. Inconsistent surgical methods may have different effects on patients' postoperative pain and sleep. The polysomnography recordings did not collect data on respiratory events. This study did not compare the effects of different administration methods, such as continuous intravenous infusion, sublingual administration, and nasal drops, on sleep in patients undergoing maxillofacial surgery. Furthermore, this study did not compare the effect of the atomized dexmedetomidine nasal spray with that of other sedatives.
Conclusion
In conclusion, 1.0-1.5 μg/kg dexmedetomidine administered via a nasal spray on the night after the operation can safely and effectively prolong N2 sleep and shorten the waking time in participants who have undergone maxillofacial plastic surgery. It was also associated with fewer awakenings from sleep, significantly improved sleep efficiency, and a reduced incidence of sleep disorders. Moreover, 1.5 μg/kg dexmedetomidine also prolonged N3 sleep, reduced the number of times sufentanil rescue had to be used postoperatively, and reduced the incidence of postoperative sore throat and the length of hospital stay.
FIGURE 4
Comparison of PSQI scores among the three groups.
TABLE 1
Participant demographics and surgical complexity. p < 0.05 indicates a statistically significant difference. ANOVA was used to compare age, height, weight, and BMI, and the χ2 test was used to compare gender, ASA class, and surgical complexity.
TABLE 2
Comparison of the intraoperative conditions of the three groups of participants (x̄ ± s). The Kruskal-Wallis H test was used to analyze the anesthesia time, operation time, and the main intravenous drugs used during the operation among the three groups.
TABLE 3
Comparison of sleep rhythms in the three groups of participants on the night after the operation (x̄ ± s).
p < 0.05 indicates a statistically significant difference. For non-normally distributed data, the Kruskal-Wallis H test was used for between-group and pairwise comparisons. Bold: p-value < 0.05. a Statistically significant difference compared with the BC group. b Statistically significant difference compared with the 1.0 Dex group.
TABLE 4
Comparison of PSQI scores of three groups.
TABLE 6
The occurrence of adverse reactions in the three groups on the night after the operation [cases (%)], hospitalization time, and total hospitalization expenses. p < 0.05 was considered statistically significant. The Wilcoxon signed-rank test was used to analyze data with a skewed distribution, and the χ2 test was used for proportion analysis. Bold: p-value < 0.05.
a Statistically significant difference compared with the BC group. b Statistically significant difference compared with the 1.0 Dex group.
Phenolic and non-polar fractions of Elaeagnus rhamnoides (L.) A. Nelson extracts as virulence modulators - in vitro study on bacteria, fungi and epithelial cells
Butanol extracts from leaves, twigs and fruits of Elaeagnus rhamnoides (L.) A. Nelson (sea buckthorn, SBT) were fractionated into phenolic and non-polar lipid components. Their chemical composition was analyzed using the Thermo Ultimate 3000RS chromatographic system, equipped with a diode array detector and a corona-charged aerosol detector, and coupled with a quadrupole-time-of-flight (Q-TOF) mass spectrometer. Assuming that an effect on the natural microbiota and host epithelial cells needs to be assessed, regardless of the purpose of using SBT formulations in vivo, the MIC/MBC/MFC of the fractions and reference phytocompounds were screened against 17 species of Gram-positive and Gram-negative bacteria and Candida species. The impact of the fractions (at subMIC) on important in vivo persistence properties of S. aureus and C. albicans strains was evaluated. Tests for adhesion and biofilm formation on an abiotic surface and on surfaces conditioned with fibrinogen, collagen, plasma or artificial saliva showed the inhibitory activity of the fractions. The effects on the adhesion of FITC-labeled staphylococci to fibroblasts (HFF-1) and epithelial cells (Caco-2), and on fungal morphogenesis, indicated that SBT extracts have high anti-virulence potential. Cytotoxicity tests (MTT reduction) on a standard fibroblast cell line showed variable biological safety of the fractions depending on their composition and concentration.
Introduction
The ongoing therapeutic crisis connected with antibiotic resistance has led to an intensive search for alternative ways of fighting infections. Plant-derived extracts and compounds intended for individual use or in combination with classical drugs are being seriously considered and tested worldwide. Their main advantage is usually their equal effectiveness irrespective of the particular drug susceptibility profile of a microorganism and its preferred growth phenotype (sessile or planktonic). No less important is the fact that, due to the complex composition of plant extracts and the different mechanisms of action of single phytocompounds, the risk of microbial resistance developing is relatively low. These positive properties have attracted even more attention over the years, with questions being asked as to what can be offered instead of chemotherapeutics that are ineffective in combating, for example, difficult-to-eradicate biofilm-related infections. This is because there is a widely accepted opinion, supported by clinical observation, that the majority of bacteria and microscopic fungi form highly drug-tolerant/resistant free-floating aggregates and/or biofilms attached to abiotic surfaces (medical devices), or can live on the necrotic surface of host tissues [1-5].
The aim of the study was to assess the antimicrobial/anti-biofilm activity of butanol extracts from the leaves, twigs and fruits of Elaeagnus rhamnoides (L.) A. Nelson (sea buckthorn, SBT), fractionated into phenolic and non-polar components, to find those that are the most active. The main goal was to check whether phenotypic features of Staphylococcus aureus bacteria and Candida albicans yeasts, which determine their success as invasive pathogens, could be possible targets. A number of products of E. rhamnoides (previously known as Hippophae rhamnoides) have been used for centuries as dietary supplements and also in folk medicine against a wide range of diseases. Currently, they are being applied in some divisions of modern medical practice and the cosmetic industry. Due to the lack of standardization of plant phytocompounds, there is usually no common or scientific consensus on the pro-health claims that can fully justify the use of SBT-derived products. Indeed, it seems that extracts from different parts of this plant show diverse and interesting activities in vitro, including antioxidant, anti-inflammatory, antithrombotic, anticancer, antimicrobial, and many other properties. However, the available data concern mainly sea buckthorn fruit-derived products, although other parts of this plant can also prove to be efficient sources of biologically active ingredients [6-9]. In our previously published study, we provided original evidence that SBT-derived extracts of leaves, and even of waste twigs, possessed significant activity affecting important Candida spp. virulence factors and showed synergism with antimycotic drugs [6].
Nonetheless, there is little interest in the wider use of the antimicrobial potential of SBT compounds in medical practice, although there is plenty of evidence of their direct biocidal effect in vitro, ex vivo and in animal models. Within this (microbiological) scope, no ongoing or planned clinical study has been found on the ClinicalTrials.gov portal [10]. In 2018, this website showed that only 5 of 12 registered studies on SBT had been completed, but their data are still unavailable. The phenolic fraction from fruit extract was used in only one trial; in the other cases, unfractionated extracts, mainly from fruits (one from leaves), or intact SBT oil were tested. In general, the goal was to demonstrate the beneficial effects of SBT-containing dietary supplements in patients with type 2 diabetes, notably in reducing obesity, improving eye health, and relieving inflammation of the mucous membranes. New clinical trials, on which there is little information regarding their status or implementation, refer to the topical use of SBT oil in cream formulations for dermatology and gynecology. The latest trial to be announced, with a pronounced immunological inclination, concerns the impact of a single dose of an SBT berry-based proanthocyanidin extract on adult stem cells. A number of different types of stem cells will be tested to examine the effect on cell mobilization and homing after treatment with plant-based extracts. Therefore, there is much to be done regarding the potential use of sea buckthorn products as antimicrobials. The subjects of our study belong to the group of opportunistic bacterial/fungal pathogens well equipped with pathogenicity factors. At individual stages of infection, these factors are successively and successfully expressed, depending on the changing conditions of the host microenvironment, which makes these microorganisms "difficult opponents".
Results and Discussion
One of the main questions asked herein was which of the sea buckthorn (SBT)-derived phenolic and non-polar fractions, finally separated from the butanol extracts, has better antimicrobial activity. Details of fraction preparation and component characterization are shown in the Supplementary Materials. Briefly, chemical analysis of the fractionated SBT extracts from leaves, twigs and fruits indicated significant differences in their main constituents. In the phenolic fraction of the leaf extract, hydrolysable tannins/ellagitannins and triterpenoid saponins dominate; the twig-derived fraction is rich in compounds of the proanthocyanidin/catechin type; and isorhamnetin glycosides dominate in the fruit-derived preparation. The non-polar fraction of the leaf extract contains a high amount of triterpenoid saponins, which are virtually absent (below 5%) in the two other source fractions, which are richest in triterpenoids, including those acylated with phenolic acids (Table 1).
The relative content of individual compound groups in the fractions of SBT leaf and twig extracts, expressed as a percentage of the total peak area (corona charged aerosol detector, CAD), is presented in Tables S1 and S2. Secondary metabolites in these fractions, as the listed compounds corresponding to UHPLC-CAD peaks (with area ≥1% of the total peak area), are given in Tables S3-S6. Data concerning the individual composition of the SBT fruit-derived fractionated extract are not presented here since they have already been published [11]. Considering the demonstrable differences in composition of the SBT fractions, one could expect them to have different antimicrobial activities, usually dependent on the quantitatively dominant group of compounds.
This assumption was supported by the results of our previous study, in which the high antifungal activity of the proanthocyanidin-rich fraction of the SBT twig extract was reported. It almost equalled the effectiveness of the same type of fraction separated from the leaf extract, which is rich in hydrolysable tannins [6].
The present research was to determine whether further fractionation of the extracts changes their activity, i.e. either increases or decreases it.
Direct antimicrobial activity of fractionated SBT-extracts
Starting from the unquestionable health-beneficial effects of SBT-based products as nutrients or dietary supplements, we go further in looking for justification for their use in more specialized areas of medical practice than previously proposed. The search for agents that turn off the production of virulence factors or diminish their expression can bring about a new generation of species-specific anti-virulence drugs. In the case of SBT products, they have applications in the supportive therapy of local skin lesions, mucosal infections and the associated inflammatory symptoms, in wound healing and in other conditions [8,9,11-18], directions that are in part reflected in the topics of ongoing clinical trials [10]. However, the sites of possible action of SBT-derived compounds in the human body are niches richly inhabited by many microorganisms constituting the natural microbiota, which play a significant role in the host immune system [19,20]. Therefore, the influence of the fractionated SBT extracts has been tested on representatives of pathogenic, opportunistic and commensal microorganisms of the following genera: Staphylococcus, Streptococcus, Helicobacter, Bacillus, Escherichia, Proteus, Pseudomonas, Lactobacillus, Candida. Individual species from these genera are present on the skin and the mucous membranes of the oral cavity, colonize the gastrointestinal tract and the urethra, and constitute part of the microbiome of the vaginal mucosa [3]. Our screening experiment showed that the minimal inhibitory concentrations (MICs) of the fractions varied depending on the type (phenolic or non-polar), the origin of the extract (different organs of the plant), and the target microorganism. The non-polar fraction separated from all vegetative parts of SBT had no direct antimicrobial activity over the concentration range tested, the only exception being activity against C. albicans ATCC 10231 at a MIC of 1 mg·mL−1. The phenolic fraction of the SBT fruit extract was also inactive (MICs >1.0 mg·mL−1; data not shown). However, the phenolic fractions obtained from the extracts of leaves and twigs expressed moderate activity to a comparable degree (Table 2). It was noted, however, that their action was stronger against Gram-positive than Gram-negative bacteria, and against most species of the yeasts, resulting from known differences in cell wall structure.
A positive and noteworthy finding is the low sensitivity of the Gram-positive "probiotic" bacterium Lactobacillus acidophilus, which inhabits various ontocenoses of the human body, and of the Gram-negative intestinal bacillus E. coli. On the other hand, Gram-negative Helicobacter pylori, colonizing the stomach mucosa of more than half the population, as well as Proteus vulgaris, present in the microbiome of the gastrointestinal and urinary tracts, are characterized by only an average degree of sensitivity to these products.
The MICs of the reference compounds, matched to their qualitative/quantitative representation in the fractions, against selected bacterial and fungal strains (hereinafter explored in more detail) are presented in Table 3. Only ursolic acid used separately was substantially biostatic; however, additive or hyperadditive synergy of the individual components most frequently occurs when they are combined. The question is whether such a range of activity of fractionated SBT products is to their advantage or disadvantage. In our opinion, this is valuable information that can be used in the future to develop targeted "personalized" therapy. This suggestion does not differ fundamentally from the idea of the differential action of antibiotics and chemotherapeutics, as in the saying "everything does not work for everything". Unfortunately, proposals for scientifically justified, targeted and "personalized" use of the antimicrobial potential of plant products, including SBT-derived products, are not easily introduced, for many reasons. Our data, and those in numerous other reports, refer to different kinds of extracts, which may not always have been sufficiently well characterized. Moreover, different microbiological methods with non-comparable specificity and sensitivity are used for the abovementioned purposes. Hence, inter-laboratory comparison of the results from a given range of studies on their biological activity is difficult, unreliable and not very constructive [15,16,18].
Anti-virulence properties of fractionated SBT-extracts
We have researched possibilities of using phytochemicals other than their direct microbicidal activity. At least three clinical situations associated with infection and inflammation may be considered regarding the topical use of SBT products: above all, chronic wound infection, other skin or oral infections with mixed etiology, and vulvovaginitis caused by bacteria and fungi. In all of them, the participation of the microorganisms we examined, S. aureus and C. albicans, which can form sessile (biofilm) populations, is significant. It is well known that most biofilm-associated infections are connected to a discontinuity in the skin, mucosa and/or the underlying tissues. These are easily invaded due to normal colonization of the given portal by physiological and environmental microbiota. Moreover, damaged tissues are much more susceptible to colonization because of local oxygen deficiency, necrosis, or a lower activity of the vascular endothelial cells and fibroblasts participating in repair. The most significant problem related to the treatment of biofilm infections, irrespective of microbial origin and localization, is their resistance/tolerance to antibiotics. This results not only from the increased number of drug-resistant microorganisms, but also from facilitated gene transfer within the biofilm, as well as its very unique structure and physiology [21-23]. S. aureus can adhere to and invade tissues and host cells, usually with the participation of the surface MSCRAMM (microbial surface components recognizing adhesive matrix molecules) family. Similarly, Candida yeasts possess numerous cell surface structures that help their adhesion to the surface of medical polymers or to tissues "decorated" with ECM molecules [22-26].
Adhesion and biofilm formation
At this stage, we have demonstrated that both types of SBT-derived fractions have in vitro anti-adhesive properties against the S. aureus ATCC 43300 reference strain and the S. aureus H9 clinical isolate (MRSA), as well as against the fungi C. albicans ATCC 10231 (reference) and C. albicans C4 (clinical isolate from a patient's stool). Despite the relatively weak direct biostatic/biocidal activity of the SBT preparations, at 0.5× MIC (0.125, 0.25 or 0.5 mg·mL−1) they strongly inhibited microbial adhesion to an inert surface (by up to 45.9 ± 2.7%, p = 0.0199, and 75.2 ± 3.7%, p = 0.008, for S. aureus and C. albicans, respectively). The use of sub-inhibitory concentrations of these products during in vitro studies is justified by the presence of similar conditions in vivo. From pharmacodynamic/pharmacokinetic analysis, it is clear that in soft tissues (e.g. subcutaneous layers, intestinal and lung mucosa), pathogens or the physiological microbiota might only be exposed to sub-minimal inhibitory concentrations of biocides. Moreover, biofilm-forming microbes are often exposed to sub-lethal doses of antibiotics or disinfectants, since the biofilm structure generates a concentration gradient from the surface to its deeper parts [26].
The effects we observed were highly concentration dependent; at a low concentration (0.1 mg·mL−1), the anti-adhesion activity of the fractions was much lower, or microbial adhesion even increased (especially with the non-polar fractions). Fortunately, these unwanted adhesion-promoting effects were transient and did not decrease the anti-biofilm effectiveness after a longer co-incubation time of 24 h. In general, the anti-biofilm activity of the phenolic fractions obtained from all parts of the plant (at 0.5× MIC, 0.125-0.5 mg·mL−1) was stronger against C. albicans than S. aureus, whereas the opposite tendency was found for the non-polar preparations (Figure 1).
The higher efficiency of the non-polar fraction at 0.1 mg·mL−1 against S. aureus was also seen when we examined adhesion and biofilm formation on surfaces conditioned with ECM proteins/glycoproteins. In the case of the phenolic fractions, the weakest effect under these experimental conditions was noted on the surface coated with fibrinogen (Figure 2), which is unsurprising considering that more than one fibrinogen receptor is present on S. aureus cells, as surface-anchored or secreted receptors. Thus, limiting these interactions has great therapeutic potential, and our results with SBT-derived products fulfill these expectations.
Until now, research on C. albicans adherence has mainly addressed the 3 gene families ALS, HWP, and IFF/HYR encoding at least 25 adhesins of C. albicans with different spectra of ligands.
However, a recent bioinformatics approach identified a plethora of proteins not previously implicated in adhesion, which need experimental confirmation of their significance. Among the known Candida adhesins there are numerous receptors for plasma and ECM proteins and saliva, as well as for ligands found on the surface of host cells [22,23,29-33]. We found that the inhibition of C. albicans biofilm formation by the components of the SBT lipid fractions at low concentration (0.1 mg·mL−1) was poor, especially with respect to surfaces coated with collagen or with plasma.
However, co-incubation of the yeast with the phenolic fractions used at the same low concentration much more strongly inhibited biofilm formation, mainly on surfaces coated with fibrinogen or collagen (Figure 3).
For comparison, the same experiment was done with reference compounds present in the tested fractions. Individual compounds, such as ellagic acid occurring in the leaf phenolic fraction and epicatechin present in the twig-derived phenolic fraction, were less potent than the corresponding fraction type. Their anti-biofilm activity varied, inhibiting biofilm formation in the range of 0-56.8%, depending on both the type of microorganism and the type of proteins/glycoproteins deposited on the surface. A better result was obtained for staphylococci (up to 56.8% biofilm inhibition, p = 0.0004) than for fungi (up to 27.3% inhibition, p = 0.24). In contrast, quercetin (a component of the phenolic fraction of the fruit extract) reduced biofilm formation of S. aureus by 58.8-86.0%. It should be emphasized that in experiments involving biofilm formation on surfaces conditioned with ECM proteins, the SBT fractions were used at a low concentration of 0.1 mg·mL−1; however, the anti-biofilm effect was in most cases significant and could be enhanced by using higher concentrations (0.25 or 0.5 mg·mL−1) of the phytopreparations. These results are not presented here, since such concentrations are rarely achieved in vivo. An excellent set of data on this topic can be found in the reports of Manach et al. [34,35]. It should be noted, however, that the research on the metabolism of phytochemicals after oral intake, and on the concentrations achieved in the blood serum and tissues of internal organs, concerns supplementation mostly with products in their natural forms. Their chemical nature and routes of processing in the gastrointestinal tract determine the parameters of bioavailability and bioaccessibility, which commonly serve as references for predicting bioefficacy. Xiao et al. [36] have published a perspective paper in which edible nanoencapsulation vehicles (ENVs) for oral delivery of phytochemicals were discussed as bioefficacy enhancers. According to this literature review, ENVs influence the transport of phytochemicals across the endothelial layer, enhancing paracellular transport, opening tight junctions, strengthening mucosal adhesion, inhibiting efflux pumps, and inducing lymphatic absorption. Thus, ENVs can efficiently influence bioavailability and also exert an effect on phytochemical metabolism with the participation of the gut microbiota. Therefore, it can be assumed that technological progress in ENV production will soon expand and improve the pharmacological use of phytochemicals.
C. albicans invasive properties -morphological transformation
In the case of dimorphic fungi, interference with morphogenesis, i.e. the transformation of blastospores through filaments (germ tubes) up to true hypha formation, is the most desirable property of a given natural product. Because both morphological forms play a role in C. albicans biofilm development, such products can have therapeutic potential [22,24,29]. In experiments on the influence of SBT products on blastospore morphogenesis, a significant effect was achieved through the use of 0.5× MICs of the products. They reduced blastospore filamentation of C. albicans ATCC 10231, an effect that progressed with the time of co-incubation. The formation of germ tubes after 1 h of contact with SBT products was reduced by 50-65 times compared with control cells incubated in media containing only GT-stimulating factors, i.e. serum (10%). This cell cycle "arrest" effect was maintained for the next hour, with morphogenesis towards hypha formation reduced about 5-8 times. The high germ-tube-blocking activity of SBT was also maintained during the third hour of co-incubation. Among the SBT-treated fungal cells, 16-20% were GT-positive, whereas in the controls this was 46% (Figure 4). The conversion from yeast cells to hyphal growth seems to be one of the most prominent factors contributing to tissue invasion and resistance to phagocytosis. These forms also play a unique role in the process of C. albicans biofilm development by providing stability to the structure of the sessile population [22,23]. Interestingly, the reduction in the ability of C. albicans to form filaments was irreversible, as verified during prolonged co-culture for a total of 24 h, when mycelium formation can be evaluated. The control culture in the optimal medium looked like densely entangled hyphal threads, whereas in C. albicans cultures in the presence of phytocompounds, regardless of their source (leaves, twigs, fruits), the fungi formed aggregates with few pseudohyphae and true hyphae. A representative microscopic image of such a culture is shown in Figure 5.
S. aureus invasive properties -adhesion to monolayers of eukaryotic cells
Considering that staphylococci are etiologic agents of local and systemic infections, their interactions with fibroblasts and intestinal epithelial cells, such as adhesion to a cell monolayer, were examined in the presence of SBT-derived products. This work was preceded, however, by an assessment of the biological safety of the fractionated SBT extracts for the host cells (pro-proliferation activity/cytotoxicity). The results of an MTT test with HFF-1 fibroblasts showed that the fractionated SBT extracts at 0.007-1.0 mg·mL−1 did not reduce living cell numbers compared with control cells.
IC50 values determined 24 h after exposure reached >1.0 mg·mL−1 for the phenolic fractions of the SBT fruit and twig extracts, and 0.865 mg·mL−1 for the leaf extract. The non-polar fractions yielded IC50 values of 0.109, 1.394 and 0.236 mg·mL−1 for the fruit, twig and leaf extracts, respectively. This is encouraging for the future application of the preparations to eukaryotic tissues (e.g. as topically active ointments, lotions or dressings).
Greater understanding is needed of the possibility of diminished bacterial adherence to, and invasion into, eukaryotic cells. An anti-adhesion strategy can therefore potentially be an alternative therapeutic means of overcoming the global threat of the antibiotic resistance of S. aureus. These bacteria possess a number of adhesins allowing the above processes to occur; thus the weakened adhesion to host cells achieved in our experiments is a real success, the more so because this effect occurred at a relatively low concentration of the extracts (0.1 mg·mL−1), which can be achieved in vivo, e.g. by oral intake [34,35]. It is necessary, however, to explain that the anti-adhesion efficiency of the tested fractions was not equal, but depended on the cell type (fibroblasts or intestinal epithelial cells) and the source of the extract. The phenolic fraction of the twig extract had the highest activity in this area, as it decreased the adhesion of bacteria to an HFF-1 fibroblast monolayer by 7.3-9.8% and to a monolayer of Caco-2 intestinal epithelial cells by 19.7-32.4%. Nonetheless, we are convinced that a reduction of microbial adhesion by ~30% is meaningful. The possibility is not excluded that the mechanism involves reducing the efficiency of the sortases (SrtA, SrtB) responsible for the correct expression of surface adhesins. This is an important achievement, since SrtA is now known to be a virulence factor of S. aureus that plays a major role in invasion and infection, whereas there are few reports concerning SrtB inhibitors [19-21,27,28].
Considerations on the antimicrobial activity of fractionated SBT-extracts, in relation to their composition
Analyzing the results of our research, we asked which fraction and/or its main component could be considered the most promising product in terms of therapeutic potential. The phenolic-rich fractions of the fruit, leaf and twig extracts differed significantly: flavonoids, including quercetin, kaempferol and the methylated metabolite of quercetin (isorhamnetin), have a quantitative advantage in the fruit extract, whereas hydrolysable tannins (ellagitannins) and triterpenoid saponins dominate in the leaf extract, and condensed tannins (proanthocyanidins, PACs) dominate in the twig extract (Tables S1-S6).
All of these chemical groups contain compounds that have been extensively studied in vitro regarding their antimicrobial activity. For example, Singh et al. [37] demonstrated that quercetin is a modulator of C. albicans quorum sensing, which stimulates cell apoptosis and decreases fungal enzymatic activity, morphogenesis and biofilm formation. Anti-biofilm activity of quercetin and kaempferol has been reported against various bacterial species, including S. aureus [38,39]. Moreover, quercetin and isorhamnetin have been described as compounds attenuating the virulence of S. aureus by causing down-regulation of the agr system, which consequently decreases the synthesis of hemolysins [40]. Rane et al. [42] reported that cranberry A-type PACs significantly reduced C. albicans adherence to an abiotic surface and biofilm formation. Alshami and Alharbi [43] found that Hibiscus sabdariffa extract, containing flavonoids and cyanidins, inhibits biofilm formation by C. albicans in vitro. Similar anti-yeast effects were described by Luiz et al. [44]. Sea buckthorn seeds contain a substantial amount of proanthocyanidins, but little is known about their antimicrobial activity [45]. From our study it is now known that SBT twig extract is rich in PACs with B-type linkages, and that it influences bacterial and yeast behavior during multiplication, the expression of cell-associated or secreted virulence factors, and all processes connected with biofilm formation.
Triterpenoid saponins, present in the phenolic and non-polar fractions of the SBT leaf extract, are a diverse group of bioactive compounds possessing various activities, including antimicrobial, cell-membrane-perturbing, hemolytic and cytotoxic effects [46]. However, our most interesting results concern the impact of the twig phenolic fraction components, such as the abovementioned PACs, and the components of the lipid fraction of the leaf extract, containing mainly triterpenoids and acylated triterpenoids [11]. In particular, pentacyclic triterpenoids such as oleanolic and ursolic acid are worthy of more attention, as these compounds are constituents of numerous plants, and oleanolic acid is often present in combination with its isomer, ursolic acid. Together they share many pharmacological properties, such as hepatoprotective, anti-inflammatory, antioxidant, and anticancer activities. Oleanolic acid, ursolic acid, α-amyrin, betulinic acid, betulin aldehyde and other related triterpenoids are known to possess antimicrobial activity, which we also found in our explorations. It is important that some of these compounds, besides their direct antibacterial activity, have a synergistic effect in combination with antibiotics against multidrug-resistant pathogens and suppress bacterial virulence. The anti-staphylococcal properties of ursene and oleanene derivatives from Castanea sativa leaf extract reported by Quave et al. [47] are of interest, showing that the extract strongly inhibited S. aureus and a panel of skin commensals. Serial passaging with the extract did not result in acquisition of resistance to the quorum-quenching composition. Ta et al. [5] published a review collecting data on the abovementioned properties of plant secondary metabolites (anti-biofilm, anti-QS). The main finding was the identification of plant phenolics, including benzoates, phenylpropanoids, stilbenes, flavonoids, gallotannins, proanthocyanidins and coumarins, as important inhibitors with both activities.
Compounds with QS inhibition activity can be promising tools to combat bacterial infections, although currently there are no such compounds on the market.
Plant material and chemical analysis of the fractionated SBT-derived extracts
Sea buckthorn (Elaeagnus rhamnoides (L.) A.
S. aureus and C. albicans adhesion and biofilm formation on the abiotic (polystyrene) surface
Suspensions of S. aureus ATCC 43300 (reference, MRSA) and S. aureus H9 (clinical, MRSA) at a density of OD535 = 0.9 (~5 × 10⁷ cells·mL−1) in TSB/0.25% glucose, and of C. albicans ATCC 10231 (reference, FLU-sensitive) and C. albicans C4 (clinical stool isolate, FLU-sensitive) at 1 × 10⁶ cells·mL−1 in RPMI-1640/0.25% glucose, were seeded (100 µL) into the wells of 96-well polystyrene culture microtiter plates (Nunc, Denmark). The fractionated SBT extracts were added (100 µL) at final concentrations of 0.125, 0.25 or 0.5 mg·mL−1 (corresponding to the previously established 0.5× MIC for a given strain). An additional concentration of 0.1 mg·mL−1 was also tested (the reason is explained in the Results section). To measure staphylococcal or yeast adhesion, samples were incubated at 37°C under static conditions for 1 or 2 h, respectively; to measure biofilm formation, the incubation time was prolonged to 24 h. Microbial suspensions in medium (100 µL : 100 µL) and medium alone (200 µL) served as the positive and negative controls, respectively. After incubation, at the indicated time point, non-adherent cells were removed by washing the wells with 200 µL PBS with Ca2+ and Mg2+ (Biowest, USA), and the viability or metabolic activity of the sessile population was measured. In the case of S. aureus, the LIVE/DEAD BacLight Bacterial Viability kit (Molecular Probes, USA) was used. Finally, the fluorescence in the wells was measured at 485ex/535em nm for green SYTO 9 and at 485ex/620em nm for red PI, using a SpectraMax i3 (Molecular Devices).
The results are given as the percentage of adherent cells or biofilm biomass, calculated from the mean fluorescence values ± S.D. of the control wells containing bacteria in medium without SBT (taken as 100%) and of the test wells. For C. albicans, a self-modified "FDA reduction" method was used as previously described [6]. Briefly, 100 µL of FDA (fluorescein diacetate, Sigma, USA) solution (0.2 mg·mL−1 in phosphate buffer, pH 6.8) was added to the wells for 1 h of incubation at 37°C in the dark, and the emitted fluorescence was read at 485ex/520em nm using a SpectraMax i3 (Molecular Devices). The results are given as the percentage of adherent cells or total biomass metabolic activity, calculated from the RFU (relative fluorescence units) values ± S.D. in the test wells compared with the controls (taken as 100%). The experiments were carried out twice in quadruplicate.
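A minimal sketch of how such percentages can be derived from raw readings is given below; the RFU numbers are invented for illustration and are not measured values.

```python
import statistics

def percent_of_control(test_rfu, control_rfu):
    """Express treated-well fluorescence relative to the untreated control mean (100%)."""
    control_mean = statistics.mean(control_rfu)
    relative = [100.0 * v / control_mean for v in test_rfu]
    return statistics.mean(relative), statistics.stdev(relative)

control_wells = [18500, 19200, 18800, 19050]   # untreated biofilm, hypothetical RFU
treated_wells = [9800, 10350, 9600, 10100]     # 0.5x MIC SBT fraction, hypothetical RFU
print(percent_of_control(treated_wells, control_wells))   # ~53% of control biomass +/- S.D.
```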
Steps and conditions of coating: incubation for 18 h at 4°C; removal of the proteins/body fluids; blocking of the test and control (uncoated) wells with 250 µL/well of 2% bovine serum albumin (BSA, Sigma, USA) in PBS for 18 h at +4°C; and washing once with PBS. The subsequent stages of the experiment, regarding the application of the microorganisms, the tested SBT fractions at low concentration (0.1 mg·mL−1), the co-incubation conditions and the method of evaluating and interpreting the results, were the same as described in the previous section. The wells containing only bacterial or fungal suspensions in culture medium (without SBT) were taken as the positive control (100%). The experiments were repeated twice with 6 replicates each.
C. albicans invasive properties -evaluation of morphogenesis potential
To determine the serum-induced filamentation of the fungi, a previously described microscopic method was used. Briefly, C. albicans ATCC 10231 and C. albicans A4 suspensions (8 × 10⁶ blastospores·mL−1) in RPMI-1640 without phenol red, supplemented with 10% (v/v) FBS
A few mechanisms of anti-biofilm action are considered for many natural products: direct biocidal activity, inhibition of the expression of adhesins, and interruption of intercellular communication. All of these suspected mechanisms have been reflected in our study. By reproducing in vitro conditions simulating the real situations that bacteria and fungi can theoretically encounter in vivo during infection, target surfaces for adhesion and biofilm formation conditioned with extracellular matrix (ECM) proteins have also been assessed, besides inert surfaces. To mimic the wound bed or the mucosa of the oral cavity/gastrointestinal tract, the surfaces were coated with fibrinogen, collagen, blood plasma and artificial saliva.
Figure 1. Biofilm formation of Staphylococcus aureus ATCC 43300 and clinical S. aureus H9 (a), and Candida albicans ATCC 10231 and clinical C. albicans A4 (b), on the abiotic (polystyrene) surface in the presence of subMICs of fractionated SBT extracts. LF, GF, OF denote phenolic fractions of leaf-, twig- and fruit-derived extracts, respectively; LL, GL, OL denote non-polar (lipid) fractions of leaf-, twig- and fruit-derived extracts, respectively. The inhibitory effect was analyzed in terms of the metabolic activity of the biofilm mass using the LIVE/DEAD BacLight Bacterial Viability kit (S. aureus) and the "FDA reduction" method (C. albicans). The percentage ± S.D. of biomass activity compared with the untreated control, which was considered as 100%, is presented.
Figure 2. Staphylococcus aureus ATCC 43300 and clinical S. aureus H9 biofilm formation on surfaces conditioned with host-derived proteins/body fluids (fibrinogen, collagen, plasma, saliva) in the presence of subMICs of fractionated SBT extracts. LF, GF, OF denote phenolic fractions of leaf-, twig- and fruit-derived extracts, respectively; LL, GL, OL denote non-polar (lipid) fractions of leaf-, twig- and fruit-derived extracts, respectively. The inhibitory effect was analyzed in terms of the metabolic activity of the biofilm mass using the LIVE/DEAD BacLight Bacterial Viability kit (S. aureus). The percentage ± S.D. of biomass activity compared with the untreated control, which was considered as 100%, is presented. Staphylococci express a broad range of surface proteins involved in their adhesion to ECM, to plasma proteins or directly to host cells. This binding capacity is closely related to their pathogenicity, adherence being a crucial step in biofilm formation and tissue invasion. As targets for staphylococci, fibrinogen, fibronectin and collagen have the greatest significance during the process of infection. Collagen adhesins (CNA) allow bacteria to adhere strongly enough to tissue structures containing the corresponding ligand to resist clearance by the host defense system. The fibrinogen adhesins (the specific FnBPA/FnBPB, ClfA/ClfB, and several others with a wider substrate range) play a role in staphylococcal aggregation or "microcolony" formation, a process slightly different from classic biofilm formation. Examples of infections that may involve staphylococcal aggregates or microcolonies rather than typical biofilms include chronic wound infections, osteomyelitis, soft tissue abscesses and endocarditis. In these cases, interactions with host matrix molecules are particularly important in colonization of the site, eukaryotic cell invasion by endocytosis, and evasion of the immune response [21,27,28].
Figure 3. Candida albicans ATCC 10231 and clinical C. albicans C4 biofilm formation on surfaces conditioned with host-derived proteins/body fluids (fibrinogen, collagen, plasma, saliva) in the presence of subMICs of fractionated SBT extracts. LF, GF, OF denote phenolic fractions of leaf-, twig- and fruit-derived extracts, respectively; LL, GL, OL denote non-polar (lipid) fractions of leaf-, twig- and fruit-derived extracts, respectively. The inhibitory effect was analyzed in terms of the metabolic activity of the biofilm mass by the "FDA reduction" method. The percentage ± S.D. of biomass activity compared with the untreated control, which was considered as 100%, is presented.
Figure 4. Percentage of C. albicans ATCC 10231 morphological forms (B, blastospores; BB, budding blastospores; GT, germ tube positive cells) after 1 (a), 2 (b) and 3 h (c) of exposure to SBT fractions at 0.5× MIC. C. albicans cell morphology was examined by light microscopy (400× magnification) at these time points. The results were expressed as the proportion ± S.D. of each morphotype after SBT treatment, compared to control C. albicans, assessing 500 cells.
Tannins are a heterogeneous group of polyphenolic compounds, naturally present in various plants, that exert several pharmacological effects, including antimicrobial properties. Two different types can be distinguished: hydrolysable tannins (based on gallic acid and/or hexahydroxydiphenic acid, usually as multiple esters with D-glucose), present in the phenolic fraction of the SBT leaf extract; and condensed tannins, also called proanthocyanidins (PACs), abundant in the fraction of the twig extract. Proanthocyanidins are oligomeric or polymeric flavan-3-ols. They are divided into two classes, A-type and B-type, on the basis of the linkage among their monomeric units. Proanthocyanidins extracted from cranberry reduced biofilm formation by S. mutans in vitro and dental caries development in vivo due to the presence of specific bioactive A-type dimers.
Nelson branches were provided by a horticultural farm in Sokółka, Podlaskie Voivodeship, Poland. A voucher specimen (IUNG/HRH/2015/2) has been deposited at the Department of Biochemistry and Crop Quality, Institute of Soil Science and Plant Cultivation - State Research Institute, Pulawy, Poland. The phenolic-rich and low-polarity fractions of the butanol extract from sea buckthorn (SBT) fruit were prepared and analyzed according to Olas et al.; the fractions were evaporated, dissolved in a mixture of tert-butanol and water, and lyophilized. Samples were analyzed using a Thermo Ultimate 3000RS chromatographic system, equipped with a charged aerosol detector (CAD) and a diode array detector (DAD), and coupled with a Bruker Impact II (Bruker Daltonics GmbH, Germany) quadrupole-time-of-flight (Q-TOF) mass spectrometer. UHPLC-ESI-MS analyses were carried out in negative and positive ion mode. Components of the analyzed fractions were identified on the basis of their HRMS and UV spectra, aided by data available in the literature. | 2018-05-19T17:50:18.770Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "033758d9ed4eab67634f4ff6578b788fd986af40",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/23/7/1498/pdf?version=1529560668",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "eaa5b9b9be17bd3a15170f4c5f13249edf69684a",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": []
} |
55719757 | pes2o/s2orc | v3-fos-license | CloudSat-constrained cloud ice water path and cloud top height retrievals from MHS 157 and 183.3 GHz radiances
Ice water path (IWP) and cloud top height (h t) are two of the key variables in determining cloud radiative and thermodynamical properties in climate models. Large uncertainty remains among IWP measurements from satellite sensors, in large part due to the assumptions made for cloud microphysics in these retrievals. In this study, we develop a fast algorithm to retrieve IWP from the 157, 183.3 ± 3 and 190.3 GHz radiances of the Microwave Humidity Sounder (MHS) such that the MHS cloud ice retrieval is consistent with CloudSat IWP measurements. This retrieval is obtained by constraining the empirical forward models between collocated and coincident measurements of CloudSat IWP and MHS cloud-induced radiance depression (T cir) at these channels. The empirical forward model is represented by a look-up table (LUT) of T cir-IWP relationships as a function of h t and the frequency channel. With h t simultaneously retrieved, the IWP is found to be more accurate. The useful range of the MHS IWP retrieval is between 0.5 and 10 kg m −2, and agrees well with CloudSat in terms of the normalized probability density function (PDF).
Introduction
Ice clouds have profound impacts on the global energy budget (Stephens et al., 1990), hydrological cycle (Chahine, 1992), atmospheric structure (Ramaswamy and Ramanathan, 1989) and circulation (Richter and Rasch, 2008).Cloud ice water amount is one of the largest sources of uncertainty in quantifying cloud-climate feedbacks and sensitivities.For example, the mean cloud ice water path (IWP) ranges from 10 to 120 g m −2 in the tropics among a variety of global climate models (GCMs) in the most recent 20th century Coupled Model Intercomparison Project Phase 5 (CMIP5) runs (Li et al., 2012).Accurate cloud IWP measurements are critically needed to guide model developments and reduce model uncertainties.
However, observations of cloud ice have not met the requirements of climate models, showing several-fold IWP differences among various techniques (Wu et al., 2009; Eliasson et al., 2011). Until cross-instrument consistency is achieved, current cloud ice observations will allow too much variation in cloud properties and remain insufficient for constraining the model physics (Waliser et al., 2009; Li et al., 2012). Difficulties in accurate IWP and microphysical measurements arise mainly from remote sensing in the presence of cloud inhomogeneity and from the sensitivity limitations associated with each technique. On one hand, large spatial and temporal variabilities in cloud microphysics make it difficult to compare ground-based measurements with remote sensing observations (Waliser et al., 2009). Hence, statistical representations of cloud microphysics are assumed or parameterized in order to enable satellite remote sensing (e.g., McFarquhar and Heymsfield, 1997). Even for simple optically thin cloud, there are still a great number of uncertainties
in the assumptions made for the IWP retrieval.On the other hand, passive satellite sensors have limited penetration capability to observe thick and dense ice clouds from space.As a result, only partial columns of IWP (pIWP) can be measured by passive sensors, and the column bottom varies with atmospheric absorption, cloud amount, droplet size and phase, and cloud top height.These uncertainties about cloud column create additional errors in the IWP retrieval using passive sensors.
As an active sensor, the CloudSat radar has provided an unprecedented opportunity to measure the ice water content (IWC) profile and its vertical integral (i.e., IWP) globally since 2006. The CloudSat cloud ice retrieval still depends on the cloud microphysics constrained by in situ and ground-based observations (Austin et al., 2009). CloudSat data are confined to a narrow curtain (∼ 1 km width) along the orbital track, and thus are used mostly for climatological and case studies. Like other A-train sun-synchronous satellites, it samples only two local solar times (01:30 and 13:30 LST) of the cloud diurnal cycle. However, CloudSat data still provide the best characterization of the vertical distribution of global cloud ice (Eliasson et al., 2011), and can be used to cross-calibrate other techniques, especially the passive sensors with limited vertical resolution (Wu et al., 2009).
Passive nadir-viewing microwave techniques such as the Advanced Microwave Sounding Unit-B (AMSU-B) and the Microwave Humidity Sounder (MHS) have an advantage over infrared/visible sensors in penetrating deeper into cloud layers to measure IWP. More importantly, MHS has a swath width of ∼ 2300 km to capture synoptic- and mesoscale systems in motion, as well as variabilities not captured by the curtain-only sampling of CloudSat. Instead of slicing a single vertical cross section of a hurricane, the entire cyclonic structure can be mapped out with one MHS orbit. Since 1998, satellites carrying instruments like AMSU-B and MHS have been operational and now fly across the Equator at more than eight local solar times every day, the mosaic of which can be used for cloud diurnal cycle studies. Moreover, at microwave frequencies ice scattering signals are approximately linearly proportional to the cloud ice amount in the path, resulting in a relatively straightforward relationship between IWP and cloud-induced radiance depression (Wu et al., 2009). These advantages make nadir-viewing microwave sensors attractive for monitoring global long-term IWP.
Retrieval of IWP requires radiative transfer models (RTMs) or forward models that relate cloud ice to the measured radiance. The cloud ice models can be formulated either theoretically or empirically. RTMs are also widely used in climate models, but primarily for calculating clear-sky radiative forcing from atmospheric gases, cloud, aerosol and the surface. Although studies demonstrated the use of RTMs for IWP retrievals from AMSU-B/MHS channels, considerable uncertainties exist with RTMs in representing complex physical processes (e.g., land surface radiative fluxes, ice particle shape) and with oversimplified assumptions (e.g., plane-parallel atmosphere and cloud layers, cloud droplet size distribution, etc.). The errors in liquid drop size, surface emission/scattering, cloud layer height, and water vapor amount can all degrade the quality of the retrieved IWP. For example, the current operational IWP retrieval algorithm from the Microwave Surface and Precipitation Products System (MSPPS), which is based upon a two-stream approximated radiative model solution (Zhao and Weng, 2002) at the AMSU-B 89 and 150 GHz channels, was found to underestimate IWP in comparison with other observations (Wu et al., 2009; Waliser et al., 2009; Eliasson et al., 2011; Chen et al., 2011). Contamination of cloud ice retrievals was also found over snowy/icy surfaces (Wu et al., 2009).
While further improvements are still needed for ice scattering calculations in the microwave RTMs, empirical forward models have been used for cloud retrievals (Holl et al., 2010). Empirical approaches establish some ad hoc relationships between cloud ice variables and radiance/reflectivity measurements from the data themselves. Such empirical forward models are developed from a finite ensemble of observations, and are therefore limited to specific conditions, environments and dynamic ranges of the cloud variable of interest. The algorithms are usually fast in the form of a look-up table (LUT) and bypass the complex microphysical calculation in cloudy-sky radiative transfer in individual cases. Empirical methods have also been used in surface remote sensing where land properties are too complicated to be modeled or validated (e.g., Pulliainen and Hallikainen, 2001).
In this paper, we develop an empirical model and retrieval algorithms for IWP using the cloud-induced radiance depression (T cir) from MHS at 157, 183.3 ± 3 and 190.3 GHz. The empirical forward model is obtained by regressing MHS T cir radiances on collocated CloudSat IWP and cloud top height measurements in the tropics. The sequential estimation method is then used to retrieve IWP for all MHS footprints. The instruments and methodology will be described in Sect. 2, followed by the detailed retrieval algorithm in Sect. 3. An evaluation of the retrieved products and the associated errors is given in Sect. 4, followed by the summary in Sect. 5.
Description of data sets and models
The data sets used in this study are Level-1 brightness temperature (T B) from MHS, ice water content (IWC) from CloudSat, and Modern Era Retrospective-Analysis for Research and Applications (MERRA) three-hourly analysis variables on a 1.25° × 1.25° latitude-longitude grid. The two radiative transfer models used in this study are the Joint Center for Satellite Data Assimilation (JCSDA) Community RTM (CRTM) and an ice scattering cloud radiance model (CRM).
MHS T B , IWP and historical issues
MHS is a cross-track scanning radiometer aboard the National Oceanic and Atmospheric Administration (NOAA) satellites 18 and 19 and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Metop-A and Metop-B; it is a slightly improved version of AMSU-B onboard NOAA-16 and NOAA-17. MHS makes 90 footprints (with a beam width of 1.1°) continuously in each cross-track scan, and the outermost scan angle is ±48.95° from nadir. For NOAA-18, the MHS scan and satellite orbital altitude produce a nadir footprint size of 16 km at half-power field of view (FOV) and a swath width of 2200 km. The FOV size and swath vary slightly among satellites due to different orbital altitudes. MHS has five microwave channels, which are 89, 157, 183.3 ± 1, 183.3 ± 3 and 190.3 GHz (for AMSU-B, the second and last channels are 150 and 183.3 ± 7 GHz, respectively). For consistency, these channels are labeled as CH#1-CH#5 hereafter. MHS CH#1, CH#2 and CH#5 are vertically polarized, and the other two are horizontally polarized (for AMSU-B, all five channels are vertically polarized). The designed radiometric noises (NEΔT) for CH#1-CH#5 are 0.22, 0.34, 0.51, 0.40, and 0.46 K, respectively (John et al., 2012). The 89 and 157 GHz are window channels, and those around the 183.3 GHz water vapor absorption line are designed to profile atmospheric water vapor. Under clear-sky conditions, the peak sensitivity of these 183.3 GHz channels occurs in the upper, middle and lower troposphere, respectively. The NOAA-16, -17, -18 and -19 orbits drift slowly with time, while Metop-A and Metop-B are maintained in a sun-synchronous orbit with a fixed Equator passing time (EPT).
For ice particle scattering measurements, the higher-frequency channels (157, 183.3 and 190.3 GHz) work better for IWP retrievals because Mie scattering is proportional to frequency to the fourth power. Scattering-based microwave cloud remote sensing has some unique properties as well as limitations. First, it penetrates deeper into ice clouds than IR and visible techniques for cloud ice measurements, but can become saturated for very optically thick clouds (Seo and Liu, 2006; Arriaga, 2000). In the case of saturation, only a partial cloud ice column (pIWP) can be retrieved. As shown in Seo and Liu (2006), the window channels near 183.3 GHz can penetrate a cloud layer with IWP as large as 10 kg m −2, which covers most of the IWP values observed by CloudSat. However, in the case of graupel or frontal altostratus clouds, saturation may occur (Arriaga, 2000). Saturation is also more prominent in the oblique views than at nadir (where the line-of-sight path is longer).
Secondly, among all MHS/AMSU-B channels, CH#3 is the most sensitive to water vapor because it is adjacent to the 183.3 GHz water vapor absorption line. The absorption from upper-tropospheric water vapor, the so-called "water vapor screening", prevents CH#3 from seeing the surface and clouds in the lower troposphere. To some extent, CH#4 is also subject to water vapor screening; it can observe some ice clouds while remaining little contaminated by the surface, except at dry, high latitudes. In other words, CH#4 can be used to distinguish between surface and clouds in situations where other channels have difficulties, as will be shown in Sect. 2.1.3.
Lastly, microwave radiances are dependent on scan angle at these frequencies. Under clear-sky conditions, the radiance may decrease with scan angle from nadir, as a function of the cosine of the angle, due to the increasing path length along the line of sight (LOS). This is similar to the 6.7 µm IR channel, where the longer LOS path gives a weighting function at a higher altitude, or colder temperature (Soden, 1998). Under cloudy-sky conditions, the radiance scan dependence may vary with cloud inhomogeneity, as cloud size and distribution are often not homogeneous. In addition to the atmosphere-induced scan angle dependence, there are some instrument errors in all five channels that are scan dependent and asymmetric about nadir. These instrumental errors can severely degrade the quality of the retrieved IWP if not properly corrected. For example, there was a radio-frequency interference (RFI) problem in CH#3 and CH#4 of AMSU-B (Atkinson, 2001; Buehler et al., 2005), and gain variations/degradations are found in CH#3-CH#5 of AMSU-B on NOAA-16 and NOAA-17 (John et al., 2013). MHS exhibits smaller scan-dependent biases than AMSU-B, but suspicious behaviors have been reported for CH#3 on NOAA-18, NOAA-19 and Metop-A (John et al., 2013). The MHS instruments on NOAA-18 and Metop-A have so far shown the best overall radiometric calibration for all five channels. Since NOAA-18 has the closest EPT to CloudSat, it is used in this study to develop the cloud ice retrieval constrained by CloudSat. The radiances from CH#3 are not used because they are relatively noisier and provide little information on cloud ice. As in the main weather prediction centers, we use the Advanced TIROS Operational Vertical Sounder (ATOVS) and Advanced Very High Resolution Radiometer (AVHRR) pre-processing package (AAPP, v7) developed by the Numerical Weather Prediction Satellite Application Facilities (NWP SAF) to process the L1B radiance data and obtain the further quality-controlled and calibrated L1C data. In the NOAA-18 MHS L1C data we have not found any systematic instrumental error. Weng et al. (2003) developed an algorithm to retrieve IWP using ice scattering at 89 and 150 GHz, which is known as the NOAA operational IWP product. Their retrieval algorithm yields effective ice particle size and IWP, with cloud top and base temperatures derived from simultaneous AMSU-A channels. A considerable fraction of false cloud detections was found with this method, mostly over icy/snowy surfaces and on elevated topography (Wu et al., 2009). The NOAA IWP has been reported to have significantly lower values compared with radar and IR measurements (Holl et al., 2010; Eliasson et al., 2011). As an extended product, rain rate is derived from the retrieved IWP with an empirical polynomial relationship (Ferraro, 2007). The operational NOAA IWP data, now integrated into the MSPPS on the CLASS website, will also be used in this study for comparisons.
CloudSat IWC
Launched into the A-train in April 2006, CloudSat has a 94 GHz cloud profiling radar (CPR) to provide continuous cloud profiles along its nadir track. A CPR FOV size is 1.3 × 1.7 km. The cloud ice water content (IWC) product from 2B-CWC-RO (R04) is used in this study, which assumes a gamma size distribution of cloud ice particles. The CloudSat IWC retrieval is limited when the temperature is above 0 °C; so is the liquid water content (LWC) retrieval at temperatures below −20 °C. Between 0 and −20 °C, IWC and LWC are retrieved separately and linearly interpolated to the intermediate temperature range (details of the algorithm can be found in Austin et al., 2009). Thus, large uncertainties are expected for this mixed-phase cloud regime, and/or in ice cloud cases with large snow/graupel particles present. The vertical resolution of the IWC profile is 250 m. In our study, we interpolate it vertically to an evenly spaced grid (250 m resolution), and integrate the IWC between the surface and 19 km to compute the total IWP. We also integrate the IWC profile from different bottom heights to represent the pIWPs measured by the MHS channels better. Compared with Holl et al. (2010), who used the CloudSat total column IWP product, our IWC integration approach is more meaningful for comparison with the pIWP seen from the MHS water vapor channels, although the pIWP value is calculated on a profile-by-profile basis. Hereafter, we use IWP as the abbreviation of pIWP in our study to represent the MHS cloud ice column.
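As a rough illustration of this vertical integration step, the following sketch (hypothetical variable names; it assumes the IWC profile is given in g m −3 on a height grid in km) interpolates a profile onto an evenly spaced 250 m grid and sums it between a chosen bottom and top height:

```python
import numpy as np

def partial_iwp(iwc, heights_km, bottom_km=0.0, top_km=19.0):
    """Integrate an IWC profile (g m^-3) over height to a partial IWP (g m^-2).

    Minimal sketch: interpolate onto an evenly spaced 250 m grid, then sum
    the column between the chosen bottom and top heights.
    """
    grid = np.arange(0.0, 19.0 + 0.25, 0.25)                 # km, 250 m spacing
    iwc_grid = np.interp(grid, heights_km, iwc, left=0.0, right=0.0)
    in_column = (grid >= bottom_km) & (grid <= top_km)
    return iwc_grid[in_column].sum() * 250.0                 # g m^-3 * m = g m^-2

# Example: a 2 km thick anvil with IWC = 0.1 g m^-3 between 10 and 12 km
z = np.arange(0.0, 20.0, 0.25)
iwc_profile = np.where((z >= 10.0) & (z <= 12.0), 0.1, 0.0)
print(partial_iwp(iwc_profile, z))                           # ~200 g m^-2
```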
CloudSat IWC has been validated with in situ, ground-based and other satellite IWC measurements (e.g., Austin et al., 2009; Wu et al., 2009; Protat et al., 2009). The uncertainty is claimed to be up to 40 % (Austin et al., 2009), which is much smaller than the divergences among various satellites and models, the latter of which often exceed 100 % (Waliser et al., 2009; Eliasson et al., 2011). In this study, we treat CloudSat IWP as the "truth" and constrain the MHS IWP retrieval to minimize its difference relative to CloudSat. Moreover, since microwaves penetrate much deeper into ice clouds than IR/VIS channels, we expect our CloudSat-constrained algorithm to yield a better retrieval at large IWP values.
Radiative transfer models (RTMs) and computation of T cir
The first step in cloud ice retrieval is to determine the ice cloud-induced brightness temperature T cir from raw radiance measurements (Wu et al., 2009, 2014). In this study, T cir (cloud-induced radiance) is defined as the difference between the measured radiance, T B, and the modeled clear-sky background (also called "cloud-cleared radiance"), T ccr:

T_cir = T_B − T_ccr.     (1)

T cir also serves as a critical variable for cloud detection, since every measurement has an uncertainty that may lead to a false alarm. The T cir error is largely affected by uncertainty in the estimated T ccr. Various methods have been developed to improve the accuracy of T ccr estimation. Generally speaking, T ccr can be obtained using statistical differences between cloudy and clear skies (Wu et al., 2005), or using a radiative transfer model to estimate the clear-sky background from the current atmospheric state. Here we use the second approach, with the best estimate of local atmospheric state variables (e.g., temperature, pressure, water vapor, ozone) and surface conditions (e.g., surface temperature, surface type) from the MERRA three-hourly assimilation data set, interpolated from adjacent grid points and the closest local time. We allow relative humidity to exceed 100 % in computing clear-sky radiation. We also used the MERRA six-hourly finer-grid analysis product and European Re-Analysis Interim (ERA-Interim) data, but no statistically significant difference in the T ccr distribution is found among the results so far in the tropics and subtropics.
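As a trivial but concrete illustration of Eq. (1), the following sketch (hypothetical numbers; in practice the clear-sky background would come from CRTM driven by MERRA fields) computes T cir for the three channels used later in the retrieval:

```python
import numpy as np

def cloud_induced_radiance(t_b, t_ccr):
    """T_cir = T_B - T_ccr; negative values indicate ice-cloud scattering."""
    return np.asarray(t_b) - np.asarray(t_ccr)

# Hypothetical brightness temperatures (K) for CH#2, CH#4 and CH#5
t_b   = np.array([235.0, 248.0, 242.0])    # measured radiances
t_ccr = np.array([270.0, 262.0, 268.0])    # modeled clear-sky background
print(cloud_induced_radiance(t_b, t_ccr))  # [-35. -14. -26.] -> cloudy scene
```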
The JCSDA CRTM v2.0.5 model is employed to calculate T ccr. CRTM is a fast radiative transfer model that uses an advanced doubling-adding method (Liu and Weng, 2006) to compute the radiances and radiance Jacobians at the top of the atmosphere for various instruments, with wavelengths ranging from the visible to the submillimeter. It includes scattering calculations for cloud, aerosol, gas molecules and the surface if specified. As a key backbone of data assimilation (DA) systems, the CRTM has incorporated most space-borne instrument information (e.g., spectral frequency, filter shape, and scan pattern), including AMSU-B and MHS. Therefore, it is also our objective to calibrate our cloud ice retrieval with this widely used CRTM for T ccr estimation so that the IWP outputs can be ready for DA applications.
Figure 1 presents the probability density functions (PDFs) of T cir, T ccr and T B from a month's worth of MHS nadir measurements in the tropics. Warmer T B values are mostly from the clear sky or surface, while colder T B values are cases of ice clouds or snowy/icy surfaces at high elevation. The T B PDFs all have a broad peak with a standard deviation (σ) that is so wide that the empirical 3σ cloud detection method (i.e., T B,peak − 3σ < 0 for cloud detection) used by many previous studies does not work well when applied directly to the T B data (e.g., McNally et al., 2006; Gong and Wu, 2013). On the other hand, the T cir PDFs have a smaller standard deviation because the CRTM-derived T ccr from MERRA data has removed a lot of the clear-sky variability (Fig. 1a). The long PDF tail at negative T cir values is the distribution of cloudy radiances. Ideally, a perfect T ccr model with perfect clear-sky input would produce a singular peak in the T cir PDF at 0 K, and all negative values smaller than the radiance noise would be classified as clouds. The uncertainty of T cir measurements, close to a Gaussian distribution, is reflected in the PDF spread near zero, especially in the positive half of the PDF. The ability to separate cloudy and clear radiances is characterized by this standard deviation σ, which can be computed from this portion of the PDF.
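A minimal sketch of how such a clear-sky σ could be estimated from the positive half of the T cir distribution, and then used for a simple threshold-based cloud flag, is given below (it assumes the clear-sky part of the PDF is roughly Gaussian and centred near zero; all numbers are synthetic):

```python
import numpy as np

def clear_sky_sigma(t_cir):
    """Estimate the clear-sky T_cir noise from the positive half of the PDF.

    The positive T_cir values are largely free of cloud scattering, so their
    RMS about zero approximates the clear-sky standard deviation sigma.
    """
    positive = t_cir[t_cir > 0.0]
    return np.sqrt(np.mean(positive ** 2))

def flag_cloudy(t_cir, n_sigma=3.0):
    """Flag footprints whose T_cir depression exceeds n_sigma times the noise."""
    return t_cir < -n_sigma * clear_sky_sigma(t_cir)

rng = np.random.default_rng(0)
t_cir = np.concatenate([rng.normal(0.0, 5.0, 9000),                  # clear sky
                        -np.abs(rng.normal(40.0, 20.0, 1000))])      # cloudy tail
print(flag_cloudy(t_cir).sum(), "of", t_cir.size, "footprints flagged cloudy")
```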
However, the CRTM does not always improve cloud detection.For example, the width of the CH#2 T cir PDF is not much narrower than that of T B , indicating limited skills of the CRTM in capturing the clear-sky variability.Large error in the calculated T ccr is found over mountains and arid areas, where it remains challenging for the CRTM to model surface contributions at CH#2.When excluding all land cases, the CH#2 T cir can produce a PDF with a narrower width around zero (not shown).On average, the T cir error is ∼ 5 K (one standard deviation), although it may vary from 7.5 to 10 K.In the cloud ice retrieval later on, the generic value of 5 K is used for all channels.In addition to T cir standard deviation, we also calculate T cir bias for each MHS channel since the clear-sky PDF should peak at zero.We find that the CRTM has a cold bias (∼ 2 K) at 157 GHz (see Appendix B for details), whereas the bias is negligible for other channels (Fig. 6).
Moreover, Fig. 1a also reveals the dynamic range and penetration depth of the four MHS channels in measuring cloud ice. CH#2 penetrates deepest into clouds. Benefitting from its low frequency (157 GHz), at which cloud scattering and water vapor absorption are lowest among the MHS channels, it produces the longest cloud PDF tail (black line in Fig. 1a). At the other end, CH#3 has the most absorption from water vapor, showing the smallest T cir dynamic range. It has a slightly broader distribution in the positive half of the T cir PDF, compared to those of CH#4 and CH#5, indicating that either the upper-tropospheric water vapor from MERRA or the CRTM calculation at CH#3 contains greater uncertainty.
Two popular operational RTMs are also used to explore how the observed T cir -IWP relationship is simulated by models.These two RTMs are CRTM, and a multi-stream "cloudy-sky radiance model" (CRM) that is currently used by the Microwave Limb Sounder (MLS) team to retrieve ice cloud properties (Wu and Jiang, 2004).They both show a certain lack of ability to capture the observed T cir -IWP relationship, which results in large biases in data assimilation and increases the uncertainty of IWP retrieval (Appendix C).
Collocated and coincident MHS-CloudSat measurements
Collocated and coincident measurements (collocations hereafter for brevity) are instances where two or more sensors observe the same location at the same time. These measurements provide useful pairs for instrument calibration (e.g., John et al., 2012), cross-validation of a particular variable (e.g., Wang et al., 2010), or development of new retrieval methods (e.g., Lamquin et al., 2008). In this paper, we will be focusing on the last application.
The requirements for collocated-coincident measurements may vary, depending on the variability of the specific variable.Since most of the atmospheric state variables (e.g., wind, temperature, humidity) change relatively slowly and continuously with space and time compared to fast processes like clouds, their requirements for collocation and coincidence should be a bit more relaxed and the allowed windows for space and time should be consistent.In other words, the uncertainty of collocation due to spatial variations should be comparable to one of coincidence due to temporal variations.Another factor in defining the requirements for collocation and coincidence is to assure enough samples for statistics.For the A-train sensors, sample size is usually not a problem.On the other hand, such a near-perfect collocation is rare between radiosonde and Global Positioning System (GPS) measurements (Sun et al., 2010).Neither occurs frequently for two satellites that run in different orbits.Adjustment of the collocating criteria becomes necessary and important in these situations.
In this study we use NOAA-18 measurements to find collocated and coincident cases with CloudSat, because NOAA-18 has the closest LST to the CloudSat orbit among all operational satellites carrying the MHS/AMSU-B instruments (Holl et al., 2010). The requirements for collocation and coincidence are 10 km in space and 15 min in time, which yield a total of 6 × 10^5 MHS samples in the tropics (25° S to 25° N) from June 2006 to March 2011. Holl et al. (2010) obtained one order of magnitude more collocated NOAA-18/CloudSat measurements with a requirement of the same time difference but 15 km in distance; the increase in number is roughly proportional to the area difference between the two distance criteria. The sensitivity of the retrieval algorithm to the choice of collocation criteria will be discussed in the next section.
Because of the close orbits of NOAA-18 and CloudSat, the number of collocated measurements peaks at the MHS nadir angle and drops off similarly at the left and right view angles (Fig. 2). There is no significant scan-angle-dependent sampling bias, which would otherwise be a factor to consider in the derived T cir-IWP relationship. The number of collocations decreases sharply at oblique views with scan angle θ > 35°, which may affect the statistical significance of the derived T cir-IWP relationship.
In the case of highly inhomogeneous clouds, larger uncertainty is expected for the IWP within the MHS FOV, as CloudSat footprints cover at most 6.7 % of the area of an MHS footprint. As a matter of fact, multiple CloudSat cloud profiles often correspond to one MHS footprint because the CloudSat footprint (∼ 1.5 km) is much smaller than the spatial range of the defined collocation. Thus, we average all the CloudSat IWP values within the collocated MHS FOV to represent the mean IWP for that MHS footprint. The same procedure is applied to calculate the mean cloud top height (h t) at that MHS footprint, where each individual CloudSat h t is obtained by searching for the highest level where IWC > 10 mg m −3.
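The sketch below illustrates this footprint-averaging step (hypothetical function and variable names; a flat-earth distance approximation is assumed, which is adequate for the ∼ 10 km collocation radius used here):

```python
import numpy as np

def cloud_top_height(iwc, heights_km, threshold=0.010):
    """Highest level (km) where IWC exceeds 10 mg m^-3 (0.010 g m^-3), else NaN."""
    above = heights_km[iwc > threshold]
    return above.max() if above.size else np.nan

def average_in_fov(cs_iwp, cs_ht, cs_lat, cs_lon, fov_lat, fov_lon, radius_km=10.0):
    """Mean CloudSat IWP and cloud top height of profiles inside one MHS FOV."""
    dlat = cs_lat - fov_lat
    dlon = (cs_lon - fov_lon) * np.cos(np.deg2rad(fov_lat))
    dist_km = 111.0 * np.hypot(dlat, dlon)       # approximate distance on a flat earth
    inside = dist_km <= radius_km
    if not inside.any():
        return np.nan, np.nan
    return cs_iwp[inside].mean(), np.nanmean(cs_ht[inside])
```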
Empirical T cir -IWP relationships
For nadir-viewing sensors like MHS/AMSU-B, negative T cir is caused primarily by ice cloud scattering instead of by emission. Mie theory shows that T cir is proportional to cloud IWP at microwave wavelengths and to the fourth power of frequency. As the ice cloud becomes radiatively thick, cloud self-extinction prevents the channel from sensing the entire IWP; instead, it is sensitive only to pIWP. Hence, an empirical T cir-IWP relationship is derived in the following form:

T_cir = T_cir0 [1 − exp(−IWP / H)],     (2)

where T_cir0 is the coldest T cir (i.e., the saturation value) and H is the parameter that determines where T cir becomes saturated.
Both T cir0 and H depend on frequency and can vary with cloud top height (h t), instrument view angle, and the temperature lapse rate (γ) in the upper troposphere. In this study, since we focus on the tropical region, where the lapse rate variation is small, these parameters are assumed to be only a function of channel frequency and cloud top height. For small IWP values, T cir ≈ T cir0 (IWP/H), which is the linear relationship described by Wu and Jiang (2004) for Aura MLS. As also suggested by Wu and Jiang (2004), H could be a function of the cloud profile shape and the ice-to-water mixing ratio inside clouds, but these dependencies have secondary effects on the T cir-IWP relationship.
To derive the empirical T cir-IWP relationship, we first sort all collocated measurements, CloudSat IWP (averaged onto MHS footprints) and MHS T cir at near-nadir views (scan angle ∈ [−5°, 5°]), to generate a joint PDF separately for each MHS channel. As shown in Fig. 3, the T cir-IWP relationships are scattered, with the PDF peaks in good agreement with Eq. (2). We then fit the 2-D PDF to obtain the T cir0 and H parameters in Eq. (2), which gives the solid curve in Fig. 3. The fitting is carried out as follows: (1) we determine T cir0 from the coldest T cir, searching all 2010 MHS nadir data and taking the coldest T cir as T cir0 for each channel; (2) we then compute H in Eq. (2) with the ordinary least squares method by fitting the T cir and IWP values at the peak of the 2-D PDF (black dots in Fig. 3), using the T cir0 derived in step (1).
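A compact sketch of this two-step fit is shown below; the saturation form of Eq. (2) is as reconstructed above, and all numbers are illustrative stand-ins for the per-channel, per-height-group PDF peaks:

```python
import numpy as np
from scipy.optimize import curve_fit

def tcir_model(iwp, h, tcir0):
    """Saturating T_cir-IWP relationship, Eq. (2): T_cir0 * (1 - exp(-IWP/H))."""
    return tcir0 * (1.0 - np.exp(-iwp / h))

def fit_h(iwp_peaks, tcir_peaks, tcir0):
    """Step 2: ordinary least squares fit of H with T_cir0 held fixed."""
    popt, _ = curve_fit(lambda x, h: tcir_model(x, h, tcir0),
                        iwp_peaks, tcir_peaks, p0=[2.0])
    return popt[0]

# Step 1 (illustrative): take the coldest observed T_cir as T_cir0
tcir0 = -75.0                                            # K, hypothetical
# Hypothetical PDF-peak pairs (IWP in kg m^-2, T_cir in K)
iwp_peaks  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
tcir_peaks = np.array([-12.0, -22.0, -38.0, -57.0, -70.0])
print("H =", round(fit_h(iwp_peaks, tcir_peaks, tcir0), 2), "kg m-2")
```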
The fitted curves represent bulk characteristics of the joint PDF.Compared to a linear fit, the residual variances decreased by at least 50 %.However, the joint PDF of CH#3, showing a steeper relationship for T cir and IWP at colder T cir values, is not represented well by Eq. (2).Moreover, T cir PDF becomes flat at small IWP values (IWP < 0.5 kg m −2 ), indicating the lower limit of T cir sensitivity to IWP.The spread of 2-D PDF reflects both natural variability and collocation error of the T cir -IWP relationship.One of the cloud variabilities that affect the T cir -IWP relationship is the cloud top height (h t ).
To examine the dependency of H on h t, we further sort the collocated measurements into three height groups using the mean h t (defined as the highest altitude at which IWC reaches 10 mg m −3) computed from the CloudSat cloud profiles: 9.5 < h t < 10.5 km, 11.5 < h t < 12.5 km and 13.5 < h t < 14.5 km, with each group separated by 1 km to avoid overlapping of the regression lines. These three height groups account for about 48 % of all near-nadir collocated measurements. We then apply the same fitting procedure to obtain T cir0 and H for each height group as in Eq. (2) (solid thick lines in Fig. 4). For the three height groups, the cloud bottom height (h b) is calculated to be within 7.5 ± 1.6 km, 8.5 ± 2.5 km and 7.4 ± 2.5 km, respectively, so that the collection represents tall and thick deep convective clouds in the tropics. The measurements with h t below 8.5 km and above 15.5 km are too few to obtain a statistically robust T cir-IWP relationship.
Figure 4 shows that T cir is more sensitive to IWP for clouds with higher h t, except for CH#3, where the situation is reversed. This variation in T cir sensitivity is expected according to the sensitivity expression from a conceptual cloud scattering model (Eq. 6.3 in Wu and Jiang, 2004), in which τ ceff is the cloud effective optical depth, which is positively correlated with IWP, T scat is the cloud scattering radiance from a convolution of the upwelling and downwelling radiation, and T AB is the background clear-sky radiance beneath the cloud. For a given channel, T AB remains the same no matter how thick or thin the ice cloud is. For thick, high-h t clouds, T scat is colder due to more contributions from higher altitudes, resulting in a larger T cir sensitivity to IWP. The CRM used by Wu and Jiang (2004) for Aura MLS predicts a similar but weaker h t dependence.

The h t-dependent H parameter allows a simultaneous retrieval of h t and IWP. Including or constraining h t in the retrieval improves the IWP retrieval. Other approaches, e.g., using IR channels with the CO2 slicing method (Kahn et al., 2008), may be used in the future to constrain h t in the IWP retrieval. As seen in Fig. 4, the error bar for each of the three cloud groups is smaller than that without the height separation. Relaxing the collocation requirements would increase the number of measurements for statistics, but we find that it does not reduce the error bar of the derived T cir-IWP relationship.
To complete the empirical model for the T cir-IWP relationship, we need to extend the parameters listed in Table 1 from the near-nadir case to all MHS scan angles. For off-nadir views, to account for the longer off-nadir LOS (ζ is the local zenith angle), the off-nadir T cir needs to be multiplied by cos ζ to achieve an equivalent nadir T cir, assuming plane-parallel cloud layers. This is not a bad assumption in the case where clouds are not opaque. For opaque clouds, inhomogeneity plays a more important role in relating off-nadir and nadir views. In other words, the scan-angle correction for T cir is a function of T cir as well. Thus, we develop an empirical solution for this correction, which is given in Appendix A.
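For the non-opaque case, the simple geometric part of this correction amounts to the one-line sketch below (the T cir-dependent refinement for opaque clouds described in Appendix A is not shown):

```python
import numpy as np

def nadir_equivalent_tcir(t_cir_offnadir, zenith_deg):
    """Map an off-nadir T_cir to its nadir equivalent for thin, plane-parallel cloud."""
    return t_cir_offnadir * np.cos(np.deg2rad(zenith_deg))

print(nadir_equivalent_tcir(-40.0, 48.95))   # outermost MHS scan angle, K
```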
Joint retrieval of IWP and h t
The IWP and h t are retrieved using the sequential estimation approach described in Rodgers (2000) and Livesey et al. (2006). Equation (5) in Livesey et al. (2006) is written here as Eq. (5) below:

x^(q+1) = x^(q) + (K^T S_y^−1 K + S_a^−1)^−1 [K^T S_y^−1 (y − y^(q)) − S_a^−1 (x^(q) − a)],     (5)

where (q) annotates the qth step of the iteration. In our case, x = [IWP, h t] is the retrieved result, y = [T cir 2, T cir 4, T cir 5] is the observation, and y^(q), the modeled T cir, can be calculated using Eq. (2) and x^(q). K is the Jacobian matrix, which is defined as

K = ∂y^(q)/∂x^(q) = [∂T cir/∂IWP, ∂T cir/∂h t].     (6)

Plotted in Fig. 5 are the analytical solutions of K using the coefficients listed in Table 1 and Eq. (6). ∂T cir/∂IWP (left column of Fig. 5) monotonically increases with IWP for all three channels without any singularity point or multiple solutions. However, ∂T cir/∂h t (right column of Fig. 5) has a singularity point at h t = 18 km for CH#2, where multiple solutions exist. For CH#5, multiple solutions can also occur for h t. If we define the bottom of the ∂T cir/∂h t curve as the critical h t, then the smaller the IWP is, the higher the critical h t is. For instance, the critical h t at IWP = 3.0 kg m −2 is 18 km, meaning that if the cloud has h t > 18 km and IWP = 3.0 kg m −2, the retrieved h t has a possibility of being underestimated. The K matrix responses at CH#2 and CH#5 suggest that the h t retrieval could significantly underestimate the truth when the cloud top is above 18 km, especially for thick, dense clouds.

Figure 5. Analytical solutions of the two components of the Jacobian matrix K: ∂T cir/∂IWP with fixed h t (left) and ∂T cir/∂h t with fixed IWP (right). For the left column, the fixed h t value increases from 6 (thin, blue) to 20 km (thick, red) with an interval of 2 km. For the right column, the fixed IWP value increases from 0.5 (thin, blue) to 18 kg m −2 (thick, red) with an interval of 2.5 kg m −2.
S y , S a and S x are the matrices describing the error covariances associated with the measurements, the a priori, and the final retrieval results, respectively.S y = [5, 5, 5] 2 K 2 as the measurement error is estimated to be 5 K (Sect.2.1.3).S a associates with the CloudSat IWC retrieval error, which is estimated to be less than ∼ 50 % (Austin et al., 2009).In practice however, S a defines the step allowed to jump in each iteration, which needs to be small in very nonlinear cases where multiple solutions exist and large steps could result in an unstable retrieval.Since the retrieval function is monotonic for all channels, a large step S a = [6 kg m −2 , 6 km] 2 is chosen, as in the so-called Newtonian iteration, to accelerate the retrieval convergence.Once S y and S a are fixed, S x at each iteration step can then be calculated from Eq. ( 5), which is shown as Eq. ( 7): The retrieval is not carried out if T cir at all three channels is greater than 5 K, a strong indication of a clear sky.In that case, we directly assign a clear-sky flag to the scene.CH#2 radiance is excluded for retrievals over arid areas because of its contamination by surface signals.This is realized by checking land pixels with T cir < −5 K for all three channels (i.e., ice cloud likely).As long as this criterion is not satisfied, only CH#4 and CH#5 are used for the retrieval over land, whereas CH#2 is always used over oceans.After retrieval, the IWP value that has the standard deviation ( √ S x [1]) greater than or equal to itself is flagged as "bad quality"; so is h t .The rest is flagged as "good quality".
Assessment of IWP and h t retrievals
Comparisons of IWP retrievals have been challenging and sometimes even confusing because not all sensors measure the same portion of pIWP.Different cloud bottom and top heights can affect the cloud ice sensitivity and retrieval results.For MHS, the channel penetration depth varies with water vapor loading above cloud and with liquid water amount inside clouds if it is a mixed-phase case.In addition, cloud inhomogeneity along LOS introduces more uncertainties to this comparison task.Active microwave sensors such as CloudSat do not have the penetration depth issue for most clouds.In this study we treat its IWP as the truth when comparing it with the measurements from passive sensors (e.g., Wu et al., 2009).Since the retrieval algorithm developed here is constrained by CloudSat IWP, the IWP retrieved from MHS is expected to be statistically close to CloudSat cloud ice.In other words, MHS penetration depth is "extrapolated" to reveal the total column IWP using this algorithm.In this section we compare the PDFs of monthly IWP as well as the mean IWP maps for MHS and CloudSat data.
Comparison of IWP PDFs
Normalized PDFs have been used to compare the cloud ice products and sensitivities from multiple sensors (Su et al., 2009; Wu et al., 2009). The fundamental assumption of this approach is that cloud ice should have the same probability distribution if both sensors are measuring the same ensemble of clouds (e.g., in similar latitude regions and local times). Unlike an apples-to-apples comparison, vast amounts of data can be digested in one PDF plot that reveals ample information. The basic philosophy of this approach is that the variable of interest should have the same probability of reaching a certain value as nature shows within the product's visibility range. Therefore, if the probability is smaller (greater) than that from the truth, the variable (e.g., cloud occurrence frequency) is under- (over-)estimated. The PDF comparison also overcomes the instrument geometry differences, as explained in Wu et al. (2009).
As expected for the CloudSat-constrained retrieval, the MHS IWP PDF agrees well with CloudSat, as shown by the grey and black lines in Fig. 6. The decreasing probability with IWP reflects the natural variability of cloud ice. The CloudSat IWPs here are 15-FOV averaged values in order to mimic the MHS footprint diameter, which makes the PDF slightly steeper than the original (non-averaged) PDF, i.e., a higher (lower) probability at smaller (larger) IWP. The averaging effect (< 10 % in PDF values) is negligible compared to the differences among the various data sets/retrievals. When all good and bad retrievals from the 90 MHS views are included, the PDF (solid black line) in Fig. 6 rises more sharply at small IWPs (∼ 500 g m −2) due to the arbitrary retrieval suppression for negative IWP values and false detection of clear-sky scenes. The dropping PDF at IWP < 500 g m −2 is mostly noise. When the quality flag is applied to exclude bad retrievals, the PDF (dots) agrees better with CloudSat at IWP > 300 g m −2. At large values (> 8 × 10^3 g m −2), our algorithm tends to slightly overestimate IWP when compared to CloudSat.
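The sketch below shows one way such normalized PDFs could be computed on logarithmic IWP bins so that two data sets with very different sampling can be compared; the synthetic lognormal samples only stand in for real CloudSat and MHS IWP populations:

```python
import numpy as np

def normalized_pdf(iwp, bins):
    """Normalized PDF of IWP: probability per unit IWP on the given bin edges."""
    hist, edges = np.histogram(iwp, bins=bins)
    return hist / (hist.sum() * np.diff(edges)), edges

bins = np.logspace(np.log10(50.0), np.log10(2.0e4), 30)        # 50 g m^-2 to 20 kg m^-2
rng = np.random.default_rng(1)
iwp_cloudsat = rng.lognormal(mean=6.0, sigma=1.2, size=50000)  # synthetic, g m^-2
iwp_mhs      = rng.lognormal(mean=6.1, sigma=1.1, size=50000)  # synthetic, g m^-2
pdf_cs, _  = normalized_pdf(iwp_cloudsat, bins)
pdf_mhs, _ = normalized_pdf(iwp_mhs, bins)
ratio = pdf_mhs / np.maximum(pdf_cs, 1e-12)   # >1 where the "MHS" PDF over-represents a bin
print(np.round(ratio[10:15], 2))
```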
The PDF of the NOAA operational MSPPS data (crosses in Fig. 6) is lower than that of CloudSat at all IWP values. At large IWP values (IWP > 10^3 g m −2), it differs by a factor of 10 or more, indicating that the operational product significantly underestimates cloud ice compared to CloudSat. This low bias was also reported in other studies (e.g., Waliser et al., 2009; Eliasson et al., 2011). The quality of our cloud ice retrieval is demonstrated clearly in a scene over Hurricane Earl on 31 August 2010 (Fig. 7a-c). T cir in all three MHS channels captured the structure of Hurricane Earl very well, showing the eye, eye wall and spiral rain bands. CH#2 radiances, penetrating the deepest, reveal more ice cloud structures than the other channels. The retrieved IWP from our algorithm (Fig. 7d) retains most of the fine structures in CH#2 and also shows a hint of two additional outer arms. Although the values of these arms are below the noise level, they are probably real because they are also present in the geostationary satellite IR image (not shown). The IWPs from the CloudSat overpasses (colored crosses) have slightly larger values than MHS, whereas the MSPPS operational IWPs (Fig. 7f) are significantly smaller than both CloudSat and our retrievals. This hurricane case also highlights the value of MHS IWP in studying the 2-D atmospheric dynamics and cloud structures that are not captured by the CloudSat curtain sampling. Using the CloudSat-constrained IWP measurements, we can obtain good spatial and temporal coverage from the MHS/AMSU-B sensors onboard all operational satellites.
The retrieved h t (Fig. 7e) also agrees reasonably well with CloudSat, especially at the hurricane periphery and the eye wall, but is lower by ∼ 4 km over the hurricane deck (13 km versus > 18 km). This is probably due to the fact that the cloud top at the deck is predominantly higher than that at the hurricane periphery, i.e., higher than 18 km, and hence exceeds the upper limit of the reliable h t retrieval range of our algorithm. Pixel-by-pixel comparisons have been done for some other cases with CloudSat cloud tops lower than 18 km, and the h t retrieval seems quite promising (not shown). Nevertheless, the h t retrieval here mainly serves to improve the IWP retrieval, rather than being a product for scientific study in its own right.
As the first CloudSat-calibrated column-wise IWP measurement with excellent spatial coverage, the MHS IWP has numerous potential uses as model input, for validation of other instrument measurements, and for model-observation comparisons in the future.
Geographic distribution of IWP
Monthly mean IWP maps show good correlation between MHS and CloudSat cloud ice for August 2010 (Fig. 8), where the correlation is 0.81 in the tropics. Sampling error is evident in these maps. With a relatively coarse grid box (5° × 5°), the CloudSat monthly maps (Fig. 8a, c) are spotty due to the lack of swath coverage. This sampling is also aligned with the westward-traveling fast cloud systems, leading to cloud ice spikes (e.g., eastern Pacific) and scatter (e.g., Amazon rainforest) on the CloudSat maps (e.g., Amiridis et al., 2013). The sampling bias is largely mitigated by the 90-FOV MHS swath, whose maps look much smoother (Fig. 8b, d) because MHS overpasses one grid box 6 times as often as CloudSat on average. If the footprint size is taken into consideration, MHS could pass every corner of each 5° × 5° grid box in the tropics as many as 42 times within a month, while CloudSat covers only 4 % of the area in the tropics. The major features of the two data sets agree well, especially in deep convective regions where IWPs are large. The day-night differences in ice cloud thickness seen in CloudSat are also evident in the MHS maps, e.g., over Central America and central Africa.
Interestingly, in the scatter plot of MHS and CloudSat IWPs on a logarithmic scale, the correlation is not along the 1:1 line, showing a high bias in MHS at smaller IWP values. The overall regression yields IWP_CloudSat = (0.83 ± 0.017) IWP_MHS − 14.7 [g m −2], shown as the blue dots in Fig. 9b. The −14.7 g m −2 offset partly comes from elevated topography, e.g., the Andes, and from deserts, e.g., central Australia. The bias is slightly worse during the night (MHS descending orbit) than during the day (MHS ascending orbit). If CH#2 is included for the MHS IWP retrievals over land, the high bias would increase over Australia, which may suggest a warm bias in the MERRA surface temperature or a problem with the surface emissivity in that region during nighttime (i.e., a cold bias of T cir for CH#2). This suggests that our retrieval algorithm has some limitations over complicated surface conditions, which will be discussed in the next section. Part of the −14.7 g m −2 offset is caused by the fact that MHS tends to slightly overestimate IWP with respect to CloudSat, especially for thick and dense clouds. Besides, CloudSat probably misses some convection due to its sampling bias, for instance, over the Amazon rainforest and the maritime continent. Visual comparison between the MODIS ice cloud optical depth (Fig. 7 of Meyer et al., 2007) and MHS IWP shows better agreement in these regions. It is worth mentioning that the collocated MHS-CloudSat retrieved IWPs showed the same feature. However, after subtracting the square root of the error (S x), MHS IWP does not have such a positive bias (not shown). Therefore, despite the fact that a single MHS retrieval is still noisy below 0.5 kg m −2, the error estimation is very reasonable and helps in filtering out the bad retrievals.
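A sketch of this gridded regression step is given below; the arrays are hypothetical monthly gridded means, and the fit simply recovers the slope and offset of a linear relation such as the one quoted above:

```python
import numpy as np

def regress_iwp(iwp_mhs, iwp_cloudsat):
    """Least-squares fit IWP_CloudSat = a * IWP_MHS + b over gridded monthly means."""
    a, b = np.polyfit(iwp_mhs, iwp_cloudsat, 1)
    return a, b

rng = np.random.default_rng(2)
iwp_mhs = rng.uniform(50.0, 2000.0, 500)                       # g m^-2, hypothetical grid means
iwp_cs  = 0.83 * iwp_mhs - 14.7 + rng.normal(0.0, 40.0, 500)   # synthetic "CloudSat" means
print(np.round(regress_iwp(iwp_mhs, iwp_cs), 2))               # ~[0.83, -14.7]
```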
Limitations of the algorithm
The empirical forward model (the T cir-IWP relationships) and the retrieval algorithm presented in this paper are designed for tropical regions and have difficulties in retrieving IWP over elevated topography and deserts. In cases of mixed-phase clouds or excessive water vapor abundance above cloud tops, the retrieval error for IWP may increase. The major causes of the biases over land are likely the CRTM surface emissivity error in modeling the surface radiation, or the surface temperature error in the mountain and desert regions in the MERRA data. Since the CH#2 radiance contains surface signals (CH#5 sees arid and snow surfaces as well), uncertainties in surface temperature and emissivity will induce T cir biases. As a matter of fact, we do see a systematic warm bias of 2 K in CH#2 T cir (Appendix B), which could be due to instrument calibration error or T ccr model error. Moreover, the PDF of T ccr for CH#2 (Fig. 1b) extends to temperatures as low as 220 K, which is strong evidence of contamination from clouds or cold surfaces (i.e., ice pack on mountains can also cause this low T B). With the development of a neural network approach, the initial guess of T ccr could be used to improve the cloud ice retrieval over complicated surface conditions (Chen and Staelin, 2003). The parameters in Table 1 assume that the atmospheric temperature lapse rate γ is constant in the tropics. As predicted by Wu and Jiang (2004) using the CRM, the T cir-IWP relationship is also a function of γ (Fig. 6.10 therein). Evaluating the PDFs of retrieved IWP outside the tropics, we find that the PDF of extratropical IWP starts to oscillate at its large-value tail (Fig. 9a) compared with that in the tropics (Fig. 6). The quality-controlled PDF in this case is still comparable with the CloudSat PDF, though, assuming a 100 % tolerance level for the PDF difference in each bin. Hence, our algorithm is expected to perform well within 30° N-30° S, but degrades in the extratropics. The mean vertical temperature profiles are also similar to those in the tropics up to 30° in latitude (Fig. 6.9 in Wu and Jiang, 2004). At middle to high latitudes beyond 30°, the quality-controlled PDFs are too low or even alter their shape, and the retrieved MHS IWP barely correlates with CloudSat IWP (not shown). In future algorithm development, γ should be treated as an independent variable, such that the algorithm can be applied to IWP retrievals at higher latitudes.
Liquid clouds occur frequently below 5 km, where the temperature is usually greater than 0 °C (Riedi et al., 2001); they may have little impact on CH#4 but can significantly affect CH#2 and CH#5 T cir. For deep convective clouds, liquid droplets can be lifted to much higher altitudes. The mixing of liquid droplets into ice cloud enhances the cloud emission contribution at microwave frequencies and hence decreases the T cir sensitivity to IWP. Wu and Jiang (2004) showed that this impact could be as large as 30-50 % in a strongly mixed-phase case, which alters the relationship in Eq. (2) with different parameters. Therefore, mixed-phase clouds can contribute significantly to the spread of the 2-D PDF shown in Figs. 3 and 4.
Water vapor above and inside cloud plays a screening role in reducing the sensitivity to IWP, in a way similar to liquid droplets.Since CH#4 and CH#5 are water vapor channels, they are sensitive to the water vapor abundance above and inside ice clouds.As a result, T ccr calculation could be biased if MERRA water vapor is too dry or too wet above clouds.The water vapor impact was only evaluated using CRM with different water vapor profiles, assuming variability within the uncertainty of observed upper-troposphere water vapor.The water vapor impact is found to be small and negligible in these CRM simulations (less than 5 % with doubling water vapor amount above clouds).
Error analysis
The retrieval error (S x ) provides a direct estimate of the retrieval uncertainty.This uncertainty is completely independent of CloudSat IWP retrieval uncertainty.There are further sources of error, for example, the imperfect "coincident collocation", the uncertainty induced by limited collocation samples, cloud misclassifications, etc.The total error is a combination of the three.This section will be focused on delineating the error sources and quantifying them one by one.
The retrieval error S x depends on the observational uncertainties from MHS (S y) and on the forward model (the K matrix), as represented by Eq. (7). For small IWP values (thin ice cloud), S x is dominated by S y (i.e., MHS instrument noise). Since S y is fixed at 5 K for all three channels, this algorithm loses its sensitivity for IWP values smaller than 0.5 kg m −2, which is evident in Figs. 9a and 10b. For large IWP values (thick or precipitating ice cloud), S x is controlled by the uncertainty of the forward model. In the case when T cir saturates (i.e., becomes smaller than T cir0 in Table 1), the forward model uncertainty is infinite, as the Jacobian curve of the K matrix with respect to IWP flattens out from T cir0 onward. However, the K matrix is still very sensitive to IWP as long as T cir does not saturate, as shown in Fig. 5. Therefore, the error induced in S x by the forward model uncertainty is relatively small for large IWP values. This is confirmed by Fig. 10a, where the percentage retrieval error (√S_IWP/IWP) decreases quickly from above 100 % at small IWP values to as low as 20 % at large IWP values (IWP > 1 kg m −2). The error is also reflected in the retrieval results, shown for one month's worth of collocated CloudSat-MHS IWP retrievals in Fig. 10b. It is apparent that the collocated retrievals agree well above 1 kg m −2, while MHS tends to overestimate thin-cloud IWP.
Ice cloud misclassification is an unavoidable issue for any cloud retrieval technique. Cloud misclassification in this retrieval algorithm is partly induced by the beam-filling effect and by the mismatch of CloudSat and MHS footprints in space and time. It is hard to separate these two effects, as they are essentially the same. They are the major cause of the spread of the 2-D PDF and of the uncertainty bars in Figs. 3 and 4, respectively. As a CloudSat footprint is much smaller than an MHS footprint, cloud inhomogeneity within an MHS FOV cannot be captured fully by averaging the CloudSat footprints within the corresponding MHS FOV. Since we further relax the collocation and coincidence criteria, mismatches also occur. Both effects result in the cloud misclassification mentioned above. However, an imperfect clear-sky background (T ccr) can also cause a bias in T cir, which can likewise induce cloud misclassification. One thing to notice is that the error of the clear-sky radiance is internally included in S y, as S y is directly estimated from T B − T ccr.
As the truth is not given in our case, CloudSat is again used as the "truth", since the philosophy of this paper is to align MHS measurements with those of CloudSat. Two types of cloud misclassification exist. In the first, MHS treats a footprint as having an ice cloud present, but the collocated CloudSat measurements averaged onto the same MHS footprint do not report a positive IWP value; this is named "Type I" misclassification. In the second, the reverse occurs; this is called "Type II" misclassification. All collocated CloudSat-MHS observations between 30° S and 30° N are compared over the entire year of 2010 to generate the statistics. For Type I misclassification, there is an 18.1 % chance that MHS will detect an ice cloud but CloudSat will not, and among these cases only 1 % of the MHS retrievals report an IWP greater than 0.5 kg m −2 (the noise level). That means that, using our technique, MHS almost never misclassifies a clear-sky scene as cloudy sky and reports an IWP value beyond its noise level. Interestingly, the 18.1 % misclassification cases do not prefer mountainous or snowy regions like the Andes, although there is a slight enhancement in occurrence frequency over the Australian desert. The latter is expected, as discussed in Sect. 4.3. For Type II misclassification, there is a 45.8 % chance that CloudSat will detect an ice cloud but MHS will not. Among these cases, 90.5 % of the time CloudSat reports an IWP value less than 0.5 kg m −2. All in all, our technique shows a very strong confidence level for retrieved values greater than the detection threshold (2 % and 4 % misclassification rates for Type I and Type II, respectively), which means that our technique almost never misses a thick ice cloud, even when the surface signal is complicated. On the other hand, our technique misses, or gives large retrieval uncertainties for, thin ice clouds like cirrus, which are not very important hydrologically (Fig. 8) but are important to the radiation budget.
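The bookkeeping behind these two rates can be sketched as follows (illustrative thresholds and synthetic inputs; the real statistics are computed from the 2010 collocation data set):

```python
import numpy as np

def misclassification_rates(mhs_iwp, cs_iwp, noise=500.0):
    """Type I: MHS sees an ice cloud where averaged CloudSat reports none.
    Type II: CloudSat sees an ice cloud where MHS reports none.
    Also returns the fraction of each type exceeding the noise level (g m^-2)."""
    mhs_cloud, cs_cloud = mhs_iwp > 0.0, cs_iwp > 0.0
    type1 = mhs_cloud & ~cs_cloud
    type2 = ~mhs_cloud & cs_cloud
    frac1_sig = np.mean(mhs_iwp[type1] > noise) if type1.any() else 0.0
    frac2_sig = np.mean(cs_iwp[type2] > noise) if type2.any() else 0.0
    return type1.mean(), type2.mean(), frac1_sig, frac2_sig

rng = np.random.default_rng(3)
mhs = np.where(rng.random(10000) < 0.4, rng.exponential(400.0, 10000), 0.0)
cs  = np.where(rng.random(10000) < 0.4, rng.exponential(400.0, 10000), 0.0)
print(np.round(misclassification_rates(mhs, cs), 3))
```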
It is difficult to quantify the magnitude of the other errors, which are blended into the error bars of the fitted curves in Fig. 4. The IWP error bar can be as large as ±1 kg m−2 at IWP = 3 kg m−2 for CH#4. Given that the retrieval error should be smaller than any of the error bars generated from each individual channel, it is fairly reasonable to claim that the total retrieval error is smaller than 100 % when the retrieved IWP is above the detection threshold.
The overall retrieval error from this algorithm is quite small. In fact, it reflects the "precision" of the instrument and the forward model rather than the total error. One should always keep in mind that this error comes on top of the CloudSat IWP retrieval uncertainties. SPARE-ICE by Holl et al. (2014) showed a larger error in general, but their algorithm could quantitatively evaluate the contribution from each source, including CloudSat itself. The two algorithms are not directly comparable, since they are based on different metrics. Our algorithm meets the goal stated at the beginning, namely, to make cross-platform, cross-instrument consistent retrievals.
Conclusions
A fast empirical forward model built upon the T_cir-IWP relationships at the MHS 157, 183.3 ± 3, and 190.3 GHz channels is developed and used to retrieve tropical cloud IWP from MHS radiance measurements. The T_cir-IWP relationships at these channels depend on the cloud top height h_t in the tropics (Fig. 4), and retrieving IWP and h_t simultaneously improves the IWP accuracy. The IWP PDFs from the MHS and CloudSat retrievals agree quite well, as expected for this constrained empirical forward model, over a wide dynamic range of cloud ice (IWP = 0.5-10 kg m−2, Fig. 6). The retrieval errors are also of about the same magnitude (smaller than 100 %, Fig. 10). The empirical forward model is valid for clouds with h_t lower than 18 km and IWP greater than 0.5 kg m−2, but at present only in the tropics between 30° S and 30° N (Fig. 9). Beyond that latitude range, temperature lapse rate variations need to be taken into account to refine the T_cir-IWP relationship. In addition, the algorithm is not accurate for retrieving IWP over elevated and arid topography (Fig. 8).
Producing a CloudSat-consistent MHS IWP product has several direct benefits and important implications for studying clouds. Firstly, it helps to extend CloudSat cloud coverage with a wider swath, because frequent sampling from different operational satellites allows frequent updates of fast-evolving weather phenomena such as hurricanes and frontal systems. The new data can be used to improve weather prediction (e.g., the cloud diurnal cycle) and long-term regional climate monitoring (e.g., the IWP trend). Secondly, our improved IWP retrieval method yields generally larger IWP values than the NOAA operational product (Figs. 6 and 7). The approach we implemented with high-frequency microwave channels improves cloud detection in scenes with high IWP. Compared with the CloudSat monthly climatology as well as the single-orbit measurements, we found that our results are closer to the CloudSat integrated ice water path. Thirdly, we show that replacing the 89 GHz channel with the 183.3 GHz channels for cloud ice retrieval reduces false detection of ice clouds and improves sensitivity to IWP, as the higher-frequency channels are more sensitive to ice particle scattering. Lastly, the derived empirical T_cir-IWP relationships can be used to evaluate RTM simulations of cloudy-sky radiances, validate model assumptions, and improve model skill for data assimilation applications in the future (Fig. C1).
Although the empirical T_cir-IWP relationship developed here was derived from NOAA-18 MHS, it is applicable to the similar channels used by other AMSU-B/MHS instruments on the NOAA and Metop operational satellites, for obtaining a longer data record and more frequent coverage. It can also be applied to other instruments that have the same combination of channels, for example the Advanced Technology Microwave Sounder (ATMS) onboard the Suomi-NPP satellite, or the Special Sensor Microwave Imager/Sounder (SSMI/S) onboard the Air Force F-16, F-17 and F-18 satellites. Furthermore, the approach demonstrated in this study can be applied to IR/VIS sensors with measurements collocated with CloudSat, such as the Aqua Atmospheric Infrared Sounder (AIRS) and MODIS, to extend the sensitivity to lower IWP values and enhance the dynamic range of remote sensing of cloud ice from space.

Examination of other months of T_cir PDFs for this channel shows the same warm bias (denoted ΔT). At off-nadir views, ΔT becomes smaller, following the theoretical clear-sky limb-darkening scaling ΔT_side = ΔT_nadir cos ζ. This means that the estimated clear-sky T_ccr has a systematic error at 157 GHz, which could originate from inaccuracies in the MERRA surface emissivity or from errors inside CRTM. To account for this offset, it is removed from all CH#2 T_cir values before carrying out the retrieval:
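A form of this correction consistent with the limb-darkening scaling just described is, presumably,

$$ T_{\rm cir}^{\rm corrected}(\zeta) \;=\; T_{\rm cir}(\zeta) \;-\; \Delta T\,\cos\zeta , $$

where ΔT is the nadir warm bias, so that the full offset is removed at nadir and a correspondingly smaller offset at slant views. This reconstruction is an assumption, not the paper's original equation.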
Appendix C: Comparison with RTM simulations
As one of the most important motivations of this work, it is worthwhile to check whether existing operational RTMs can reproduce the observed T_cir-IWP relationship. Some research RTMs with more sophisticated radiative transfer treatments have produced results comparable with observations (e.g., Davis et al., 2007; Kulie et al., 2010). However, some of the major operational RTMs still have large biases in the high-frequency microwave channels, which results in poor usage of the cloudy/precipitating scenes observed by instruments such as MHS and SSMI. The goals of this exercise are hence to quantify operational model uncertainties and to determine how reliable these RTMs are for calculating cloudy-sky radiances as observed by satellite microwave channels/instruments. We would also like to identify possible causes of these model errors and develop a quick remedy for these operational models. Since this problem is by itself very complicated, it is only touched on briefly in the appendices here, and a paper exploring further details is underway. Two popular models, CRTM and CRM, serve as representatives of the different RTM variants. CRTM is widely used as the centerpiece of data assimilation in United States weather forecast systems, and CRM is used every day to generate MLS cloud products. In both models, three ice clouds are fed in one by one, with cloud bottoms at 7.5 km and cloud tops at 10, 12 and 14 km, respectively. Two cloud shapes (convection with an anvil cloud top; Gaussian shape) are tested. Since their results differ little, the first cloud shape is applied in all following studies. The US standard atmosphere in the tropics is used as the background atmosphere. For the cloud particle size distribution, CRM has several options. Only the McFarquhar-Heymsfield (MH) and Gamma distributions are tested, with different combinations of parameter values. The MH distribution was applied to deliver the MLS IWC products (Wu et al., 2009), and a Gamma size distribution was assumed for the CloudSat ice water product retrievals (Austin et al., 2009). In CRTM, only the cloud ice effective radius is tunable, with a fixed-width Gamma size distribution assumption. Both models and the observation are compared at nadir view only.
Comparing the T_cir responses of different channels to the same cloud is a straightforward yet very effective way of presenting many of the differences. As one can see from Fig. C1a, CH#2 should have a larger response to thin and medium-thick clouds, while the penetration depths of CH#2 and CH#5 are about the same when they encounter dense and thick clouds. The raw T_B from the two channels shows the same features (not shown). CRTM produces almost identical responses for CH#2 and CH#5, both of which are, however, too weak compared with the observation (triangles). CRM produces comparable dynamic ranges of T_cir, with an effective radius of 160 µm and a width parameter of 2 (refer to Evans et al. (1998) for the form of the Gamma distribution), but it always generates a weaker response for CH#2, contradicting the observation (crosses), while the T_cir-IWP relationship for CH#5 is simulated quite well. The main caveat of CRM is that it only considers scattering while ignoring ice emission. The observed CH#2 / CH#5 T_cir ratio suggests that ice cloud emission offsets as much as 30 % of the cloud scattering impact for thin and medium-thick clouds, while cloud scattering dominates for dense and thick clouds. Moreover, liquid droplets in mixed-phase clouds could contribute further to the emission and further reduce the CH#5 T_cir response. That may explain the difference between CRTM and the observation.
The models simulate the observed T_cir ratio between CH#4 and CH#5 better, as shown in Fig. C1b. Nevertheless, the models tend to over-predict the CH#4 response. Because CH#4 is closer to the 183.3 GHz water vapor absorption line, it is more sensitive to water vapor variations than CH#5. Therefore, the air above the cloud top might be drier than the ambient air, leading to a smaller magnitude of the CH#4 T_cir. This inference is supported by observational evidence for deep convective clouds (e.g., Chae et al., 2011). Again, CRTM overall produces weaker T_cir, probably due to the heavy extrapolation from the thin-cloud LUT to thicker clouds and the narrow fixed-width Gamma size distribution.
To summarize, the two widely used operational RTMs fall far behind the empirical model in capturing the observed T_cir-IWP relationships for tropical ice clouds. For relatively thick ice clouds that should produce T_B depressions of 30 K or more, the warm bias of CRTM easily exceeds 100 %, making it impossible to assimilate the thick/precipitating ice clouds observed by high-frequency microwave instruments. For CRM, the biases can be as large as 50 % for the 150 GHz channel. Several plausible causes are suggested for the observed model discrepancies, which include, but are not limited to, incorrect extrapolation of the ice cloud look-up table, neglect of emission from liquid droplets and ice particles, insensitivity to the humidity difference between the air above clouds and the ambient air, and too narrow a width of the Gamma size distribution. It should be noted that the Gamma distribution is not claimed to be the best cloud ice particle size distribution; rather, it reflects the inter-model consistency between the two RTMs and the original assumptions made for the CloudSat IWP retrievals. Given the popularity of these two RTMs, this part of the work should be of interest to a broad community.
Figure 2. Total number of collocated and coincident MHS footprints as a function of scan angle between June 2006 and March 2011.
Figure 4. PDF peaks (uncertainties given as error bars) and the corresponding regression lines based on Eqs. (2) and (4) for clouds with h_t between 13.5 and 14.5 km (black), 11.5 and 12.5 km (blue) and 9.5 and 10.5 km (red) for CH#2-CH#5 at near-nadir views.
Figure 6. PDFs of CloudSat IWP (grey thick line; smoothed over 15 CloudSat footprints and integrated between 5 and 19 km), all retrieved MHS IWP (black solid line; from all views), retrieved MHS IWP that is quality controlled (black dots), and MSPPS IWP (black crosses; from all views; from NOAA-18 only) for August 2010 in the tropics.
Figure 7. T_cir at CH#2 (a), CH#4 (b) and CH#5 (c), and retrieved NOAA-18 MHS IWP (d), h_t (e) and MSPPS IWP (f) for Hurricane Earl at 01:54 LST on 31 August 2010 (Cuba is the island to the left of the plot). The IWP (h_t) calculated from the collocated and coincident CloudSat overpasses (averaged onto MHS footprints) is marked by colored crosses that share the same color bars with MHS. Note that a blank in the IWP map means that the IWP retrieval is not performed or has failed because it is below the sensitivity level of this algorithm, while the CloudSat overpasses show a zero IWP value in most places away from the hurricane.
Figure 8. Monthly averaged IWP from CloudSat and MHS ascending (a, b) and descending (c, d) orbits during August 2010. MHS IWP is averaged over all views. Data are sampled to 5° × 5° grid boxes.
Figure 9. (a) The same as Fig. 6, except for different latitude bins (see sub-titles for the latitude range). (b) Scatter plot of MHS (abscissa) and CloudSat (ordinate) gridded monthly mean IWP for latitude bins between [25° S, 25° N] (blue filled dots) and [25°, 30°] N and S (light blue triangles). The map grid size is 5° × 5°, and the data are smoothed by a two-point window along latitude and longitude before making the scatter plot. The black thick (thin) line marks the 1:1 (1:5 and 5:1) ratio.
Figure 10. (a) PDF of the relative error (√S_IWP/IWP, %) of one month of retrieved IWP between 30° S and 30° N. Contours are in log scale. The peak of the PDF is roughly marked by hand to represent an exponential decrease (thick dashed line). (b) Collocated CloudSat-MHS IWP retrievals for July 2010 within the same latitude band. The thick solid line is the 1:1 line, and the other two thin solid lines mark the 1:2 and 2:1 ratios, respectively.
Figure A1. PDF of T_cir in the range of [−100, −99 K] (a) and [−50, −49 K] (b) as a function of scan angle, derived from a month of NOAA-18 MHS T_cir data (December 2010). Thick solid curves are calculated from the mean PDF values averaged over the 10 nadir-view FOVs divided by a factor of cos(ζ/2.1) (a) and cos ζ (b), respectively, and are used to fit the observed PDF curves. ζ is the zenith angle.
Figure C1. Scatter plots of T_cir relationships between CH#2 and CH#5 (a), and CH#4 and CH#5 (b), from observed T_cir at MHS nadir view (black dots), simulated T_cir from CRTM (colored triangles) and from CRM (colored stars). Blue/cyan/red colors represent cloud layers with tops at 10, 12 and 14 km and bottoms at 7.5 km in the simulations.
Table 1. Look-up table for the parameters of the joint IWP-h_t retrieval.
limb sounder that observes the cloud side in its LOS rather than the cloud top seen from a nadir sensor. Since we do not have an accurate model of the dependence of H on h_t, a quadratic function is assumed to interpolate and extrapolate H(h_t) to cases beyond the observed values of h_t, i.e., h_t = 10, 12 and 14 km. The coefficients in Eq. (4) are solved from the observed H values for the three h_t groups with mean values at 10, 12 and 14 km. Including T_cir0, all the parameters of the empirically derived T_cir-IWP relationships for CH#2, CH#4 and CH#5 are listed in Table 1. | 2019-04-05T03:38:18.012Z | 2013-09-04T00:00:00.000 | {
"year": 2014,
"sha1": "63ce970034e41470688f935af96217fa46b38676",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-meas-tech.net/7/1873/2014/amt-7-1873-2014.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7ddf69b7565c443aded472abd439481755c5a3dc",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
5846653 | pes2o/s2orc | v3-fos-license | Interim Prosthetic Rehabilitation of a Patient Following Partial Rhinectomy: A Clinical Report
Surgical defects often have adverse effects on patient perception of esthetics and self-esteem. Rehabilitation of such surgical defects poses a challenge to the clinician. Presented here is a clinical report of an interim prosthetic rehabilitation of a patient who underwent partial rhinectomy for basal cell carcinoma. Nasal resection included part of the nasal septum, the entire cartilage, and the ala. An interim nasal prosthesis was fabricated for this patient 3 weeks after surgery, to provide early rehabilitation. This prosthesis provided a sociopsychological benefit to the patient, and the prosthesis was well tolerated. The spectacle glasses retained the prosthetic nose.
Surgical defects of the midface resulting from malignant disease pose a challenge to patient rehabilitation. Basal cell carcinoma is a cancer that arises in the basal cell layer of the epidermis.
Sunlight is a contributing factor in 90% of the cases. The disease is usually triggered by damage to the skin caused by sunrays.
Basal cell carcinoma of the nasal vestibule is common in Caucasians but rare in blacks and subcontinent Indians. Whereas nonmelanoma skin cancers account for about one-third of malignancies in whites, only 1-2% of cancers among Indians occur in the skin. 1 Basal cell carcinoma has a particular predilection for the upper central part of the face as an occurrence site. 2 About 88-90% of basal cell carcinomas are seen mainly in sun-exposed areas such as the face and neck. 3 Basal cell carcinoma of the nasal area has a high cure rate of more than 95%, but a delay in seeking treatment can allow the cancer to enlarge, causing possible disability. 4 Conventional treatment includes surgery, chemotherapy, and radiation. 5,6 In addition to the conventional treatment methods, other options such as cryosurgery, Mohs micrographically controlled surgery, electrodesiccation, and photodynamic treatment are also available for head and neck cancers. 7 Prosthetic rehabilitation of nasal defects after trauma or surgery has been well documented. The sequence of fabrication of an extraoral prosthesis includes a surgical, a provisional, and a definitive prosthesis. 8 Since immediate surgical repair of a midface defect is usually not feasible, an interim provisional prosthesis may be considered. This prosthesis can be placed 2 to 3 weeks after the surgery to provide a cosmetically acceptable appearance. This clinical report describes the use of an interim nasal prosthesis in a patient who underwent partial rhinectomy for basal cell carcinoma. The use of such an interim or provisional nasal prosthesis permitted the patient to resume social interaction more comfortably and confidently during the healing period and until the definitive prosthesis was fabricated.
Case Report
A 58-year-old male patient diagnosed with basal cell carcinoma of the nasal vestibule, had undergone partial rhinectomy, and was referred to the department of prosthodontics, SDM college of Dental Sciences, Dharwad, India. Examination revealed that the entire cartilage of the nose, ala, and part of the nasal septum had been resected ( Figure 1).
Being a bank employee who regularly addressed customers, the patient expressed deep concern regarding his esthetic appearance. Due to his facial disfigurement, the patient was seeking a solution for his problem soon after the surgery. The patient was not aware of any of such prosthetic rehabilitations; however, when the procedure of provisional nasal prosthesis and its use was explained in detail, he chose to proceed with the treatment.
The boundary for the impression was outlined on the face. Rolled modeling wax was used to confine the impression material (Hindustan Dental Products, Hyderabad, India). Care was taken not to distort the nasal remnants/tissues by blocking out the deep undercuts in the defect with moist gauze. The facial moulage was prepared using an irreversible hydrocolloid material (Algitex, Dental Products of India, Mumbai) ( Figure 2). The irreversible hydrocolloid was reinforced with gauze and dental plaster (Everest Brand, Panade Industries Pvt. Ltd., Nippani, India). The impression was poured into a Type-III dental stone (Kala Stone, Kala Bhai Pvt. Ltd, Mumbai, India) (Figure 3).
A nose was sculpted in wax on the resultant cast, using the preoperative photographs. The whole morphology and the anatomic contours of the nose were developed according to normal contours, the patient's own descriptions of his preoperative appearance, as well as the references given by his close relatives. The trial wax prosthesis was completed and the nostril holes were cut open for air exchange. 9 The margins of the wax prosthesis were finished to create an illusion of continuity with the skin. An eyeglass frame was also worn with the wax try-in, anticipating its need for the retention of the final prosthesis. Eyeglasses are a good means of providing retention since they additionally serve to conceal the borders of the prosthesis. 10 The wax-up of the interim nasal prosthesis was invested and the wax boiled out. Heat-polymerizing clear acrylic resin was packed and processed. Poly(methyl methacrylate) resin has been recommended as a possible material for use in fabricating a provisional prosthesis. 11 Intrinsic coloring was incorporated in the clear acrylic resin to match the basic skin tone, using an acrylic-based paint (Fevicryl, Pidilite Industries Ltd, Mumbai, India).
The prosthesis was finished to thin the borders and to blend with the surface of the skin. Thereafter, it was adapted to the defect area. The eyeglass frame and the prosthesis were aligned properly on the bridge of the nose. A cyanoacrylate adhesive (Laborfix; Bracon Ltd, Sussex, England) was used to attach the eyeglass frame to the prosthesis (Figure 4). The patient had small, pigmented dots on the surface of the skin, which was matched by extrinsic coloring using acrylic colors (Fevicryl). This enhanced the esthetics, and the acrylic resin prosthesis resulted in a life-like appearance (Figures 5 and 6). After delivering the prosthesis, home-care instructions were given. The patient returned 2 weeks later for a followup evaluation. Four months later the patient felt that the fit of the prosthesis was not the same as compared to that of the initial placement. It was evident that the fit of the prosthesis had changed accordingly because of the tissue changes occurring during the healing phase. The tissue surface was relined with a soft tissue resilient liner. In addition, at this stage, the preparation of a definitive prosthesis was discussed with the patient. An implant-retained silicone prosthesis was suggested to the patient and the approximate estimation of the treatment time and costs was given. After 6 months of follow-up, the patient is waiting to start using a definitive prosthesis with the option of an implant-retained silicone nose, depending on financial arrangements being made.
Discussion
Among facial defects, nasal defects produce severe cosmetic impairment, since the nose is a prominent feature of the human face. 12 Retentive media constitute an important factor for the satisfactory rehabilitation of these defects. In the past, most nasal prostheses were retained with strings or straps anchored behind the head, 13 intraoral or intranasal extensions, [13][14][15] and gold strings or leaves. 16,17 Spectacle frames have been popular for anchoring nasal prostheses and are preferred even today when patients express a desire for an economical treatment solution. Today, prosthetic replacements are secured with adhesives that are readily available, easily applied, and provide satisfactory retention for limited periods of time. 18 However, for an interim nasal prosthesis, spectacle frame retention may be preferred, as was done in the presented clinical situation. An acrylic resin nasal prosthesis with spectacle retention was a viable treatment for this patient. Although acrylic resin has the shortcomings of being inflexible and having esthetic limitations, it is an ideal material for an interim nasal prosthesis since it is inexpensive. | 2014-10-01T00:00:00.000Z | 2010-10-01T00:00:00.000 | {
"year": 2010,
"sha1": "c65f2e8bd550bdd2a3150800298e78957f3fb782",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0039-1697869.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c65f2e8bd550bdd2a3150800298e78957f3fb782",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14696821 | pes2o/s2orc | v3-fos-license | AdS/CFT Duals of Topological Black Holes and the Entropy of Zero-Energy States
The horizon of a static black hole in Anti-deSitter space can be spherical, planar, or hyperbolic. The microscopic dynamics of the first two classes of black holes have been extensively discussed recently within the context of the AdS/CFT correspondence. We argue that hyperbolic black holes introduce new and fruitful features in this respect, allowing for more detailed comparisons between the weak and strong coupling regimes. In particular, by focussing on the stress tensor and entropy of some particular states, we identify unexpected increases in the entropy of Super-Yang-Mills theory at strong coupling that are not accompanied by increases in the energy. We describe a highly degenerate state at zero temperature and zero energy density. We also find that the entanglement entropy across a Rindler horizon in exact AdS_5 is larger than might have been expected from the dual SYM theory. Besides, we show that hyperbolic black holes can be described as thermal Rindler states of the dual conformal field theory in flat space.
Introduction
The correspondence between string theory in Anti-deSitter (AdS) space and conformal field theory (CFT) [1,2,3] provides a powerful basis for the study of the microscopic statistical mechanics of black holes. In this framework, a black hole in AdS is described as a thermal state of the dual conformal field theory 1 . The latter is defined on a background geometry that is conformally related to the geometry at the boundary of the AdS space. If we want to work in a regime where the supergravity approximation to string theory is reliable, then the dual CFT has to be strongly coupled. The aim of this paper is to develop the duality for a class of black holes peculiar to AdS space, that will exhibit new and remarkable features.
It is known that the presence of a negative cosmological constant allows for more varied types of horizon geometries than in asymptotically flat situations. In AdS the horizon of a black hole can have positive, zero, or negative curvature. These are spherical, planar or hyperbolic black holes, respectively. In four dimensions it is possible to construct horizons of arbitrary topology by modding out discrete isometry groups. This is the origin of the name "topological black holes." We keep this name, even if it will be somewhat of a misnomer since we will not be considering identifications under discrete isometries. Nevertheless, that is something that could be implemented in a straightforward manner.
The microscopic study of planar black holes within string theory can be traced back to the discussion in [4] of the statistical mechanics of black D3-branes. As this system is understood now, the planar black hole in AdS 5 is dual to a thermal state of N =4 supersymmetric Yang-Mills (SYM) theory in four dimensional Minkowski space, with gauge group SU(N), in the large N limit and at a large value of the 'tHooft coupling g 2 Y M N. We have very limited knowledge of gauge theory in such a strong coupling regime, but the results that follow from calculations using AdS supergravity appear to be remarkably close to what we are able to compute using free field theory. The AdS 5 /SYM pair is the most studied case, but for other dimensions we know that the temperature dependence of the dual field theories is determined by conformal invariance, and this behavior is indeed reproduced by planar black holes [5,6].
Spherical black holes, on the other hand, present a different qualitative feature, namely, a phase transition at finite temperature [7]. As observed in [3,6] this phase transition fits in nicely with our expectations of a confining phase at low temperatures for large N theory on a spatial sphere. This is remarkable. Confinement, however, is a phenomenon well beyond the reach of perturbative field theory. In the present paper, instead, we will be more interested in situations where we can have some hope of connecting the weak and strong coupling regimes.
Hyperbolic black holes in the AdS/CFT context have received comparatively little attention. It was observed from their thermodynamics that the dual field theories, defined on a spatial hyperboloid, should have no phase transitions as a function of temperature [8,9]. At any nonzero temperature the theory is in a deconfined phase, and would appear to be free from drastic changes of degrees of freedom as the coupling is increased. On the other hand, the presence of the length scale coming from the curvature of the hyperbolic space introduces a structure richer than in the case of flat space. There are two additional features of interest. One of them is the fact that the ground state is in general different from the solution that is locally isometric to AdS. In fact, the latter is a solution at finite temperature, with non vanishing entropy, whose origin is due to the presence of a non-degenerate (bifurcate) acceleration horizon in AdS. A second aspect of interest is that the boundary geometry is conformal to Rindler space. It follows that hyperbolic black holes admit a dual description as thermal Rindler states of the CFT in flat space.
Perhaps the most startling consequence of our study will be that in the strong coupling regime we are able to identify larger entropies than would be expected from the CFT side. The first example of this is the ground state in the infinite coupling regime, which is shown to possess a large degeneracy, even if it is a zero-temperature, zero-energy density state. The next example we describe is a supergravity state that is locally isometric to AdS 5 , with an entropy that turns out to be larger than expected from the calculation at weak coupling. Moreover, the increase in the entropy is not accompanied by an increase in the energy of the state. We therefore find a common thread in these results, which would appear to point to the possibility that SYM theory requires the presence of states that can give rise to an entropy, but do not contribute to the local energy density. Curiously, states with precisely these properties have been postulated from a different analysis of the AdS/CFT correspondence [10], where the issue of causality in scattering processes was studied. Although it is probably too soon to discard other alternatives, it would be really exciting if the two phenomena were related.
The layout of the paper is as follows: Section 2 introduces the black holes under consideration, and their quasilocal stress-energy tensor and entropy are presented. Part of these results had been obtained in [9,11]. In section 3 we provide a review of the dual CFT description of planar and spherical black holes with the focus on the aspects that will change when we look at hyperbolic black holes. The supergravity and field theoretical descriptions of the latter are the subject of detailed comparison in section 4. Section 5 develops the description of hyperbolic black holes as Rindler states of the dual CFT in flat spacetime. In section 6 we address the issue of finite coupling corrections. Finally, we discuss in section 7 the possible identification of exotic states from this analysis.
Topological black holes
Our subject in this paper will be the following class of black hole solutions in AdS_{n+1}, specified by a mass parameter µ and an (n−1)-dimensional metric dΣ²_{k,n−1}, where dΩ²_{n−1} is the unit metric on S^{n−1} and dH²_{n−1} denotes the "unit metric" on the (n−1)-dimensional hyperbolic space H^{n−1}.
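For concreteness, a form of these solutions consistent with the notation used in the rest of the paper (the parameters µ, l, k and the horizon radius r_+) is, presumably,

$$ ds^{2} \;=\; -f(r)\,dt^{2} + \frac{dr^{2}}{f(r)} + r^{2}\,d\Sigma^{2}_{k,n-1}, \qquad f(r) \;=\; k - \frac{\mu}{r^{\,n-2}} + \frac{r^{2}}{l^{2}}, $$

with dΣ²_{k,n−1} given by dΩ²_{n−1} for k = +1, a flat metric (e.g. l^{−2} dx_i dx^i) for k = 0, and dH²_{n−1} for k = −1. This is offered as a standard reconstruction, not as a quotation of the original display equations.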
The solutions for k = +1 are sometimes called "Schwarzschild-AdS" solutions: They reduce to the standard Schwarzschild solution when the cosmological constant vanishes, l → ∞, and to AdS in global coordinates when µ = 0. Moreover, their topology is IR 2 × S n−1 , and the horizon is the sphere S n−1 , like that of the Schwarzschild solution. The case k = 0 makes appearance when considering the near-horizon limit of (non-dilatonic) p-branes. Their horizon has the geometry of IR n−1 , which can be periodically identified to give horizons of toroidal topology, although we will not consider such possibilities. Both the k = +1 and the k = 0 cases have been extensively studied recently in the context of the AdS/CFT correspondence.
By contrast, the class of hyperbolic solutions k = −1 have received comparatively less attention. They have been studied mostly in four dimensions, where, together with the other two classes, they can be used to construct black holes with horizons of arbitrary topology: if the hyperbolic space H 2 is identified under appropriate discrete subgroups of the isometry group, then all the closed Riemann surfaces of genus higher than 1 can be generated [12]. A similar result holds for five-dimensional black holes [9], as follows from the fact that an arbitrary compact three-manifold of constant curvature can be constructed as a quotient of a universal covering space of positive, zero or negative curvature. This is the origin of their denomination as "topological black holes." Their appearance in M-theory and in the context of the AdS/CFT correspondence was first discussed, in four dimensions, in [8]. In higher dimensions they have been studied first in [9].
The temperature of these black holes is determined in the standard (Euclidean) manner as where r + is the horizon radius. This relation can be inverted to find which allows us to take β as the parameter that determines the solution 2 . Notice that in the limit where r + ≫ l the k = ±1 classes of solutions approach the planar black hole class k = 0. This admits an interpretation in terms of an "infinite volume" limit, in which the curvature radius of S n−1 or H n−1 is much larger than the thermal wavelength of the system [6]. At this point it is worth recalling that the solutions for µ = 0 are all isometric to AdS n+1 , and therefore can be locally transformed into one another by a simple redefinition of coordinates 3 . However, there are non-trivial differences between these parametrizations. The metric with µ = 0, k = +1 describes AdS in global coordinates, whereas k = 0 describes the Poincaré (or horospheric) parametrization of AdS. The latter describes a wedge of AdS, since the coordinate system breaks down at r = 0, see Fig. 1a. This coordinate singularity corresponds to a degenerate Killing horizon. This means that, in contrast to bifurcate Killing horizons, there is no temperature associated to it. Besides, its area vanishes. Then, a common feature of AdS in both its k = +1 and k = 0 forms is the vanishing of entropy and temperature. They are to be thought of as the ground states of their respective classes of solutions.
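For reference, requiring regularity of the Euclidean section at r = r_+ in a metric of the form sketched above gives the presumable content of these relations (a reconstruction under that assumption, not a quotation):

$$ \beta^{-1} \;=\; T \;=\; \frac{f'(r_+)}{4\pi} \;=\; \frac{n\,r_+^{2} + (n-2)\,k\,l^{2}}{4\pi\,l^{2}\,r_+}, \qquad r_+ \;=\; \frac{2\pi l^{2}}{n\,\beta}\left(1+\sqrt{1-\frac{n(n-2)\,k\,\beta^{2}}{4\pi^{2} l^{2}}}\,\right). $$

One checks that for k = −1 and r_+ = l (the µ = 0 case) this gives β = 2πl, as used below.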
The solution with µ = 0, k = −1, introduces a difference here. While isometric as well to AdS, it covers a smaller portion of the entire manifold, as the coordinate patch breaks down at r = r_+ = l, see Fig. 1b. However, in contrast to the horizon in Poincaré coordinates, the horizon in this case is analogous to a Rindler horizon. There is an associated inverse temperature, β = 2πl, and it has non-vanishing area. One should note that, among the k = −1 class of black hole solutions, the one that is isometric to AdS is not properly a black hole. It is completely non-singular, and in the absence of identifications it does not possess an event horizon. By contrast, the solutions with µ ≠ 0 possess a singularity at r = 0.
For the k = −1 class of black holes, and in contrast to the k = +1, 0 classes, the zero temperature solution is different from the one that is isometric to AdS. In fact, for k = −1 there is a range of negative values for µ such that the solutions still possess regular horizons. The minimum values of µ and r + that are compatible with cosmic censorship, for which the horizon is degenerate, are and, in particular, For these values of the parameters, the black hole is extremal. The Penrose diagram for a hyperbolic black hole with negative µ is like that of a Reissner-Nordström-AdS black hole. For positive µ it is instead like that of a Schwarzschild-AdS black hole [12]. We now want to evaluate the thermodynamic functions for the solutions (1). In particular, if we have the quasilocal stress-energy tensor, which is defined on the boundary of a region of spacetime [15], as a function of the temperature then we can compute all other thermodynamic functions such as the energy or entropy. Recently, a prescription for computing the quasilocal stress tensor of a solution in AdS space has been proposed which appears to capture all of the information relevant to the dual field theory [16]. In this prescription, regularization does not proceed by the traditional subtraction of similar divergences from a reference state to which the solution is asymptotically matched. Instead, in the regularization proposed in [16] divergences are removed by subtraction of local counterterms at the boundary, in a manner closely analogous to the subtraction of divergences in field theory in curved spacetimes. As such, it appears to be particularly suitable for constructing the stress tensor of the dual CFT starting from a supergravity solution (see also [17]). This technique has been extended and generalized in [11] to all the dimensions of relevance for string/M-theory.
The metric on the boundary of AdS, h µν (µ, ν = 0, . . . , n − 1), is conformally related to the background metric of the field theory γ µν . The conformal factor diverges near the boundary. By the AdS/CFT correspondence, the quasilocal stress tensor for AdS supergravity τ µν can be translated into the expectation value of the stress tensor of the dual field theory T µν , in the strong coupling regime, as [17] where the limiting approach to the boundary is assumed.
For the cases at hand the calculation of τ µ ν is straightforward. The appropriate conformal factor is (h/γ) 1/2 = (r/l) n (see eq. (14) below) and we obtain 4 where (see [11]) and ǫ 0 k = 0 for odd n. It is worth noting that the form of this stress tensor is that of a thermal gas of massless radiation.
For the particular case of AdS 5 (and any k) it will be useful to note that the result can be written in a compact form as The energy, given as a function of temperature through (5), can be read from (9) as with V n−1 the volume of dΣ 2 k,n−1 , i.e., the spatial volume of the field theory. With E as a function of the temperature we can apply standard thermodynamic formulae to compute the entropy of the solution, which satisfies, as expected, the Bekenstein-Hawking area law. We could equally well have computed the Euclidean action of the solutions and in this way obtain β times the free energy F , from which the same values of E and S are recovered [11].
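For orientation, the energy and the entropy of these solutions are expected, on general grounds, to take the forms

$$ E \;=\; \frac{(n-1)\,V_{n-1}}{16\pi G}\,\mu \;+\; E_{\rm Casimir}, \qquad S \;=\; \frac{A_{\rm hor}}{4G} \;=\; \frac{V_{n-1}\,r_+^{\,n-1}}{4G}, $$

where the Casimir-like piece is proportional to the constant ε⁰_k mentioned above (and vanishes for odd n). The normalization of the µ term is the standard one and should be read as an assumption here rather than as a quotation of the original displayed equations; the entropy is simply the Bekenstein-Hawking area law referred to in the text.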
CFT duals of spherical and planar black holes-a brief review
The AdS/CFT correspondence states that the full non-perturbative dynamics of quantum gravity in a space that is asymptotic to AdS n+1 can be formulated in terms of a dual conformal field theory defined on the n-dimensional causal boundary of the bulk spacetime. As discussed in detail in [11], the issue of what is the geometry of the boundary of a given solution is, to some extent, open, since it depends on how the spacetime is sliced radially as one approaches the boundary. As an example, it was explicitly shown in [11] how the boundary of (Euclidean) AdS n+1 can be chosen to be S^n, IR^n, H^n, IR × S^{n−1}, IR × H^{n−1}, and several other geometries. We see then that the duals of AdS quantum gravity are in general conformal field theories defined on curved backgrounds with fixed geometry. More specifically, in the coordinates chosen in (1), the metric at the boundary, as r → ∞, is of the form (14). The background spacetime for the dual field theory, γ_µν, is conformally related to this one, and the conformal factor can be chosen to cancel the divergent factor r²/l² in (14), γ_µν = lim_{r→∞} (l²/r²) h_µν. In this way, the k = +1, 0, −1 black holes admit a dual description in terms of a CFT on, respectively, IR × S^{n−1}, IR^n, and IR × H^{n−1}; these are otherwise known as the Einstein universe, Minkowski spacetime, and the static open universe. However, it should be clear as well that by slicing, say, the k = ±1 solutions in an adequate way, the spherical and hyperbolic black holes can be described as states of the field theory on Minkowski space. This can be achieved more simply by choosing adequately the conformal factor between h_µν and γ_µν, see [18] for an example. We will make use of this idea later in section 5.
The case of k = 0 is particularly simple since, in the absence of any scale other than the thermal wavelength, conformal invariance, together with staticity and homogeneity of the space, determines the stress tensor of the CFT to take the form The energy and entropy follow as The factor σ sb is the Stefan-Boltzmann constant, which is determined by the precise field content of the CFT, and grows with the number of degrees of freedom of the theory. We will give it below for the cases of interest. As observed in [5,3], for planar (k = 0) black holes r + ∼ β −1 , so the CFT thermodynamic functions (15), (16), agree with their AdS black hole counterparts (9), (12), (13) up to the Stefan-Boltzmann factors (notice that for k = 0, ǫ 0 k = 0). If one wants to make this equivalence more precise and try to compare the precise Stefan-Boltzmann factors, then a specific dual field theory has to be supplied. String/M-theory provides duals for AdS n+1 , n = 2, 3, 4, 6, as the CFTs describing the world-volume dynamics of stacks of parallel (D1+D5)-, M2-, D3-, M5-branes. The dictionary for translating AdS/CFT quantities reads where N is the number of parallel branes. The powers of N displayed above are measures of the number of "unconfined" degrees of freedom: for AdS 5 , N is the rank of the gauge group of the dual N =4 supersymmetric four dimensional SU(N) Yang-Mills theory. For AdS 3 , c is the central charge of the dual CFT in two dimensions; however, since there are no k = −1 black holes in AdS 3 we will not deal with this case any longer. Note that for generic number of dimensions, the entry in the dictionary can be expected to be Let us focus now on the pair AdS 5 /(N =4 SYM), in a discussion which can be traced back to [4]. Using (17), the results from (12) and (13) for k = 0, n = 4, become On the other hand, it is a standard result from free field theory at finite temperature that the factor σ sb in four dimensional thermal Minkowski space for fields of different spin is where n 0 is the number of (real) scalars, n 1/2 is the number of Weyl (or Majorana) fermions, and n 1 the number of gauge vectors. For N =4 SU(N) SYM at large N, By plugging these values into (20) we find σ sb = π 2 N 2 /2, which leads to the well-known result [4] that The SYM result is obtained by computing one-loop vacuum diagrams, i.e., it is the leading term in a perturbative expansion in the 't Hooft parameter g 2 Y M N. By contrast, the supergravity approximation, on which the AdS black hole result is based, is reliable only for large g 2 Y M N. The mismatch in (22) is therefore interpreted as a strong coupling effect. An argument for why the entropy should change only by a numerical factor of order one has been given in [19].
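For reference, the standard free-field counting that reproduces the value σ_sb = π²N²/2 quoted above is, presumably,

$$ \sigma_{\rm sb} \;=\; \frac{\pi^2}{30}\left(n_0 + \tfrac{7}{4}\,n_{1/2} + 2\,n_1\right), $$

counting one degree of freedom per real scalar, two per Weyl fermion (weighted by the fermionic factor 7/8), and two per gauge vector; with the N = 4 content n_0 = 6N², n_{1/2} = 4N², n_1 = N² this indeed gives (π²/30)(6 + 7 + 2)N² = π²N²/2. This reconstruction of the counting formula is an assumption consistent with the numbers quoted in the text.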
Let us comment on two aspects of (22). The first one is that the values for the energy and entropy at strong coupling are smaller than their perturbative values. As a matter of fact, as noted in [4], the result for E_BH would agree with a perturbative calculation if, for some reason, at strong coupling we had effectively n_0 = 6N², n_{1/2} = 3N², n_1 = 0, i.e., if only the scalar multiplets contributed to the free energy, whereas the fields in the (N =1) vector multiplet could not be excited. There is therefore a reduction in the effective number of degrees of freedom at strong coupling. The second aspect we want to emphasize, for reasons which will be better appreciated later, is that both the energy and the entropy are reduced by the same factor 3/4. That this should happen is a consequence of the fact that the temperature dependence E ∼ β^{−4} is the same at both strong and weak coupling, since it is fixed by conformal invariance. Therefore, even if some degrees of freedom may get frozen at strong coupling, it appears that all the states that contribute to the entropy also make a contribution to the energy of the system.
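For concreteness, this counting can be checked with the standard blackbody weights (one per real scalar, 7/4 per Weyl fermion, 2 per gauge vector, all multiplied by π²/30); the arithmetic below is an illustration, not the paper's own equations:

$$ \frac{E_{\rm SYM}}{V} = \frac{\pi^2}{30}\Bigl(6 + \tfrac{7}{4}\cdot 4 + 2\Bigr)N^2\beta^{-4} = \frac{\pi^2}{2}\,N^2\beta^{-4}, \qquad \frac{\pi^2}{30}\Bigl(6 + \tfrac{7}{4}\cdot 3\Bigr)N^2\beta^{-4} = \frac{3\pi^2}{8}\,N^2\beta^{-4} = \frac{3}{4}\,\frac{E_{\rm SYM}}{V}, $$

i.e. keeping only the scalar-multiplet content n_0 = 6N², n_{1/2} = 3N², n_1 = 0 reproduces exactly the 3/4 reduction found at strong coupling.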
For the cases of AdS 4 and AdS 7 , the dual conformal field theories of N parallel M2-and M5-branes are poorly known, and as a consequence it is impossible at present to discuss these cases in the same detail as the AdS 5 /SYM pair.
Overall, we can say that the qualitative aspects of the AdS/CFT duality for planar black holes are fairly well understood, and in particular for AdS 5 the free field theory seems to capture a good deal of the thermodynamics at strong coupling.
Turn now to k = 1, i.e., spherical black holes and their dual CFTs, which according to (14) are naturally defined on spatial spheres S n−1 . This introduces a length scale l in the theory. As it happens, in this instance there appears a phenomenon that is absent from planar (and hyperbolic) AdS black holes. The thermodynamic analysis of the black hole solutions reveals a phase transition at finite temperature between the state corresponding to µ = 0 (global AdS) and the (large) black hole phase [7,3,6]. The low temperature phase (global AdS) is interpreted as a "confined" phase [3,6]. This phenomenon, although expected from generic considerations, can not be seen from a perturbative analysis of the field theory. Therefore, even if results for conformal fields on S 1 × S n−1 at a perturbative level (free field theory) are available [20], which can be employed to compute E CF T (β), they can not be expected to provide us with any information about the strongly coupled regime, at least at low temperatures: the phase transition throws us into a region where perturbative field theory is useless.
Nevertheless, there is one result that can be meaningfully compared, namely, the Casimir energy associated to the field theory on IR×S 3 . The dual supergravity solution is AdS 5 in global coordinates, which is protected from strong coupling (string α ′ ) corrections [21]. Moreover, the Casimir energy is essentially determined by the central charges of the N = 4 SYM theory, which receive no higher loop corrections [22]. Indeed, it has been proven that the result from free field theory matches precisely the AdS calculation [16].
AdS/CFT duality for hyperbolic black holes
Hyperbolic black holes share with planar black holes the property that they do not exhibit phase transitions at finite temperature. At any temperature the phase structure is dominated by a black hole. Then, the dual field theory at strong coupling is expected to remain in an unconfined phase [8,9] 5 . Therefore, even if interactions are expected to introduce modifications (as was the factor 3/4 in (22) for planar black holes), we can hope to be able to extract valuable information by trying to connect the weakly coupled and strongly coupled regimes.
An important feature of hyperbolic black holes is that the curvature of the hyperbolic space H n−1 introduces a new scale into the field theory, and as a result the temperature dependence is not fully fixed by conformal invariance. The case of flat space is contained in this class of black holes as a limit (as was also for spherical black holes), which can be characterized as the high temperature limit. At any other temperatures the thermodynamic functions are more complicated, and encode more information than in the case of flat spacetime. In particular, the relationship between energy and entropy is not as simple as in (16), and the thermodynamic magnitudes become more sensitive to the field theory content. Furthermore, we will be able to crucially exploit a novel feature, absent from the other two classes of black holes. As mentioned above, there is one particular state, the one corresponding to µ = 0 (i.e., r + = l, β = 2πl), which is isometric to AdS. Since AdS 5 (×S 5 ) is an exact string state, protected from corrections in the 'tHooft coupling g 2 Y M N, results at perturbative level can be extrapolated to strong coupling. When we write AdS with the hyperbolic slicing the situation is, however, interestingly non-trivial, since the k = −1 description of AdS does not cover all of the spacetime, rather only a wedge. Accordingly, in a computation of, say, the 5 It might be worth noting the following difference: Both for the planar and the hyperbolic systems the free energy at any non-zero temperature goes like F ∼ N 2 (for AdS 5 /SYM). For planar black holes, lim β→∞ βF = 0, and one could say that the phase transition takes place at zero temperature, where the supergravity state is AdS 5 . In contrast, in the hyperbolic case the phase at T = 0 is still a black hole (the extremal one), and, as we will see below, lim β→∞ βF ∼ N 2 . partition function of the theory, states that lie outside this wedge are traced out, and will give rise to an entropy, sometimes called "entanglement entropy" [23]. More precisely, on a Cauchy surface in the k = −1 patch, such as shown Fig. 1b, the data to the left of the Einstein-Rosen bridge are traced out. On the supergravity side, this entropy appears as an entropy associated to the acceleration horizon. On the field theory side we can compute the entropy of states on a hyperbolic space. The detailed comparison of these quantities will only be possible for AdS 5 /SYM, so we will devote most of the section to this case. Other sides of the relation to acceleration horizons will appear in section 5.
AdS 5 /SYM on a hyperboloid
Let us then start by translating the strong coupling, black hole results of section 2 into field theory language by using (17), focusing on the AdS 5 /SYM dual pair. From eqs. (11), (5) we find, for the stress-energy tensor of strongly coupled SYM on hyperbolic space at finite temperature In this geometry, the energy is equal to E = − d 3 x T 0 0 . On the other hand, from (13), the entropy is It is straightforward to see that in the high temperature limit β → 0 we recover the results for flat space (19).
As explained, there are two states of particular interest. One is the extremal, zero temperature (β → ∞) black hole (7), and the other is the solution isometric to AdS (β = 2πl). For the first one we find and Notice that the energy for this state, and actually the entire stress tensor, is zero, so it seems appropriate to identify it with the ground state of the theory. Nevertheless, its entropy does not vanish, a surprising fact that was noted in [11] and which we will discuss below. For AdS 5 in the hyperbolic slicing, i.e., the state at β = 2πl, and Turning now to the weakly coupled regime, we will make use of results obtained in [24] for the stress tensor of conformal fields in S 1 × H 3 . The essential input in the computation is the density of eigenvalues of the wave operator in H 3 for fields of different spins. If h(s) is the number of helicities of the spin s field, and n s is the number of such fields, then, for s = 0, 1/2, 1, one gets 1, 1, 1) .
The integrals can be performed explicitly 6 , and with h(0) = 1, h(1/2) = h(1) = 2 we find Having the energy E(β), the entropy can be computed by using the first law of thermodynamics, with the result In the high temperature limit β → 0 the results from the previous section for flat spacetime are recovered. However, attention should be drawn to the mixing of temperature dependences in (30) and (31). In contrast to the simple flat space dependence (16), which would still hold if only scalar fields were present, the presence of higher spin fields introduces a sensitivity to the curvature of the space. This is reflected in the β 2 term inside brackets, which has a different factor in (30) and (31).
We specialize now to the field content of large N SU(N) N =4 Super Yang-Mills theory, (21), to find Compare now these results with the ones in the strongly coupled regime, eqs. (23), (24). It is apparent that the dependence on the temperature is rather different, and in fact both come to agree only at high temperatures, where we recover the same relationship as in (22). For the ground state at zero temperature, the results are as expected for a conventional ground state. On the other hand, for the state at β = 2πl we get It is immediate to notice that for both the ground state and the state at β = 2πl the energy computed using free field theory agrees with the results in the strong coupling (supergravity) regime, eqs. (27), (25).
As a matter of fact, not only in supergravity but also in the field theory on S 1 × H n−1 the state at β = 2πl is singled out among states at other temperatures: it can be formally obtained from the vacuum of the Einstein universe IR × S n−1 by "thermalization at imaginary temperature" T = (2πil) −1 [24]. The two calculations of (a) the Casimir energy on IR × S 3 [16], and (b) the energy of the state at β = 2πl on S 1 × H 3 , eqs. (35) and (27), are in this light seen as the result of formally equivalent calculations. This is reflected in the fact that both follow from the same central charge of the field theory.
Despite the agreement for the energies of these states, the results for the entropy obtained from supergravity are both different from the one-loop field theory results. Eq. (26) is telling us that at infinite 'tHooft coupling there is a large degeneracy for the state at zero temperature and zero energy density. Such ground state degeneracies are highly unusual. The mismatch in the entropy for the state at β = 2πl is not less unexpected. As we had remarked, this state is described in the bulk of AdS as a wedge of the full AdS 5 spacetime. We shall argue in sec. 6 that not only the energy, but also the entropy of this state would have been expected to be protected from corrections in the coupling. Nevertheless, we find at strong coupling an entropy larger than that obtained from field theory at the lowest perturbative order. The relationship between both is simple, This is in stark contrast to the situation for planar black holes, where there is an effective reduction at strong coupling in the number of states available to the gauge theory. Here we find instead an enhancement, but one that affects only the entropy, not the energy 7 .
It can be readily checked that the values of the energy at weak and strong coupling agree only for the two values of the temperature β = 2πl, ∞. Indeed, we would not have expected agreement at any other temperature, due to strong coupling corrections. It is therefore difficult 7 It is amusing to observe, although we do not mean to attach too much significance to this remark, that the mismatch between entropies at β = 2πl, eq. (37), could be remedied by assuming that, at that particular temperature, there were 4N 2 additional chiral multiplets, δn 0 = 8N 2 , δn 1/2 = 4N 2 , contributing to the entropy but not to the stress tensor.
to meaningfully make comparisons of the entropy expected at different temperatures. Notice, however, that, at any temperature, with equality only at infinite temperature. Although the different temperature dependence of different fields makes it difficult to take this too literally, this inequality would suggest that at strong coupling there appear to be more states contributing to the entropy than those that contribute to the energy. One last magnitude which is interesting for comparison purposes is the specific heat, since it measures the response (susceptibility) of the degrees of freedom to thermal excitation. We find For the different temperatures of interest these become while at high temperature we recover the flat space result in which the specific heat grows with the characteristic four-dimensional dependence ∼ β −3 = T 3 . At low temperatures it is the spin-1/2 and spin-1 fields which dominate the specific heat, at least at weak coupling (see (40)). The curvature of H 3 , to which these fields are sensitive, makes them more susceptible of being excited and as a consequence the specific heat grows faster, as C ∼ T instead of T 3 . Remarkably, this is also the behavior we find at strong coupling, where we do not know how to separate the contributions from different sets of degrees of freedom. This suggests that, at low temperatures, the degrees of freedom at strong coupling are not too dissimilar in nature from those that operate at weak coupling. Curiously, the precise numerical factor is off by the same fraction 4/3 as at high temperatures.
Finally, at β = 2πl the specific heats are exactly the same in the strongly coupled and weakly coupled regimes. It would appear that even if extra states are present which contribute to the entropy (albeit not to the energy), the susceptibility to thermal excitation is still dominated by those states that make up the energy density.
Other dimensions
The results (12) and (13) for the energy and entropy of topological black holes in section 2 can be expressed in terms of the temperature using (5) and then converted into expressions for field theory at strong coupling using the dictionary (17) or, more generally, (18). In turn, it is possible as well to compute the corresponding quantities for free fields on hyperbolic space at finite temperature. The contribution from a spin s field to the energy on S 1 × H n−1 is obtained as where h(s) is, as before, the number of physical states (helicities) for each field, ω n−1 is the volume of the unit (n−1)-sphere, the − (+) sign in the denominator applies to boson (fermion) fields, and µ s (λ) measures the degeneracy (density) of eigenvalues of the wave operator on the hyperbolic space. It is known in the mathematical literature as the Plancherel measure. The latter has been computed for hyperbolic spaces in arbitrary dimensions for a wide variety of fields. To quote, for H N , for real scalars [25], (45) For spinors [26], (46) For (co-exact) p-forms, h(s) = (N −1)! p!(N −p−1)! (which must be halved for self-dual forms), and [27] (this contains the scalar case for the value p = 0, and gauge vectors for p = 1). Using these results 8 , we can compute for the free field content of a single M2-brane, i.e., an N = 8 supermultiplet in d = 3, dλ λ 2 n 0 tanh πλ e βλ/l − 1 + n 1/2 coth πλ e βλ/l + 1 , with n 0 = 8 and n 1/2 = 8. It is not possible to give the results of the integrations in closed form for arbitrary values of β, although they simplify for β = 2πl. However, it is easy to see that these results bear little resemblance to the ones that follow from supergravity calculations. Indeed, from (48) one easily sees that whereas the strong coupling calculation would yield Notice that not only the entropy but also the energy of this state is different from zero. Indeed, as noted in [11], for AdS 4 (and in fact for all even d AdS d ) it is the state at β = 2πl that has zero energy. And nevertheless it has non-zero entropy. These results for AdS 4 are even more striking than those for AdS 5 : the state that is isometric to AdS 4 is a state at finite temperature and with non-zero entropy, which nevertheless has zero energy density! This looks markedly different from conventional field theory. Moreover, there appear states, namely, those for µ e ≤ µ < 0, with total negative energy. The meaning of these is unclear.
For the free field content of the (2, 0) superconformal theory on a single M5-brane (i.e., the d = 6 tensor supermultiplet), the corresponding integral is

\[
\int_0^\infty d\lambda\,\lambda\left[\frac{5\,\lambda^2(\lambda^2+1)}{e^{\beta\lambda/l}-1}+\frac{8\,(\lambda^2+\tfrac14)(\lambda^2+\tfrac94)}{e^{\beta\lambda/l}+1}+\frac{3\,(\lambda^2+1)(\lambda^2+2)}{e^{\beta\lambda/l}-1}\right]
= \frac{V_5\,\pi^3}{1440\,\beta^6}\left(80+84\,\frac{\beta^2}{\pi^2 l^2}+55\,\frac{\beta^4}{\pi^4 l^4}\right).
\]

This free field theory expression has nothing exotic about it. As the temperature goes to zero, the energy vanishes, and so does the entropy, as can easily be checked. However, the results at strong coupling from the AdS calculations are puzzling: neither the state at zero temperature nor the one at β = 2πl has zero energy, and both, moreover, have non-zero entropy. It appears quite likely that in all these cases exotic states that contribute to the entropy but not to the energy would be required to account for the black hole entropy. However, and in contrast to the case of AdS_5/SYM, for AdS_4 and AdS_7 free field theory yields little useful information.
Dual CFT description in Rindler space
In the previous section we have chosen to describe hyperbolic black holes in terms of the theory on IR × H^{n-1}. However, as noted at the beginning of section 3, it is possible to perform the entire description in terms of the theory on flat space. All one has to do is slice the AdS black hole spacetime in such a way that, near the boundary, the constant-radius sections are flat. Let us start by showing how this can be done explicitly for the solutions that are locally isometric to AdS_{n+1}. Starting from Poincaré coordinates and leaving the x^i unchanged, the change of coordinates (53) brings AdS into hyperbolic (k = −1) coordinates (the spatial hyperboloid H^{n-1} is parametrized here in horospheric coordinates). The Killing horizon at r = l is mapped onto the null surfaces u, v = 0. Now notice that as the boundary is approached (both radial coordinates → ∞) the transformation (53) between boundary coordinates becomes precisely the transformation between Minkowski and Rindler coordinates. Indeed, as we approach the boundary the induced metric approaches a conformal multiple of ds^2_R, the metric on Rindler space (with time rescaled by l). The conformal factor that effects the change between IR × H^{n-1} and the flat (Rindler) space at the asymptotic boundary is l^2/ζ^2.
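For orientation, the Minkowski-Rindler change of coordinates with the time rescaling used in the text can be written in its standard textbook form (the symbols τ and ζ follow the text; this is the generic transformation, not a formula quoted from the paper):

\[
t = \zeta\,\sinh(\tau/l),\qquad x = \zeta\,\cosh(\tau/l)
\quad\Longrightarrow\quad
-dt^2 + dx^2 = -\frac{\zeta^2}{l^2}\,d\tau^2 + d\zeta^2,
\]

with the remaining flat directions untouched; the acceleration horizon x = ±t corresponds to ζ = 0.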
Therefore AdS in hyperbolic coordinates corresponds, in the description in terms of a field theory in flat space, to the Rindler state of the CFT at β = 2πl. This is, of course, the Rindler description of the Minkowski vacuum. For the Rindler observer, the latter is a mixed thermal state described by a density matrix.
This relationship between the solutions that are isometric to AdS, and their corresponding states in the dual field theories, is entirely analogous to that existing between BTZ black holes and AdS 3 and the corresponding states in the dual 1 + 1 CFTs [29], except for the fact that we have not performed discrete identifications.
For black holes with µ ≠ 0 it is not simple to find a global coordinate transformation that effects the change from the hyperbolic to the flat space description. However, we only need the conformal factor that transforms the boundary geometries near infinity, and we can then use it to transform the stress tensors as in eq. (8). This was the procedure followed in [18] to find a description of spherical black holes in terms of the SYM theory in Minkowski space. We can do the same thing here using the conformal factor l^2/ζ^2, which according to eq. (57) takes us to a Rindler space geometry at the boundary.
We conclude then that, in the dual field theory on flat space, hyperbolic black holes correspond to thermal Rindler states at inverse temperature β. The one at β = 2πl is singled out as corresponding to the Poincaré invariant vacuum in Minkowski space. Other states are described, in imaginary time, using geometries with a conical singularity at ζ = 0. Notice, however, that there is no conical singularity in the description on IR × H^{n-1}.
Given the conformal factor between hyperbolic space and Rindler space, the stress tensor in the latter is constructed by a simple rescaling (we are implicitly using the fact that the trace anomaly vanishes for both spaces). As a matter of fact, the simple conformal relationship between Rindler and hyperbolic space has been put to use before in order to solve one of them from knowledge of the other [24] (see also, e.g., [30,31] and references therein). Conventionally, we have subtracted a β-independent tensor term in (58) so that the stress tensor vanishes at β = 2πl. The reason is that, since T_{µν} is a tensor that vanishes in the Minkowski vacuum, it should vanish in that state in any other coordinate system. Recall that the Minkowski vacuum is the global state of minimum energy, which realizes the full Poincaré symmetry of the theory. The Rindler vacuum (the state for β → ∞) can have lower energy because the minimization of the energy in Rindler space is constrained only by a subgroup of the Poincaré symmetries. The divergence of the vacuum energy density as ζ → 0 is only to be expected, since the vacuum is made to accelerate infinitely hard at that point. The construction (58) works the same way at weak and strong coupling. Since at β = 2πl the stress tensor T^µ_ν (Hyper) is the same in both regimes, the negative energy of the Rindler vacuum is the same at zero and infinite coupling. Moreover, the subtraction of such a quantity does not affect the calculation of the entropy. Therefore, the subtraction in (58) does not introduce any significant modification in our discussion. The entropy density s in Rindler space is likewise obtained by rescaling the entropy density in hyperbolic space. Notice that the finite entropy density of the Minkowski vacuum is not to be subtracted, since it is physically relevant as entanglement entropy. A Minkowski (global) observer assigns zero entropy to this state. An accelerating observer, however, detects quantum fluctuations of the vacuum (they appear to him as thermal fluctuations) and is sensitive to the vacuum activity of the global ground state. The Rindler entropy density at β = 2πl therefore yields a measure of the states that are subject to quantum fluctuations in the global vacuum. The discussion of the previous section can now be couched in terms of statements about the energy and entropy of the CFT in Rindler space. We therefore find that the Rindler vacuum is, at infinite coupling, highly degenerate. On the other hand, a Rindler observer accelerating in the Minkowski vacuum measures an entropy density for SYM larger than would have been expected from the weak coupling calculation.
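One way to write such a rescaling explicitly, as a sketch consistent with the description in this paragraph (the exponent counting assumes only that the boundary theory lives in n spacetime dimensions and that the stress tensor is traceless; this is not necessarily identical to the paper's own eq. (58)):

\[
T^{\mu}{}_{\nu}\big|_{\rm Rindler}(\zeta,\beta)
= \left(\frac{l}{\zeta}\right)^{\!n}
\left[\,T^{\mu}{}_{\nu}\big|_{\rm hyper}(\beta) - T^{\mu}{}_{\nu}\big|_{\rm hyper}(\beta = 2\pi l)\,\right],
\qquad
s_{\rm Rindler}(\zeta,\beta) = \left(\frac{l}{\zeta}\right)^{\!n-1} s_{\rm hyper}(\beta),
\]

the first relation building in the β-independent subtraction described above, and the second simply rescaling the entropy per unit spatial volume; both expressions diverge as ζ → 0, as anticipated in the text.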
Finite 't Hooft coupling corrections
The calculations we have presented so far have been performed at two opposite ends of the scale of the SYM 't Hooft coupling, g_{YM}^2 N. Gauge theory computations have been performed at the level of one-loop vacuum diagrams, i.e., g_{YM}^2 N = 0, whereas the supergravity approximation to type IIB string theory is reliable when g_{YM}^2 N = l^4/(2α′^2) → ∞. In this section we want to discuss the corrections that arise when g_{YM}^2 N deviates from these limits. The study will be carried out only for the case of AdS_5/SYM.
Conformal invariance imposes strong restrictions on the form of finite coupling corrections for the planar case, k = 0. The temperature dependence is fixed, so a thermodynamic function like the free energy must be of the form F = f(g_{YM}^2 N) F_0, where F_0 is the value at zero coupling. It is clear that the energy and entropy are then corrected by the same function f(g_{YM}^2 N). In contrast, for the hyperbolic or spherical systems the correction will typically depend not only on the coupling but also on β/l, and the temperature dependence will change in general. Indeed, we have seen explicitly that the supergravity and gauge theory expressions for the energy and entropy in the hyperbolic case have a very different dependence on β. One would ascribe the differences to the effect of interactions as the coupling is turned on.
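As a concrete illustration of the planar statement, the familiar N = 4 SYM result can be written in the normalization usually quoted in the literature (the normalization and the value f(∞) = 3/4 are standard results of that literature, not formulas taken from this paper):

\[
\frac{F}{V} = -\,f\!\left(g_{\rm YM}^{2}N\right)\frac{\pi^{2}}{6}\,N^{2}\,T^{4},
\qquad f(0)=1,\qquad f(\infty)=\tfrac{3}{4},
\]

so conformal invariance fixes the T^4 dependence and the entire effect of the coupling sits in the single overall function f.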
Perturbative interactions will change the weak coupling result through higher-loop diagrams. For the SYM theory in Minkowski space, these have been computed in [32]. However, the extension of these calculations to the spherical or hyperbolic cases is much more difficult, since it implies solving an interacting theory in a curved background.
At the other end of the scale, large g_{YM}^2 N, the first corrections arise from O(α′^3) = O((g_{YM}^2 N)^{-3/2}) corrections to the effective IIB superstring action at low energies. The relevant term in the Euclidean action, given in [33,34], is proportional to α′^3 times a scalar W constructed out of contractions of four Weyl tensors. Using this term, finite coupling corrections to the thermodynamics of planar black holes have been studied in [34,35], and of spherical black holes in [36,37]. The study of the latter has been taken further in [38], where the corrections to hyperbolic black holes have been calculated as well.
In principle, one must consider corrections in the entire ten dimensional theory, since it is not possible to keep the size of the sphere S 5 fixed [35]. However, on reduction to five dimensions it is easy to see that neither the dilaton nor the scale factor of the sphere will contribute on-shell to the effective five-dimensional action for as long as they fall off fast enough at asymptotic infinity. It is therefore possible to compute the Euclidean action of the corrected solutions in a five dimensional formulation for solutions asymptotic to AdS 5 [34]. This will be important for us, since it will permit us to employ the intrinsic regularization procedure of [16] for the computation of the corrected action.
Let us focus on the hyperbolic solution at β = 2πl. The full ten-dimensional solution is locally AdS_5 × S^5, which is conformally flat. Therefore the corrections from (62) vanish and the geometry should remain the same. Actually, it appears reasonable to assume that all α′ corrections can be written in an appropriate scheme in terms of the Weyl tensor (along the lines in [39]). It then follows that the temperature β = 2πl is uncorrected, since it is determined entirely by the properties of the metric. If we use the intrinsic regularization of the gravitational action we have employed in this paper, which relies only on the metric of the solution at hand, then the value of the action for this solution is also unchanged. Now, since the Euclidean action is identified with βF, it follows that the free energy of the state at β = 2πl should receive no corrections. We have already mentioned that, from field theory arguments, the energy of this state is protected, and indeed we have explicitly seen that it takes the same value at zero and infinite coupling. Now, when higher derivative terms are added to the Einstein-Hilbert action the entropy is in general no longer given by the area. However, since the higher-derivative contributions to the entropy involve the Weyl tensor, which vanishes for this solution, we would conclude that the entropy of SYM on hyperbolic space at β = 2πl should not change its value when going from strong to weak coupling 9 . This is not what we have found. The entropy at strong coupling is instead 3/2 times larger than the value computed from one-loop vacuum diagrams. Obviously, the free energy is then already different in the two regimes. This looks worrisome. While we cannot completely rule out that subtle reasons invalidate the assumption that the higher α′ corrections can be written in some scheme in terms of the Weyl tensor, it should be noted that the argument developed above is known to actually work for the closely analogous situation of BTZ × S^3 black holes [34]. It would certainly seem odd if the free energy were corrected in the α′ expansion, but the energy density (and specific heat) were not. Let us then discuss other alternatives here. In concluding that the entropy should remain unchanged we have implicitly assumed that there is no phase transition in the theory as a function of the coupling. It might then be that as the coupling is increased a phase transition occurs, in which new states arise that do not change the energy but nevertheless increase the entropy. This phase transition would have to be invisible in an expansion of F in inverse powers of g_{YM}^2 N. Another possibility, probably no less bizarre, is that the one-loop calculation at weak coupling does not capture all of the states that build up the entropy. If this were the case, the correct result to all orders would be the one given by the supergravity calculation. We will discuss this possibility further in sec. 7.
States other than the one at β = 2πl are expected to receive corrections. These should change the entire ten-dimensional metric, and with it the temperature and the thermodynamic functions. Indeed, these corrections have been computed in [38], where it has been calculated how the value of r_+ as a function of β is shifted. Although the value r_+ = l for β = 2πl remains, as argued, uncorrected, the extremal radius changes. For the action we obtain the correction δF given in (65) (here G is the five-dimensional Newton's constant). We have performed the calculation of the corrected action using, as in the rest of this paper, the intrinsic regularization method of [16]. The calculation of the action in [38] was instead done with a background subtraction. Our result coincides with the one in [38] except for one important difference: for k = −1 the value of δF in (65) does not tend to zero as the temperature goes to zero, β → ∞. That is, the energy of the extremal state is shifted away from zero. In contrast, the calculations in [38] were performed by taking the state at zero temperature as the reference state. By construction, this keeps the energy of that state at zero. But this way of proceeding has the unattractive property that the energy of the state at β = 2πl does receive a correction at this order. It would seem unnatural to choose a regularization that needlessly introduces finite coupling corrections for a quantity that we have reasons to expect should remain uncorrected. Intrinsic regularization yields instead δF = 0 at β = 2πl. It is interesting to observe that for k = −1 the corrections change sign at the AdS value β = 2πl. Using (65) it is a straightforward matter to compute the corrections to the energy, entropy, and specific heat. The explicit formulae for arbitrary temperature are rather unilluminating, so we shall only quote the values for the states of most interest. Notice that the extremal state acquires a negative energy. At present it does not seem possible to decide whether this is a real problem or just an artifact of the α′ expansion. It can be made to appear less problematic by taking the Rindler interpretation of the result, since it merely implies a shift in the energy of the Rindler vacuum of SYM. The correction to the entropy is negative as well. Extrapolation is not admissible at this level, but it might be that the large degeneracy of the ground state at infinite g_{YM}^2 N steadily decreases with the coupling. Perhaps more significant is the fact that the corrections to the specific heat maintain the dependence C ∼ β^{-1} that, as we have seen, already appears for higher spin fields in hyperbolic space.
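The thermodynamic identities used in that step are the standard ones; they are written out here only for reference (generic relations between F, E, S, and C, not the paper's explicit corrected formulae):

\[
E = \frac{\partial(\beta F)}{\partial \beta},
\qquad
S = \beta^{2}\,\frac{\partial F}{\partial \beta} = \beta\,(E - F),
\qquad
C = \frac{\partial E}{\partial T} = -\,\beta^{2}\,\frac{\partial E}{\partial \beta},
\]

applied term by term to the corrected free energy F + δF.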
Discussion
We hope to have made it clear that hyperbolic black holes provide a rich setting to study the AdS/CFT correspondence, introducing new features absent from both planar and spherical black holes. A particularly interesting aspect is that they provide the possibility of studying properties of the global AdS vacuum and of the Minkowski vacuum of the CFT by the introduction of accelerating observers.
The most striking result of our analysis has been the identification of enhancements in the value of the entropy that are not accompanied by increments in the energy. The first instance of this phenomenon is the appearance of a large degeneracy for the ground state at infinite coupling. Large degeneracies for supersymmetric, zero temperature black holes are well known in string theory 10 . However, the hyperbolic extremal black hole is not supersymmetric, and in the absence of supersymmetry it is extremely difficult to make interacting systems have highly degenerate ground states 11 . It may be worth recalling that the result can be interpreted as saying that the Rindler vacuum of SYM at infinite 't Hooft coupling is highly degenerate.
No less unexpected is the strong/weak coupling discrepancy of the entropy of AdS in hyperbolic slicing. This time it can be interpreted in terms of the degeneracy of the Minkowski vacuum of SYM as seen by an accelerating observer. The mismatch in the entropy is all the more striking since we did not expect corrections to the free energy of this state at any order. Indeed, we have explicitly seen that there are no corrections to O(α′^3), and that the energy and specific heat of that state take the same value at zero and infinite coupling. Barring subtleties in the α′ expansion, alternative explanations must be sought. We have mentioned the possibility that the states responsible for this entropy arise as a consequence of a phase transition as the coupling is increased. Another option might be that the non-renormalization of the entropy still works, but that the total entropy at small coupling is not entirely captured by standard one-loop vacuum diagrams, i.e., that the Super-Yang-Mills theory possesses states that contribute to the entropy but not to the energy density. This would sound like a rather exotic proposal. However, very similar conclusions have been arrived at in [10], from the study of an entirely different paradox in the AdS/CFT context. There, in order to preserve causality of the field theory when describing processes that take place far from the boundary of AdS, it was found necessary to postulate "a very rich collection of hidden degrees of freedom of the SYM theory which store information but give rise to no local energy density" (sic) [10]. It is striking that this appears to be the sort of phenomenon we are observing in our study of black hole entropy. From the arguments in [10] it would appear that these so-called "precursor" states are already present at the weakly coupled level, and therefore might provide the extra degeneracies we have found.
As noted, even though AdS_4 and AdS_7 also appear to exhibit enhanced entropies, the situation there is complicated by the lack of an adequate understanding of their dual field theories. It will obviously be interesting to find other setups where these exotic entropies show up.
10 They arise as degeneracies of BPS states.
11 A somewhat similar phenomenon has been found for charged AdS black holes [40]. However, in that case these large degeneracies are accompanied by equally large energy densities, and the states are moreover known to be unstable. | 2014-10-01T00:00:00.000Z | 1999-06-04T00:00:00.000 | {
"year": 1999,
"sha1": "39258e430384d752573a858bef5015010a830a0d",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1126-6708/1999/06/036/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "39258e430384d752573a858bef5015010a830a0d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
18519314 | pes2o/s2orc | v3-fos-license | Intercellular adhesion molecule 3, a third adhesion counter-receptor for lymphocyte function-associated molecule 1 on resting lymphocytes.
Recent studies suggest that some T and B lymphocyte cell lines bind to the integrin lymphocyte function-associated molecule 1 (LFA-1) chiefly through a pathway independent of its two known counter-receptors, intercellular adhesion molecules (ICAMs)-1 and -2. A monoclonal antibody (mAb) was raised that, in combination with blocking mAb to ICAM-1 and ICAM-2, can completely inhibit binding of these cell lines to purified LFA-1. This third ligand, designated ICAM-3 based on its functional relatedness to ICAM-1 and -2, is a highly glycosylated protein of 124,000 Mr. It is well expressed on all leukocytes and absent from endothelial cells. In assays of adhesion of resting lymphocytes to purified LFA-1, ICAM-3 is by far the most functionally important ICAM, implying an important role for ICAM-3 in the generation of immune responses.
Lymphocyte function-associated molecule 1 (LFA-1) 1 is an integrin that mediates a wide range of leukocyte interactions with other cells in immune and inflammatory responses (1,2). Two homologous immunoglobulin family counter-receptors for LFA-1 have been discovered: the inducible intercellular adhesion molecule 1 (ICAM-1) (3)(4)(5) and the constitutively expressed ICAM-2 (6). Development of a mAb to ICAM-2 allowed several previously LFA-1-dependent, ICAM-1-independent phenomena to be analyzed and suggested that a third ligand for LFA-1 existed (7). Binding of several cell types such as epithelial and endothelial cells to purified LFA-1 could be completely blocked with a combination of ICAM-1 and ICAM-2 mAb, whereas an ICAM-1-, ICAM-2-independent pathway of adhesion to LFA-1 existed on many lymphoid cell lines, including the T cell lymphoma cell line, SKW3 (7).
We now report on the production and characterization of a mAb, CBR-IC3/1, that, in conjunction with anti-ICAM-1 and anti-ICAM-2 mAbs, can completely inhibit adhesion of a variety of cell lines to purified LFA-1. Owing to its functional role as an LFA-1 ligand, we have termed this novel molecule ICAM-3. The biochemical characteristics and cell distribution of ICAM-3 are distinct from those of either ICAM-1 or ICAM-2.
PBMC and neutrophils were obtained as described (7). Lymphocytes and monocytes were separated by cytometric analysis using forward and perpendicular light scatter, and their identity was confirmed by monocyte- and T cell-specific mAbs. Resting T cells were isolated from whole blood by plastic adherence and nylon wool filtration and were 91% CD2+, while PHA blasts were generated by culturing the cells for 3 d in the presence of 10 µg/ml PHA (Sigma Chemical Co., St. Louis, MO). Development of ICAM-3 Hybridoma. SKW3 cells were used to immunize 3-12-wk-old BALB/c female mice (Charles River Laboratories, Wilmington, MA). Immunizations (10^5-10^6 cells per intraperitoneal immunization) were given three times at 3-wk intervals. 3 d before fusion with the murine myeloma P3X63Ag8.653, the mice were injected both intraperitoneally and intravenously with 5 × 10^5 SKW3 cells. The protocol for fusion and subsequent maintenance of hybridomas was described previously (9). 600 hybridomas were screened for the ability to inhibit SKW3 binding to purified LFA-1 in the presence of mAb to ICAM-1 and ICAM-2. On this basis one mAb, CBR-IC3/1, was selected for further analysis. It was cloned three times by limiting dilution and isotyped by ELISA using affinity-purified antibodies to mouse immunoglobulins (Zymed Immunochemicals, San Francisco, CA).
Flow Cytometric Analysis. Immunofluorescence flow cytometry was performed on an EPICS V analyzer (Coulter Diagnostics, Hialeah, FL) after staining cells with mAb-containing supernatants followed by FITC-conjugated anti-mouse antibody (Zymed Immunochemicals) as described (7). Since both primary and secondary mAbs were used at saturating concentrations, membrane antigen expression could be quantitated as a measure of mean fluorescence intensity (10).
Surface Iodination. Surface labeling of cells with 125I was performed as described using Iodogen (Pierce Chemical Co., Rockford, IL) (11). Triton X-100 (1%) lysates were cleared with bovine IgG-coupled Sepharose and then incubated with the appropriate mAb-bound Sepharose for 2 h. Beads were washed and heated at 100°C in sample buffer containing 50 mM Tris, 1% SDS, and 1% 2-ME or 20 mM iodoacetamide. Samples were subjected to SDS 7% PAGE (12) and autoradiography with enhancing screens. Treatment of samples with N-Glycanase (Genzyme, Boston, MA) was as previously described (13); samples were incubated with 10 U/ml N-Glycanase for 18 h at 37°C. Purification of LFA-1. LFA-1 was purified from JY lysates on TS2/4-Sepharose as described previously (14). The LFA-1 bound to TS2/4-Sepharose was eluted with 50 mM triethylamine (pH 11.5), 150 mM NaCl, 2 mM MgCl2, and 1% octyl β-D-glucopyranoside. Samples were neutralized and stored frozen at −70°C. Adhesion Assay. Adhesion of cells fluorescently labeled with 2′,7′-bis-(2-carboxyethyl)-5(and-6)-carboxyfluorescein (BCECF; Molecular Probes, Inc., Eugene, OR) to plates coated with purified LFA-1 was performed as previously described (7,14,15). Cells were pretreated with a 1:200 dilution of mAb ascites for 45 min at 4°C, and 10^5 cells were transferred to each well. Cell lines adhered to solid phase LFA-1 for 1 h at 37°C, and nonadherent cells were removed by six aspirations with a 23-gauge needle. Lymphocytes and blasts were sedimented by centrifugation (30 g for 5 min) and incubated at 37°C for 30 min. Unbound lymphocytes and blasts were removed by flicking media from the plate eight times, with 100 µl added between each wash. Flicking was more effective for thorough removal of unbound T lymphocytes, which were more difficult to remove by aspiration because of their small size. Fluorescence was quantitated from the 96-well plates using a Pandex fluorescence concentration analyzer (Baxter Healthcare Corp., Mundelein, IL).
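As an illustration of how the fluorescence readout of such an adhesion assay is typically converted to a percent-adhesion value, a generic sketch follows; the function name, variable names, and example numbers are hypothetical and are not the quantitation procedure specified in the paper.

def percent_adhesion(bound_fluorescence, input_fluorescence, background=0.0):
    """Fraction of labeled input cells remaining bound, expressed as a percentage.

    bound_fluorescence  -- signal from a well after removal of nonadherent cells
    input_fluorescence  -- signal from the total labeled cells added to the well
    background          -- signal from an empty (or unlabeled) control well
    """
    return 100.0 * (bound_fluorescence - background) / (input_fluorescence - background)

# Example with made-up numbers:
print(percent_adhesion(bound_fluorescence=5200, input_fluorescence=21000, background=400))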
Results and Discussion
To characterize the LFA-1-dependent, ICAM-1-, ICAM-2-independent pathway of adhesion, mAbs were raised to SKW3 and were screened in combination with anti-ICAM-1 and anti-ICAM-2 mAbs for the ability to inhibit binding of this cell line to purified LFA-1. One mAb, CBR-IC3/1 (IgG1), was selected that completely inhibited this novel pathway of adhesion (Fig. 1 A). CBR-IC3/1 does not react with purified LFA-1 or COS cells transfected with either ICAM-1, ICAM-2, or LFA-1 cDNAs (data not shown), and it does react with the SLA cell line, which was derived from an LFA-1-deficient patient (16). The antigen recognized by CBR-IC3/1 is thus distinct from ICAM-1, ICAM-2, and LFA-1. By analogy to ICAM-1 and ICAM-2, whose names were based on their identification as ligands for LFA-1, we have designated this third counter-receptor ICAM-3; whether it also belongs to the Ig superfamily remains to be determined. Adhesion of SKW3 to purified LFA-1 was only slightly inhibited by a combination of blocking anti-ICAM-1 and anti-ICAM-2 mAbs. When the anti-ICAM-3 mAb, CBR-IC3/1, was added alone, it significantly inhibited the adhesion, and this inhibition was complete when combined with blocking anti-ICAM-2 mAb (Fig. 1 A). Thus, adhesion of SKW3 to purified LFA-1 was mediated largely by ICAM-3 and also, in part, by ICAM-2. In adhering to LFA-1, each of four cell lines utilized the three ICAMs to different degrees, as demonstrated by the different patterns of mAb inhibition (Fig. 1 A). The adhesion of the B lymphoblastoid cell line, JY, occurred through an ICAM-1 pathway with smaller contributions through the ICAM-2 and ICAM-3 pathways; inhibition was partial with ICAM-1 mAb and almost complete when combined with either ICAM-2 or ICAM-3 mAb. Another B lymphoblastoid cell line, SLA, utilized both ICAM-1 and ICAM-3, since adhesion was not inhibited by either ICAM-1 or ICAM-3 mAb alone but was inhibited almost completely by the two mAbs together. The thymoma cell line, Jurkat, used both the ICAM-2 and ICAM-3 pathways of adhesion, with a small contribution by ICAM-1. There is considerable redundancy in the use of ICAMs, since for each of these cell lines mAbs to at least two ICAMs were required to achieve substantial inhibition.
The pattern of distribution of ICAM-3 differed from that of ICAM-1 and ICAM-2 in several ways. Unlike ICAM-1 and ICAM-2, ICAM-3 was not expressed on either resting or stimulated endothelium (data not shown). This finding agrees with the observation that LFA-1-dependent binding of cells to both resting and stimulated endothelium was completely inhibited by a combination of mAbs to ICAM-1 and ICAM-2 (7). ICAM-3 was restricted to the hematopoietic lineage, being highly expressed on lymphoid and monocytic cell lines, with a few exceptions (Fig. 2 A and data not shown). In all cases examined thus far, expression of ICAM-3 was coordinate with the LFA-1-dependent, ICAM-1-, ICAM-2-independent pathway of adhesion. Cells binding LFA-1 solely through ICAM-1 and ICAM-2 did not express ICAM-3 (endothelium, Raji), while cell lines that bound weakly (JY, U937, Sup T) or strongly (SKW3, Jurkat, SLA) through this third pathway of adhesion had correspondingly low or high ICAM-3 surface expression. In all cases, the combination of all three anti-ICAM mAbs completely eliminated binding to LFA-1.
ICAM-3 differed markedly from ICAM-1 and ICAM-2 in its expression on leukocytes (Fig. 2 B). ICAM-3 was expressed at high levels on resting lymphocytes, monocytes, and neutrophils, whereas ICAM-1 and ICAM-2 were expressed much more weakly or were absent. Upon activation of lymphocytes with PHA, ICAM-3 expression increased two- to threefold, whereas expression of ICAM-1 was greatly increased (7, 17) (Fig. 2 B).
We tested the functional importance of the three ICAMs in adhesion of T lymphocytes to LFA-1 (Fig. 1 B). Resting lymphocytes were previously shown to bind strongly to purified LFA-1 (14), and we found that this binding is almost completely ICAM-3 dependent. After mitogenic activation with PHA, however, adhesion to LFA-1 occurred chiefly through ICAM-1, correlating with its increased surface expression, and in lesser degree through ICAM-3.
The relative affinity of the three ICAMs for LFA-1 can be examined by comparing their contributions to binding LFA-1 (Fig. 1) with their cell surface expression as measured by immunofluorescence flow cytometry (Fig. 2). This comparison revealed that ICAM-1 has the greatest affinity for LFA-1, and that ICAM-2 and ICAM-3 have similar, but lower, affinities. For instance, Jurkat cells expressed ICAM-2 and ICAM-3 at similar levels, and each contributed to binding to LFA-1. Where ICAM-2 expression was greater than that of ICAM-3, such as on JY cells, the ICAM-2 pathway of LFA-1 adhesion prevailed over ICAM-3. In contrast, SLA and SKW3 expressed three- to fourfold more ICAM-3 than ICAM-2, and the ICAM-3 pathway of adhesion predominated over the ICAM-2 pathway. When ICAM-1 was expressed in comparable or somewhat lesser amounts than ICAM-2 or ICAM-3, as was the case with SLA, JY, and the PHA-activated T cells, the ICAM-1 pathway of adhesion was dominant.
Immunoprecipitates of ICAM-3 from various 125I-labeled cell lines revealed a band of 124,000 Mr under reducing conditions, with slightly increased mobility under nonreducing conditions (Fig. 3, A and B). Treatment with N-glycanase resulted in reduction of the ICAM-3 band to Mr 87,000, indicating that ICAM-3, like ICAM-1 (17,18) and ICAM-2 (7), is a highly glycosylated protein (Fig. 3 C). The biochemical characteristics, patterns of expression, and functional properties of ICAM-3 distinguish it from previously described adhesion molecules, including the human homing receptor LAM-1 (19), the inducible endothelial adhesion molecule VCAM-1 (20)(21)(22), and the VLA family of matrix receptors (23); no mAbs with similar cell distributions were found in the databases from the third or fourth leukocyte workshop (24,25).
The existence of three LFA-1 ligands suggests specialization for different aspects of LFA-1-dependent leukocyte interactions. ICAM-1 is basally expressed on endothelium and many epithelial cell types and is strongly induced in inflammation and immunity, where it is hypothesized to regulate cell localization (1) and to facilitate recognition of specific antigens (26,27). Since ICAM-2 is the predominant LFA-1 ligand on resting endothelium, this pathway of adhesion may have important consequences for normal recirculation of LFA-1-bearing lymphocytes through tissue endothelium (28)(29)(30)(31). The finding that adhesion of resting T lymphocytes to LFA-1 occurs primarily via ICAM-3, combined with the fact that ICAM-3 is much better expressed than the other LFA-1 ligands on monocytes and resting lymphocytes, implies an important role in the initiation of immune responses. Recent studies have shown that B lymphocyte activation stimulates the avidity of cell surface LFA-1 (32,33) in a manner analogous to that reported for T lymphocytes (14,34). Our studies predict that ICAM-3 on T lymphocytes would facilitate their interaction with antigen-presenting B lymphocytes. Furthermore, a role is suggested for LFA-1 ligand(s) other than ICAM-1 in both allogeneic and autologous mixed lymphocyte reactions (35) and in lysis by T cells of certain target cells (36).
The existence of multiple ICAMs may also have implications for therapy. ICAM-1 mAb is efficacious in vivo in prolonging renal (37) and cardiac (38) allograft survival. ICAM-3 mAb may be capable of inhibiting a distinctive and perhaps overlapping subset of immune responses in vivo, since it inhibits LFA-1-dependent adhesive interactions with a distinct subset of cell types. | 2016-05-04T20:20:58.661Z | 1992-01-01T00:00:00.000 | {
"year": 1992,
"sha1": "80205411a65eb3c45ecef49979121af1a7233daf",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc2119096?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "80492b793cd12c17976b0cac48f677d4b9fefaf2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53753862 | pes2o/s2orc | v3-fos-license | Effects of Ser47-Point Mutation on Conformation Structure and Allergenicity of the Allergen of Der p 2, a Major House Dust Mite Allergen
Purpose Hypoallergenic recombinant Der p 2 has been produced by various genetic manipulations, but mutation of a naturally polymorphic amino acid residue known to affect IgE binding has not been studied. This study aimed to determine the effect of a point mutation (S47W) of residue 47 of Der p 2 on its structure and immunoglobulin (Ig) E binding. Its ability to induce pro-inflammatory responses and to induce blocking IgG antibody was also determined. Methods S47 of recombinant Der p 2.0110, one of the predominant variants in Bangkok, was mutated to W (S47W). S47W secreted from Pichia pastoris was examined for secondary structure and for the formation of a hydrophobic cavity by 8-Anilino-1-naphthalenesulfonic acid (ANS) staining. Monoclonal and human IgE-antibody binding was determined by enzyme-linked immunosorbent assay. Allergen-induced degranulation of rat basophils expressing the human high-affinity IgE (Fcε) receptor was determined. Stimulation of the pro-inflammatory cytokine interleukin (IL)-8 release from human bronchial epithelial (BEAS2B) cells and inhibition of IgE binding to the wild type allergen by S47W-induced IgG were determined. Results S47W reduced secondary structure and failed to bind the hydrophobic ANS ligand as well as a monoclonal antibody known to be dependent on the nature of the side chain of residue 114 in an adjacent loop. It could also not stimulate IL-8 release from BEAS2B cells. IgE from house dust mite (HDM)-allergic Thais bound S47W with 100-fold weaker avidity, whereas IgE of HDM-allergic Australians did not bind it. S47W still induced basophil degranulation, although requiring higher concentrations for some subjects. Antiserum from S47W-immunized mice blocked the binding of human IgE to wild type Der p 2. Conclusions The mutant S47W had altered structure and reduced ability to stimulate pro-inflammatory responses and to bind IgE, but retained its ability to induce blocking antibodies. It thus represents a hypoallergen produced by a single mutation of a non-solvent-accessible amino acid.
INTRODUCTION
Genetically modified allergens with reduced immunoglobulin (Ig) E-binding activity have the potential to reduce allergic side effects in immunotherapy and to facilitate treatment with rapid up-dosing and new methods of delivery. Their therapeutic potential has been demonstrated by their ability to induce IgG-blocking antibodies and to modify Th2-cell-mediated hypersensitivity, 1,2 and further modifications that reduce interactions with the innate immune system might also favor the stimulation of immune down-regulatory pathways instead of hypersensitivity pathways.
A major house dust mite (HDM) allergen, Der p 2, is an ML-domain protein that consists of 2 β-sheets folded over, with disulfide bonds, into a clam-like structure to create a lipid-binding cavity. 3 Strategies for modifying Der p 2 have included fragmentation, 4 fusion with other proteins, 4,5 abrogation of disulfide bonding 6,7 and the mutation of surface-exposed residues. 8,9 Much of the known natural amino acid sequence variation of Der p 2 has been found to be restricted to defined changes in amino acids at positions 40, 47, 111 and 114 10,11 that are located in or adjacent to the loop structures at one pole of the molecule. 2 Since the natural substitutions in these positions are known to affect antibody binding, 12,13 disruption in this region might also be effective. The substitution of D with N at amino acid 114 has been shown to markedly affect monoclonal antibody binding but with only a small effect on human IgE binding. 12 In a comparison of titrations between Der p 2.0101 (D20101) and Der p 2.0107 (D20107), which differ only at position 47, substitution of S with T reduced the median IgE binding of patients' sera to half. 13 Large differences in IgE binding by different variants have also been demonstrated in Korea, 14 although the comparisons made did not pinpoint a single amino acid substitution.
The first described variant (D20101) of Der p 2 has been used internationally both as a reference sequence and for the production of genetically engineered allergens. However, it was obtained from a long-term culture of undocumented origin kept by the pharmaceutical company CSL Ltd. (Melbourne, Australia), so its relevance to naturally occurring variants is uncertain. Studies of HDM found in homes have shown that D20101 and similar sequences could readily be found in Australia, 10 whereas variants with even a resemblance to D20101 were absent in the homes examined in Bangkok. 11 Studies of domestic HDM from other geographical locations have not been reported, although a large study from a colony kept in Korea 15 found that only 1/60 variants characterised was the canonical D20101 and, from the amino acids at positions 40, 47, 111 and 114, a total of 10/60 would be D20101-like. 15 Comparisons of IgE binding by different variants with sera from subjects in Perth in Australia and from Bangkok in Thailand have not only revealed differences due to the variant used for measurement but also variations in titer that might be attributed to exposure to different variants. 16 The study here reports the effects of making an unnatural substitution at the poorly solvent-accessible position 47 and describes its effects on secondary structure, lipid binding, IgE binding, inflammatory interactions with epithelial cells and its ability to induce IgG-blocking antibodies. The mutation of serine to the large hydrophobic tryptophan at position 47 was chosen for experimentation because position 47 is naturally buried in a hydrophobic environment and because, although tryptophan has a high propensity to disrupt structure, 17 it does so by more local effects compared to the more global effects of the highly disruptive proline and aspartic acid, which directly impinge on the peptide backbone and change the charge.
Sera
The use of sera from skin-prick positive HDM-allergic donors for IgE binding was approved by the Institutional Review Board at Siriraj Hospital (SiEc459/2008), Thailand, and the Princess Margaret Hospital Ethics Committee, Perth, Western Australia (1347/EP). All donors gave written consent. Sera were selected on the basis of a positive skin prick test to Dermatophagoides pteronyssinus and/or Dermatophagoides farinae extracts with a wheal diameter >3 mm. Perth donors had specific anti-HDM IgE levels >10 kU/L as measured by ImmunoCAP. ImmunoCAP data were not available for the Thai donors, but they showed IgE binding to rDer p 2 by enzyme-linked immunosorbent assay (ELISA) with absorbance values at 450 nm >0.2 optical density (OD) (Tables 1 and 2).
Mouse IgG antibodies against recombinant Der p 2 variant 2.0110 (D20110) and S47W were raised through an antibody production service by the Biomedical Technology Research Center, Faculty of Associated Medical Sciences, Chiang Mai University. Briefly, purified recombinant D20110 and S47W were mixed with complete Freund's adjuvant (Sigma-Aldrich, St. Louis, MO, USA) and injected subcutaneously into four 6-week-old BALB/c mice (100 µg per dose). The immunizations were repeated with the same antigen using incomplete Freund's adjuvant (Sigma-Aldrich) at 2-week intervals for 2 more immunizations. Blood samples were collected. Mouse sera were titrated for anti-D20110 and anti-S47W antibodies by binding ELISA.
Allergens
Site-directed mutagenesis was performed based on the QuikChange protocol (Stratagene, La Jolla, CA, USA) using pPICzα harboring the D20110 cDNA as a template. Specific primers were designed to change the serine codon TCA (S) to the tryptophan codon TGG (W).
Circular dichroism (CD spectroscopy)
Protein samples were prepared at a final concentration of 10 µM in phosphate-buffered saline (PBS). CD spectra were recorded from 260 to 190 nm at 0.02-cm path length, 1-nm resolution, with a scan speed of 50 nm/min on a Jasco J-815-150S spectropolarimeter. The CD data were used in calculations of secondary structure using the CDPro suite of programs.
Hydrophobic staining of Der p 2 with 1,8-ANS
Staining of Der p 2 with 1-anilinonaphthalene-8-sulfonic acid (ANS; Sigma-Aldrich) to measure the hydrophobic binding capacity was conducted as before. 11,16 Briefly, ANS was dissolved in methanol at a concentration predetermined from the absorbance value at 372 nm using a molar extinction coefficient of 8,000 M−1 cm−1. 11,16 A mixture of 200 µM ANS and 3.5 µM D20110 or S47W was incubated for 10 minutes at 25°C before determining the emission spectra of ANS on a Jasco FP-6300 fluorometer. ANS was excited at 390 nm with a 5-nm slit width, and the emission spectra were measured from 400 to 600 nm with a 5-nm slit width.
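Determining the ANS concentration from an absorbance reading is a direct application of the Beer-Lambert law; a minimal sketch follows, assuming a 1-cm path length (only the 8,000 M−1 cm−1 coefficient comes from the text, and the function name is illustrative).

def ans_concentration_molar(a372, epsilon=8000.0, path_cm=1.0):
    # Beer-Lambert law: concentration (M) = absorbance / (extinction coefficient * path length)
    return a372 / (epsilon * path_cm)

# e.g. an absorbance of 0.8 at 372 nm corresponds to 100 microM ANS
print(ans_concentration_molar(0.8) * 1e6, "microM")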
Inhibition of human IgE binding to Der p 2 by D20110 and S47W
The inhibition of IgE-antibody binding to D20110 coated on ELISA plates by D20110 and S47W was performed as follows: 500 ng of D20110 in PBS were added per well to 96-well Maxisorb plates (Nunc, Rochester, NY, USA) and incubated at 4°C overnight. Sera of HDM-allergic donors were diluted 1:8 to 1:32 based on pre-determined levels of IgE binding to D20110. Diluted sera were incubated with serially diluted D20110 or S47W in PBS-A (PBS containing 3% skim milk, 0.05% Tween 20) at 4°C overnight. The D20110-coated 96-well plates were washed with PBS-A. The absorbed sera were centrifuged at 17,210 g for 10 minutes before the supernatants were added to the D20110-coated 96-well plates and incubated at room temperature for 2 hours. For the assays conducted with Thai sera, the IgE binding was developed with horseradish peroxidase (HRP)-labelled goat IgG anti-human IgE antibodies as previously described, 16 and for the assays conducted with sera from Perth donors the binding was developed with monoclonal biotinylated anti-IgE and europium-conjugated streptavidin as previously described. 13 The results were calculated as the mean and standard error of the percent inhibition obtained from each of the different sera.
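A sketch of the percent-inhibition calculation and of a simple log-interpolated IC50 estimate for a dilution series is given below; the concentration and OD arrays and the interpolation choice are illustrative assumptions, not the paper's analysis code.

import numpy as np

def percent_inhibition(od_with_inhibitor, od_no_inhibitor, od_blank=0.0):
    # 100 * (1 - specific signal with soluble inhibitor / specific signal without inhibitor)
    return 100.0 * (1.0 - (od_with_inhibitor - od_blank) / (od_no_inhibitor - od_blank))

# Hypothetical dilution series of soluble allergen (ng/mL) and the resulting ODs
conc = np.array([1.6, 8.0, 40.0, 200.0, 1000.0])
od = np.array([0.48, 0.41, 0.27, 0.14, 0.06])
inhib = percent_inhibition(od, od_no_inhibitor=0.50, od_blank=0.02)

# IC50 by linear interpolation of inhibition versus log10(concentration)
ic50 = 10 ** np.interp(50.0, inhib, np.log10(conc))
print(inhib.round(1), ic50)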
Binding of mouse anti-D20110, mouse anti-S47W and monoclonal anti-Der p 2 to D20110 and S47W
Recombinant D20110 and S47W in PBS were added at 500 ng per well to 96-well Maxisorb plates (Nunc) and incubated at 4°C overnight. Sera from D20110- or S47W-immunized mice were diluted 1:50 in PBS-A. The allergen-coated 96-well plates were washed with PBS-A. Diluted sera were added and incubated at room temperature for 2 hours. The plates were then washed with PBS-A before 1:1,000-diluted HRP-labelled goat IgG anti-mouse IgG (H+L) antibodies (KPL, Milford, MA, USA) in PBS-A were added and incubated at room temperature for 1 hour. Bound antibody was detected with 3,3′,5,5′-tetramethylbenzidine (TMB) by measuring absorbance at 650 nm. The results were calculated as the mean and standard error of the percent inhibition obtained from 4-6 experiments.
The same sandwich ELISA was used to measure the binding of the 1D8 monoclonal anti-Der p 2 antibody (Indoor Biotechnologies, Cardiff, UK) which was added at 100 ng/mL. The variants 20101 and 20104 were included as negative and positive controls respectively based on their known binding characteristics.
Inhibition of human IgE binding to Der p 2 by mouse anti-D20110 or S47W IgG
Recombinant D20110 in PBS was added at 500 ng per well on 96-well Maxisorb plates (Nunc) and incubated at 4°C overnight. Sera from D20110 or S47W immunized mice were added at dilutions of 1:50, 1:100, and 1:1,000 in PBS-A and incubated at room temperature for 2 hours. Plates were washed 3 times with PBS-A and 1:8 to 1:32 dilutions of sera from 9 HDM-allergic donors based on predetermined levels of direct IgE binding to D20110 were added and incubated at room temperature for 2 hours. HRP-labelled goat IgG anti-human IgE antibody (KPL) diluted 1:1,000 in PBS-A was added and incubated at room temperature for 2 hours. Bound antibody was detected with TMB by measuring absorbance at 650 nm. The results were calculated as mean and standard error of the percent inhibitions obtained from each of the different sera.
Bronchial epithelial cell culture and stimulation with Der p 2
The human bronchial epithelial cell line BEAS-2B, immortalized by a replication-defective hybrid of adenovirus and SV40, exhibits squamous cell differentiation. It was purchased from the American Type Culture Collection (Manassas, VA, USA) and maintained in DMEM/F12-1 (DMEM/F12 supplemented with 15 mM HEPES, 2.85 mM L-glutamine (Millipore, Bedford, MA, USA), 5% heat-inactivated fetal bovine serum (FBS), 100 U/mL penicillin, 100 µg/mL streptomycin and 1.25 µg/mL amphotericin B). Cells were grown in a humidified atmosphere with 5% CO2 at 37°C until 80%-90% confluence was reached. To examine their response to inflammatory stimuli, the DMEM/F12-1 was replaced with LHC-9 (Gibco, ThermoFisher Scientific) containing 5% heat-inactivated FBS, and the cells were seeded on 24-well culture plates at 1×10^5 cells/well and incubated overnight. The attached BEAS-2B cells were washed twice with 20 mM HEPES-BSS and then incubated in RPMI-1640-1 (RPMI-1640 supplemented with 25 mM HEPES, 2 mM L-glutamine, 5% heat-inactivated FBS) overnight. They were then washed twice with 20 mM HEPES-BSS and further incubated with D20110 or S47W diluted in serum-free RPMI-1640 for 24 hours under 5% CO2 at 37°C. Initial experiments showed that a dose of 25 µg/mL allergen was required to achieve sufficient IL-8 release for the inhibition assays and that lower doses induced proportionally less cytokine. The synthetic triacylated lipoprotein TLR1/2 ligand Pam3CSK4 (InvivoGen, San Diego, CA, USA) was used as a positive control, and rat anti-human TLR2 IgG antibodies (InvivoGen) diluted in serum-free RPMI-1640 were added for the experiments that examined the blocking of TLR2 binding.
IL-8 measurement by ELISA
The concentration of IL-8 secreted into the culture medium by stimulated BEAS-2B cells was assessed using the human CXCL8/IL-8 DuoSet ELISA (R&D Systems, Minneapolis, MN, USA). Briefly, the anti-human IL-8 capture antibody was coated onto a 96-well microplate and incubated overnight, and then the plates were washed and incubated with PBS containing 1% bovine serum albumin (BSA) for at least 1 hour. Various concentrations of human IL-8 and the culture supernatants were added and incubated at room temperature for 2 hours. The detection antibody was added and incubated at room temperature for 2 hours. Streptavidin-HRP was added and incubated at room temperature for 20 min. The substrate solution was then added, and the reaction was stopped with 2 N sulfuric acid. The absorbance was recorded at 450 and 570 nm, and the final value was calculated as the absorbance at 450 nm minus that at 570 nm.
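A minimal sketch of the standard-curve step for such an ELISA follows: background-corrected absorbance (A450 − A570), a calibration fit through the standards, and interpolation of the samples. The standard concentrations, absorbance values, and the simple linear model are illustrative assumptions only.

import numpy as np

# Hypothetical IL-8 standards (pg/mL) and their background-corrected A450 - A570 values
std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000], dtype=float)
std_abs = np.array([0.05, 0.09, 0.18, 0.35, 0.68, 1.30, 2.40])

slope, intercept = np.polyfit(std_abs, std_conc, 1)  # simple linear calibration

def il8_pg_per_ml(a450, a570):
    # Convert a sample's paired absorbances to an IL-8 concentration via the standard curve
    corrected = a450 - a570
    return slope * corrected + intercept

print(il8_pg_per_ml(a450=0.42, a570=0.04))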
Basophil degranulation assay
RBL-SX38 cells, which express the rat high-affinity IgE receptor FcεRI and have additionally been transfected with a functional human FcεRI, were maintained in MEM 1 (MEM supplemented with 10% heat-inactivated FCS, glutamine, 100 U/mL penicillin, and 100 mg/mL streptomycin solution). For the degranulation assay, the RBL-SX38 cells were cultured in MEM containing 0.4 mg/mL geneticin (G418) until the cells reached 85%-95% confluence. They were then incubated in MEM 1 for another 36 h before being harvested with 0.05% Trypsin/EDTA solution. The cells were re-suspended in MEM 1, plated at 200,000 cells/well in 48-well plates, and incubated overnight with serially diluted sera. All cells were washed with PIPES buffer (140 mM NaCl, 5 mM KCl, 0.6 mM MgCl2, 1.0 mM CaCl2, 5.5 mM glucose, 0.1% BSA and 10 mM PIPES, pH 7.4).
Various concentrations (1, 10, 100 and 1,000 ng/mL) of D20110 and S47W or diluted goat anti-human IgE antibody, as a positive control, were added at 50 µL in pre-warmed PIPES buffer to each well and the cells were incubated for 30 minutes at 37°C. For background degranulation, cells incubated with PIPES without serum and cells incubated with serum without allergen were included. Supernatants were collected and attached cells were lysed with 0.2% Triton X-100.
The activity of β-hexosaminidase in the medium and within the cells was determined by adding 0.1 mM 4-methylumbelliferyl-N-acetyl-β-D-glucosaminide in 100 mM citrate, pH 4.5, and incubating for 30 minutes at 37°C. The reaction was stopped with 0.25 M glycine. The fluorescence was measured using 380-nm excitation and 440-nm emission filters.
The degranulation level R was calculated from the released and residual enzyme activities, where Rsup is the released enzyme activity in the supernatant, Rppt is the residual enzyme activity within the cells, and blank is the buffer control.
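A common way to compute a percent release from these quantities is sketched below; the exact expression is an assumption consistent with the variable definitions above, not a formula quoted from the paper.

def degranulation_percent(r_sup, r_ppt, blank):
    """Percent beta-hexosaminidase release.

    r_sup -- enzyme activity released into the supernatant
    r_ppt -- residual enzyme activity in the Triton-lysed cells
    blank -- activity of the buffer control
    Assumed form: R = (Rsup - blank) / ((Rsup + Rppt) - blank) * 100
    """
    return 100.0 * (r_sup - blank) / ((r_sup + r_ppt) - blank)

print(degranulation_percent(r_sup=850.0, r_ppt=2400.0, blank=120.0))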
Statistical analysis
Results were analyzed by 1-way analysis of variance (ANOVA), followed by Tukey's multiple comparison test to determine the statistical significance using GraphPad Prism (GraphPad Software, La Jolla, CA, USA).
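The same analysis can be reproduced outside GraphPad Prism; the sketch below uses SciPy and statsmodels with made-up replicate values (the group names and numbers are placeholders, not data from the study).

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical IL-8 readings (pg/mL) for three treatment groups
medium = [95, 110, 102, 98]
d20110 = [310, 295, 330, 305]
s47w = [160, 150, 172, 148]

f_stat, p_value = f_oneway(medium, d20110, s47w)  # one-way ANOVA
print(f_stat, p_value)

values = np.concatenate([medium, d20110, s47w])
labels = ["medium"] * 4 + ["D20110"] * 4 + ["S47W"] * 4
print(pairwise_tukeyhsd(values, labels))  # Tukey's multiple comparison test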
RESULTS
Recombinant D20110 and S47W were observed to be monomeric proteins when purified through the size-exclusion column. The analysis of their CD spectra showed that the secondary structure composition of S47W was 5.7% α-helix, 38.9% β-sheet, and 55.4% random coil, compared to that of D20110, which was 4.4% α-helix, 58.9% β-sheet, and 36.6% random coil (Fig. 1A). The fluorescence of ANS bound to S47W had a λmax of 475 nm with a low intensity of 13.01 arbitrary units (a.u.), compared to that of ANS bound to D20110, which had a λmax of 469 nm with a high intensity of 168.32 a.u. (Fig. 1B).
The results showed that the 1D8 monoclonal antibody bound to D20110 with an average OD650 of 1.1, whereas it did not bind to S47W (OD650 <0.05) (Fig. 1C). For the controls, 1D8 bound to D20104 with an average OD650 of 1.2, while it bound to D20101 with an average OD650 of 0.25 (Fig. 1C).
The IgE assay showed that IgE of the Bangkok atopics bound to the coated D20110 with an average OD650 value of 0.503 but bound to the coated S47W with an average OD650 value of 0.12 (Fig. 2A). IgE of the Perth atopics bound to the coated D20110 with an average europium signal of 5,500, whereas it bound to the coated S47W with an average signal of 1,000 (Fig. 2B).
The ELISA assays showed that both D20110 and S47W inhibited IgE binding to D20110 when sera from the Bangkok atopics were examined, but that while D20110 inhibited the IgE binding with an IC50 of 20 ng/mL, S47W only inhibited IgE binding with an IC50 of 1,330 ng/mL (Fig. 2A). This difference appeared more pronounced when the inhibition assays were conducted for IgE binding from the Perth atopics. The results showed that while D20110 inhibited IgE binding with an IC50 of 8.61 ng/mL, S47W did not inhibit IgE binding at 100 ng/mL, compared to the 30%-40% inhibition found for the Thai sera (Fig. 2B). It should be noted that all of the sera from Perth had over 10 IU/mL of IgE binding to natural Der p 2.
LAL assays showed that the preparations of purified D20110 and S47W had low levels (<0.001 EU/mg) of endotoxin contamination, which would not cause inflammatory responses through TLR4 of BEAS-2B cells. To determine if S47W could induce inflammatory responses through TLR2, D20110 and S47W were incubated with BEAS-2B cells. The results showed that, at 25 µg/mL, D20110 stimulated the release of 309.7 pg/mL IL-8, while S47W stimulated the release of only 156.9 pg/mL (Fig. 4). The experimental TLR2 ligand Pam3CSK4 stimulated the release of 934.5 pg/mL IL-8. An anti-TLR2 IgG antibody reduced the IL-8 release from D20110-stimulated BEAS-2B cells to 100.8 pg/mL (Fig. 4).
The binding of the mouse D20110 antiserum and S47W antiserum to D20110 and S47W coated on ELISA plates showed that the binding of the D20110 antiserum to S47W was 54% of its binding to D20110, while the binding of the S47W antiserum to D20110 was up to 60% of its binding to S47W (Fig. 5A). When tested for their ability to block human IgE binding to the wild type allergen (D20110), both mouse antisera, made against D20110 and S47W, had similar activities (Fig. 5B). Both antibody preparations inhibited 40% of IgE binding to D20110 at a 1:50 dilution (Fig. 5B).
DISCUSSION
This report describes a point mutation of Der p 2 that reduced its amount of β-structure, removed its ability to bind a hydrophobic ligand, ablated the epitope bound by the 1D8 monoclonal antibody, reduced IgE binding avidity by 100-fold and abrogated its ability to produce TLR2-mediated IL-8 release from epithelial cells.
The CD spectrum analysis showed that the amount of β-sheet was reduced from 59% 16 to 39%, i.e., by about a third. While the remaining 39% represents the retention of considerable secondary structure, β-sheet formation is the major structural feature of Der p 2 and is required to form its flexible clamshell-like structure. That a very significant change had occurred is shown by the fluorescence of the ANS-binding assay being almost completely ablated, probably reflecting loss of binding in the hydrophobic cavity. ANS binding to MD-2 has been shown to be restricted to the LPS-binding cavity 18 and studies with Der p 2 showed that it required folded protein. 16 Residue 47 is a poorly solvent-accessible amino acid located in a loop that links the 2 β-sheets of Der p 2 via strand B (residues 34-42) of sheet 1 and strand C (residues 51-58) of sheet 2 (Fig. 6). 3 This location is opposite to a proposed entrance of the hydrophobic cavity of Der p 2 where W92 is located (Fig. 6). 16 An important function of the side chain of residue 47 is hydrogen bonding to the side chains of T49 and D113, 20,21 in the vicinity of which is the C73-C78 disulfide bond, indicating reduced flexibility in this area (Fig. 6). 3,21,22 Position 47 is one of the 4 common polymorphic positions found in Der p 2 in nature, where it shows substitutions of T with S. The difference between these 2 residues is that T has a methyl group in addition to a hydroxyl group in the side chain. The substitution with S would decrease the distance of hydrogen bonding between the hydroxyl group and the side chains of T49 and D113 (Fig. 6), resulting in altered IgE binding as demonstrated by Hales et al. 13 In this work, W was chosen to substitute S as we wanted to replace the hydroxyl side chain with a large aromatic indole ring, which might disrupt the internal hydrogen-bond network and so produce the observed change in conformation. Direct evidence of a conformational change in this area was that the binding of the 1D8 mAb to S47W was ablated. This monoclonal antibody has been shown to bind to the 20101 variant only if D114, on the loop immediately adjacent to S47, is mutated to N114 as found in many other variants. 12 The altered structure thus appears to be induced by a change in the loop structure in the region where the natural common polymorphisms are found. Moreover, this differs from other mutational strategies such as the removal of disulfide bonds, 6,7 fragmentation 4 and the alteration of surface-exposed side chains. 8,9 The comparison of IgE binding with sera from Perth and Bangkok probably shows that exposure to the different variants of Der p 2 found in different environments can affect IgE binding to hypoallergens. HDM producing D20110 and related variants (20103, 20104, and 20109) are common in Bangkok, 11 whereas mites producing D20110 have not been detected in Perth, 10 where 20101-like variants are found instead. Perth individuals would accordingly have few or no serum IgE antibodies induced by exposure to D20110. 23 Indeed, the direct IgE-binding results showed that the serum IgE of both Bangkok and Perth atopics bound only to the coated D20110, suggesting that the single mutation S47W may cause changes in IgE epitopes on Der p 2 that result in no direct IgE binding.
To confirm that this observation was due to altered structure, IgE binding to S47W in solution was examined in IgE inhibition assays. As the IgE inhibition results showed, while inhibition of IgE binding to D20110 was readily demonstrated with the sera from Perth residents, no inhibition was found with up to 100 ng/mL S47W. IgE binding to D20110 from the Thai sera was inhibited by S47W at 100 ng/mL by 30%-40%, depending on the individual, although the inhibitory potency was roughly 100-fold lower than that of the wild type allergen. While the IgE results show remarkable differences in binding affinity between S47W and the wild type allergen, more extensive testing is required, along with corroboration in other geographical regions, taking account of the differences in the Der p 2 allergen variants produced by mites in different regions. The basophil degranulation assays, however, showed that for 3 of the 4 Thais examined, S47W at >10 ng/mL could trigger degranulation, similar to D20110. This observation could be attributed to cross-linking of the Der p 2 mutant by a mixture of weak and strong antibodies in the same repertoire, as demonstrated by monoclonal antibody studies. 24 There were, however, differences in the profiles of degranulation for different individuals, so it would be interesting to examine S47W-induced basophil degranulation with sera from patients in other regions. While the interactions between different epitopes might be behind the mechanism, it seems that, even for modified allergens, titration of the allergens by skin testing would be required before therapy, and different mutations would perhaps be more suited to different individuals.
S47W had an attenuated interaction with epithelial cells, as shown by its greatly reduced ability to induce an inflammatory response, IL-8 release, in BEAS-2B cells. Taking the results of others into account, 19,20 this suggests that the interaction of Der p 2 with TLR2 requires a structured allergen. It is still unknown whether the proposed interaction between Der p 2 and TLR2 involves presentation of a lipid ligand to TLR2. Although it is reported that bacterial LPS bound to Der p 2 stimulated TLR4 in a similar manner to LPS bound to MD-2, 25 the role of a ligand in the interaction of Der p 2 with TLR2 has not been reported.
The mouse immunization experiments first demonstrated that antisera produced against the wild-type D20110 and the mutant S47W had very different specificities, as shown by the different reactivities in the binding assays illustrated in Fig. 5A. Despite this difference, however, the antiserum produced against the S47W not only had a considerable ability to block IgE binding to the wild-type D20110, but this blocking ability was also equivalent to that of the antiserum produced against D20110 itself. Since the mice were first immunized with the Th1-type Freund's complete adjuvant, the blocking antibody is likely to be IgG.
In conclusion, the substitution of residue 47 of Der p 2 with W altered the conformation of Der p 2, which reduced its ability to stimulate a pro-inflammatory response from epithelial cells and to bind IgE, but retained the molecule's ability to induce antisera with IgG-blocking activity. | 2018-12-02T16:55:10.179Z | 2018-09-11T00:00:00.000 | {
"year": 2018,
"sha1": "5f58f289322bc2a3ed688f5a0d985a3f9586f469",
"oa_license": "CCBYNC",
"oa_url": "http://e-aair.org/Synapse/Data/PDFData/9999AAIR/aair-11-129.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f58f289322bc2a3ed688f5a0d985a3f9586f469",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250672031 | pes2o/s2orc | v3-fos-license | Dark-matter QCD-axion searches
The axion is a hypothetical elementary particle appearing in a simple and elegant extension to the Standard Model of particle physics that cancels otherwise huge CP-violating effects in QCD; this extension has a broken U(1) axial symmetry, and the resulting Goldstone boson is the axion. A light axion of mass 10^−6–10^−3 eV (the so-called "invisible axion") would couple extraordinarily weakly to normal matter and radiation and would therefore be extremely difficult to detect in the laboratory. However, such an axion would be a compelling dark-matter candidate and is therefore a target of a number of searches. Compared to other dark-matter candidates, the plausible range of axion dark-matter couplings and masses is narrowly constrained. This restricted search space allows for "definitive" searches, where non-observation would seriously impugn the dark-matter QCD-axion hypothesis. Axion searches employ a wide range of technologies and techniques, from astrophysical observations to laboratory electromagnetic signal detection. For some experiments, sensitivities have reached likely dark-matter axion couplings and masses. This is a brief and selective overview of axion searches. With only very limited space, I briefly describe just two of the many experiments that are searching for dark-matter axions.
From constraints imposed by structure formation in the Universe, plus the remnant light-isotope abundance left from the Big Bang, our best guess is that the dark matter is some particle relic left over from the Big Bang. A number of particle candidates have been proposed, two of which have properties that fit all the observations. These two are WIMPs and axions. WIMPs are neutral particles with mass in the 100 GeV range and roughly weak-interaction-strength couplings. Such particles have the properties of dark matter and would have frozen out in the early universe with about the right density to be dark matter. Much of the interest in WIMPs is driven by interest in the theory of supersymmetry, which provides a WIMP candidate, the neutralino. Equally well-motivated, but less well-known, is the axion. (See reference [1] for an overview of the axion.)
What is the axion?
Quantum Chromodynamics (QCD) was firmly established in the early 1970s as the underlying quantum theory of the nuclear interactions. It was remarkably successful, predicting, e.g., the structure of hadronic jets and the "running" of the strong coupling constant. However, by the mid-1970s it was realized that this same theory predicts huge CP-violating interactions from "instanton" (multiple degenerate vacua) effects. One such effect would be a large permanent electric dipole moment in a spinning non-degenerate object bound by the strong interaction, e.g., the neutron.
In one of the most pleasing series of measurements in modern physics, the neutron is observed to have a vanishingly small upper bound to its permanent electric dipole moment. This strongly suggests that QCD, despite its successes, is not the whole story of the strong interactions. In 1977, Roberto Peccei and Helen Quinn proposed a hidden U(1) axial symmetry of the quarks that had the effect of cancelling the unphysical strong-CP effects. Steven Weinberg and Frank Wilczek shortly thereafter realized that such a symmetry, unseen in nature and therefore manifestly broken, implies the existence of a new pseudoscalar Goldstone boson, the axion [2].
The properties of a light (in the μeV mass range) axion make for an ideal dark-matter candidate. In the Big Bang, axions appear as a zero-temperature Bose condensate, remaining at near-zero temperature and never in thermal equilibrium with the rest of the Universe. The axion, having the same quantum numbers as the π0, can similarly decay into two photons, but its lifetime is vastly longer than the age of the Universe. Most importantly, sufficiently light axions would have about the right density to be the dark matter. Much lighter (<<μeV), and axions would have severely overclosed the Universe. Much heavier (>meV), and axions produced in supernova sn1987a would have efficiently transported energy out of the explosion, thereby observably shortening the neutrino arrival pulse length recorded on Earth. These bounds result in an allowed axion mass window of 10^−6–10^−3 eV. Such light axions, while possibly supplying the dominant form of matter in the Universe, have interactions so feeble as to render them nearly "invisible" to normal matter and radiation.
Overview: Present limits on dark-matter axions
The axion has a wide variety of couplings and decay modes; think of the π0 and its complexity; add to that a new U(1) axial symmetry with its new charges, and the model-space of couplings is large. However, there's a great simplification for the axion decay a→γγ. This two-photon decay rate, characterized by an effective coupling constant g aγγ, contains the ratio of color and electromagnetic anomalies of the new U(1) symmetry. Hence, this rate doesn't depend explicitly on the value of the axion's U(1) couplings, since they cancel in the ratio. Even those measurements that depend on decays and interactions other than with two photons are often cast in terms of the constant g aγγ to allow differing measurements and limits to be compared. Figure 1 shows selected limits on axion couplings and masses from a variety of techniques. The horizontal axis is the putative axion mass. The vertical axis is the effective coupling of the axion into two photons. "KSVZ" and "DFSZ" refer to two benchmark classes of axion models commonly targeted by searches; in a sense they represent the extremes of allowing axions to couple with either "full" or zero strength to leptons. Dark-matter axions have properties that put them somewhere between the "KSVZ" and "DFSZ" couplings and in the mass range 1-10 μeV (or possibly 1-100 μeV). Not shown in figure 1 is the very restrictive upper bound to the coupling inferred from sn1987a, which is discussed further below. Just below in the figure, axions can affect the properties of the Sun in several ways, including its seismic signature and energy output. As well, those same solar axions could scatter off a germanium crystal at the appropriate Bragg angle and convert into x-rays. To the right, axions in halos of astrophysical objects could spontaneously decay into pairs of optical photons, subsequently detected in telescopes. None of these methods are sensitive to dark-matter QCD axions of the expected couplings and masses. In the dark-matter band, next in sensitivity are astrophysical bounds. Here, axion emission from astrophysical objects would observably affect the evolution of those objects. Such objects include stars along the red giant horizontal branch and white dwarfs. More recently, the CERN Axion Solar Telescope, aiming to detect axions emitted from the Sun, achieved sensitivities superior to the usual astrophysical bounds. One important bound is that from axion emission in supernovae. As I mentioned, such emission would have shortened the neutrino burst duration detected on Earth from sn1987a. This gives by far the most restrictive astrophysical coupling-constant constraint, of around 10^−13/GeV. But even there, the sn1987a bounds lack adequate sensitivity to dark-matter axions. However, a technology of converting nearby axions remaining from the big bang into microwave photons ("microwave cavity" on the figure) is indeed sensitive to plausible dark-matter axion masses and couplings. Figure 1 shows only a sampling of the limits and technologies. Much more information on axions and limits is contained in the summary [1] and references therein. But since the topic here is dark-matter QCD axions, a simplification results, since most technologies are by far too insensitive. While those searches may be sensitive to unusual axion variants, they don't speak to the well-motivated QCD dark-matter axion hypothesis. I therefore restrict further discussion to the astrophysical bounds and the RF cavity technique, and I'll pick one experiment from each as an example.
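To make the KSVZ/DFSZ benchmark band concrete, the sketch below evaluates the standard relations between the axion mass, the decay constant, and the two-photon coupling. The numerical ingredients (the 1.92 anomaly-ratio offset, E/N = 0 for KSVZ and 8/3 for DFSZ, and the normalization m_a ≈ 5.7 μeV for f_a = 10^12 GeV) are standard literature values assumed here for illustration, not numbers taken from this article.

```python
import math

ALPHA = 1 / 137.036      # fine-structure constant
M_REF_EV = 5.7e-6        # axion mass (eV) corresponding to f_a = 1e12 GeV (assumed normalization)
F_REF_GEV = 1.0e12       # reference decay constant (GeV)

def g_agamma_gamma(mass_ev, e_over_n):
    """Magnitude of the axion-to-two-photon coupling, in GeV^-1, for anomaly ratio E/N."""
    f_a_gev = F_REF_GEV * M_REF_EV / mass_ev          # m_a * f_a is (approximately) fixed
    return ALPHA / (2 * math.pi * f_a_gev) * abs(e_over_n - 1.92)

for m_a in (1e-6, 1e-5, 1e-4):  # masses in eV, spanning the plausible dark-matter window
    ksvz = g_agamma_gamma(m_a, 0.0)      # KSVZ: no tree-level lepton coupling, E/N = 0
    dfsz = g_agamma_gamma(m_a, 8.0 / 3)  # DFSZ: E/N = 8/3
    print(f"m_a = {m_a:.0e} eV:  |g| KSVZ {ksvz:.1e} /GeV,  DFSZ {dfsz:.1e} /GeV")
```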
While from figure 1 it appears the astrophysical bounds don't achieve adequate sensitivity, this may not be so. In particular, axion emission in those objects may not depend directly on the axion-to-two-photon coupling, so the emission rate is highly uncertain. For instance, axion emission via nuclear bremsstrahlung, unlike the two-photon coupling, does depend on the unknown new U(1) axial charges of the quarks. It may well be that those astrophysical bounds are more sensitive than heretofore assumed. One may then wonder whether the astrophysical bounds ("HB Stars" on the figure) are already well excluded by the considerably more restrictive sn1987a bound. This may be, but plasma effects in supernova explosions are important and difficult to calculate. Hence, the supernova axion emission may be suppressed, which considerably softens the supernova bound, which in turn increases the discovery potential of the other astrophysical bounds.
Example search: The CERN Axion Solar Telescope (CAST)
Axions would be emitted by the Sun with their kinetic energies in a broad distribution centered at 3 keV. The idea of detecting these axions via their conversion into x-rays goes back several decades, to a paper by Pierre Sikivie. The first serious implementation of this was an effort at Brookhaven National Laboratory. A larger and more sensitive instrument was built, and recently recommissioned, at the University of Tokyo. Figure 2 shows a sensitive instrument based on this method, the CERN Axion Solar Telescope (CAST) [2]. It features state-of-the-art x-ray optics and detectors at the ends of a long, high-field LHC prototype dipole magnet. The entire assembly rides on a telescope mount that keeps the magnet axis aligned with the Sun for several hours at dawn and dusk. Axions would be signaled by an excess of counts in the x-ray detectors. As the axion-to-photon conversion rate is quadratic in field strength and conversion length (until the onset of decoherence), and the x-ray detection efficiency is high, CAST represents a significant advance in the sensitivity of this technology. The first phase of this experiment operated the magnet bore in a vacuum, results of which are shown in figure 2. No axion signal was observed. This configuration was sensitive to axions in the mass range up to a few tens of meV, beyond which the axion mass is poorly matched to the photon dispersion relation, with the result that the axion-to-photon conversion rate plummets. This result was notable for reaching, or exceeding, the sensitivity of the astrophysical bounds, and perhaps even being sensitive to plausible dark-matter axion couplings at the more massive end of the search range.
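The quadratic dependence on field strength and magnet length, and the loss of sensitivity once the axion mass spoils the axion-photon coherence, follow from the standard conversion probability in a transverse magnetic field. The sketch below evaluates that textbook expression; the CAST-like parameters (9 T, 9.26 m, 3 keV axion energy, g = 10^−10 GeV^−1) and the unit conversions are my own assumptions for illustration, not values quoted in this article.

```python
import math

TESLA_TO_EV2 = 195.35       # 1 tesla expressed in natural units (eV^2)
METER_TO_INV_EV = 5.068e6   # 1 metre expressed in natural units (eV^-1)

def conversion_probability(g_per_gev, b_tesla, length_m, m_a_ev, e_axion_ev=3000.0):
    """Vacuum axion->photon conversion probability:
    P = (g B L / 2)^2 * [sin(qL/2)/(qL/2)]^2, with momentum mismatch q = m_a^2 / (2 E)."""
    g = g_per_gev * 1e-9                    # GeV^-1 -> eV^-1
    b = b_tesla * TESLA_TO_EV2
    length = length_m * METER_TO_INV_EV
    q = m_a_ev ** 2 / (2.0 * e_axion_ev)
    x = q * length / 2.0
    form_factor = 1.0 if x == 0 else (math.sin(x) / x) ** 2
    return (g * b * length / 2.0) ** 2 * form_factor

for m_a in (1e-3, 1e-2, 5e-2):  # axion masses in eV
    p = conversion_probability(1e-10, 9.0, 9.26, m_a)
    print(f"m_a = {m_a:.0e} eV -> P ~ {p:.1e}")
# P is flat while qL << 1 and plummets once coherence is lost at higher axion mass.
```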
At higher axion masses, in order to match the axion and photon dispersion relations, the magnet bore is backfilled with 4He or 3He gas at controlled pressure. At higher pressures, the peak of the conversion efficiency shifts to higher axion masses. This next experiment phase is in operation, and the goal is to have sensitivity to axions with masses up to around an eV with couplings near those of the astrophysical bounds.
Example search: The Axion Dark-Matter eXperiment (ADMX)
In the same paper where Pierre Sikivie described the solar-axion search, he also conceived of an RF-cavity search for Milky Way halo axions. The technique is to thread a high-Q microwave cavity with a large static magnetic field. Nearby halo axions scatter off the field and thereby convert into microwave photons. The resulting photon energy equals the total energy of the axion. The microwave photons are detected in what is in essence an ultra-low-noise double-heterodyne radio receiver. The resonant frequency of the cavity is tunable across a search bandwidth. At each cavity tuning setting, the cavity power is averaged until the putative signal-to-noise ratio exceeds a threshold for realistic axions, and the power spectrum is examined for excess power. The cavity is then re-tuned and the search repeated until the cavity tuning range is exhausted. A schematic of the ADMX realization of this [3] is shown in figure 3. The sensitivity of the technique is very good since the small axion-to-two-photon coupling appears only once, at the axion-to-photon conversion step in the cavity. Other techniques, not relying on already-present axions, must also produce those axions, which necessitates another factor of the very small, e.g., axion-to-two-photon coupling. The main challenge of the RF-cavity technique is that the expected microwave signal is very small, around 10^−22 watts or less. Detecting such small electromagnetic RF power levels requires liquid-helium temperatures to reduce the cavity blackbody photon backgrounds and electronic noise.
Low-noise microwave amplification is the key technology of this search. Early versions of ADMX used cryogenic field-effect-transistor (FET) amplifiers of modified radio telescope design, having noise temperatures in the neighborhood of 2K. The latest phase of this experiment replaces the FET amplifiers with dc SQUID amplifiers. These devices, when cooled with a dilution refrigerator, have noise temperatures near the quantum limit, approximately 50 mK at signal frequencies near 1 GHz. The averaging time it takes to achieve a certain signal-to-noise ratio depends on the square of the noise power. Hence, the reduction in noise from 2 K to 50 mK in replacing transistor amplifiers by SQUIDs yields a potential speed-up of over 1000. In practice, some of this gain will be used to improve sensitivity rather than simply speed up the search. Figure 1 shows the early ADMX results with transistor amplifiers ("Microwave Cavity" limits to the far left). Notice this is the only technique with sensitivity to realistic dark matter axion masses and couplings. Recently, ADMX announced limits from the experiment retrofitted with SQUID amplifiers and will shortly retrofit their instrument with a dilution refrigerator to reach near quantum-limited low-noise operation.
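The quoted speed-up follows from the Dicke radiometer relation, in which the integration time needed to reach a fixed signal-to-noise ratio scales as the square of the system noise temperature. The sketch below is only a back-of-the-envelope check using the 2 K and 50 mK noise temperatures quoted above.

```python
# Dicke radiometer: SNR = (P_signal / (k_B * T_sys)) * sqrt(t / delta_nu),
# so the integration time t needed for a fixed SNR scales as T_sys**2.
T_FET_K = 2.0       # noise temperature of the cryogenic FET amplifiers (K)
T_SQUID_K = 0.050   # near-quantum-limited SQUID amplifiers with a dilution refrigerator (K)

speed_up = (T_FET_K / T_SQUID_K) ** 2
print(f"Potential scan-rate speed-up: {speed_up:.0f}x")  # 1600x, i.e. 'over 1000'
```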
Summary and outlook
This overview only barely touched on dark-matter axion detection. I would be remiss not to mention the axion model uncertainties. Although the calculation of the axion-to-two-photon coupling is claimed to be robust, there could be surprises. Also, precise mass limits at the extremes of the search range are not well known. Surprises could lurk there, as well. It's therefore prudent to search for axions with "unusual" couplings and masses. Finally, this very short discussion necessarily left out many important searches and developments, for which I apologize. Perhaps the main result is that sensitivity to dark-matter QCD axions has at last been achieved with the RF cavity technique, and we may know soon whether the dark matter is made of axions.
Figure 1. Selected limits on axion masses and couplings from a variety of techniques. The horizontal axis is the putative axion mass. The vertical axis is the effective coupling of the axion to two photons. "KSVZ" and "DFSZ" refer to two classes of axion models commonly targeted by searches. Dark-matter axions lie between the "KSVZ" and "DFSZ" models in the mass range 1-100 μeV. Not shown is the very restrictive upper bound to the coupling from sn1987a.
Figure 2 (caption fragment). Note the CAST sensitivity in this phase is comparable to the astrophysical bounds ("HB stars"). The limit "Tokyo helioscope" shows earlier results from a Japanese solar-axion search; this project has recently returned to data-taking. "SOLAX, COSME" and "DAMA" are limits from solar-axion searches using axion-to-photon conversion in a crystal.
Figure 3 (caption fragment). Microwave power is amplified by a low-noise cryogenic amplifier and mixed down to near audio. The result is digitized and processed with FFT electronics, and the power spectrum is searched for axion signals. Two frequency resolutions, wide and narrow, are optimized for thermalized or non-thermalized (respectively) axions in our Milky Way halo. | 2022-06-28T03:30:12.430Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "8855a1ed63676bc385e72988bf7ab59c41e26360",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/203/1/012008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8855a1ed63676bc385e72988bf7ab59c41e26360",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
50769229 | pes2o/s2orc | v3-fos-license | COGEVIS: A New Scale to Evaluate Cognition in Patients with Visual Deficiency
We evaluated the cognitive status of visually impaired patients referred to low vision rehabilitation (LVR) based on a standard cognitive battery and a new evaluation tool, named the COGEVIS, which can be used to assess patients with severe visual deficits. We studied patients aged 60 and above, referred to the LVR Hospital in Paris. Neurological and cognitive evaluations were performed in an expert memory center. Thirty-eight individuals, 17 women and 21 men with a mean age of 70.3 ± 1.3 years and a mean visual acuity of 0.12 ± 0.02, were recruited over a one-year period. Sixty-three percent of participants had normal cognitive status. Cognitive impairment was diagnosed in 37.5% of participants. The COGEVIS score cutoff point to screen for cognitive impairment was 24 (maximum score of 30) with a sensitivity of 66.7% and a specificity of 95%. Evaluation following 4 months of visual rehabilitation showed an improvement of Instrumental Activities of Daily Living (p = 0.004), National Eye Institute Visual Functioning Questionnaire (p = 0.035), and Montgomery–Åsberg Depression Rating Scale (p = 0.037). This study introduces a new short test to screen for cognitive impairment in visually impaired patients.
Introduction
Visual impairment, defined as visual acuity of 20/40 or less in the best-corrected better-seeing eye, affects over 280 million people worldwide. Excluding curable etiology such as cataracts or refractive disorders, the most frequent causes are age-related, such as macular degeneration, glaucoma, and diabetic retinopathy [1]. The condition of low vision is therefore strongly age-dependent and affects more than 73% of individuals aged over 65 years [2].
Loss of visual acuity impairs many activities of daily living, including reading, cooking, or selecting clothing/dressing. Associated loss of the peripheral visual field may also cause difficulties in detecting obstacles while walking. Low vision rehabilitation (LVR) delivers multidisciplinary training including visual strategies, occupational therapy, and mobility techniques. Optic aids and nonoptic aids such as tactile marking and signature guides can be used [3]. As the effectiveness of such multidisciplinary training is difficult to evaluate, few studies have been published on the subject. One study reported a mild improvement of quality of life after LVR [4].
Even though cognitive and visual impairments are both frequent in the elderly, the relationship between the two disorders is still a matter of debate. Several studies have reported increased cognitive impairment in patients with age-related macular degeneration compared to age-matched subjects [5,6]. In geriatric health services, the percentage of patients with poor visual acuity was extremely high, and patients with visual impairment were found to have lower cognitive scores compared to patients with normal vision [7]. In a large cohort study, the presence of dementia at the time of diagnosis of age-related macular degeneration was not different from what is expected by chance at that age [8]. Moreover, studies assessing the association between dementia and visual impairment are limited due to the fact that many cognitive tests rely on visual skills. Patients presenting a severe loss of vision or blindness can therefore only complete part of the evaluation.
The main objective of this study was the validation of a new scale named the COGEVIS to evaluate the cognitive status of visually impaired patients and therefore detect mild cognitive impairment. The COGEVIS evaluates cognitive function without the use of vision. Our secondary objective was to determine whether LVR was effective in improving quality of life and autonomy among elderly patients with visual impairment.
Patients.
We performed a monocentric/single-site prospective study including adults aged 60 and above, referred to the LVR of "Sainte Marie Hospital" of Paris between April 2015 and April 2016. All patients were referred by an ophthalmologist to the LVR outpatient department after diagnosis. Exclusion criteria were (1) previously established diagnosis of neurodegenerative disorder and dementia, (2) ongoing treatment for cancer or other medical illness that would preclude participation, (3) severe psychiatric disorder, and (4) visual acuity of the best eye above 20/70. The study was approved by the French National Research Ethics Committee (CPP, Comité de Protection des Personnes dans la Recherche Biomédicale).
Evaluation and Care in the Low Vision Rehabilitation Department.
At the initial patient appointment, a detailed ophthalmologic examination was performed, including visual acuity, monocular manual visual field, and binocular manual visual field. Autonomy in daily living was assessed by a neuropsychologist using Lawton's Instrumental Activities of Daily Living (IADL) [9]. The National Eye Institute Visual Functioning Questionnaire (NEI VFQ 25) was used to assess vision-related health status [10]. Depressive symptoms were evaluated with the Montgomery-Åsberg Depression Rating Scale (MADRS) [11]. Cognitive status was evaluated with the COGEVIS (COGnitive Evaluation in VISual impairment), a new scale developed to accommodate impaired vision.
After the first visit, patients followed the low vision rehabilitation program. This consisted of multidisciplinary training performed by optometrists, orthoptists, orientation and mobility instructors, occupational therapists, physiotherapists, and psychologists. Over four months, all patients attended twice-weekly rehabilitation sessions of three hours each. Two hours per week, orthoptists and optometrists worked on improving visual strategies and adjusting devices. Visual strategies were adapted to a patient's functional vision. For example, patients with a central scotoma were trained to use an eccentric fixation, while patients with a constricted visual field were taught how to perform visual scanning. The usefulness of optical devices was also tested: for example, magnifiers were adapted to the smallest readable character size and filters for glare control and lamps were proposed. One hour per week, occupational therapists trained patients to improve their autonomy in activities of daily living. The main domains were cooking, personal care, gesture recognition, self-administrated medication, shopping, and financial management. Communication instructors taught patients about computer use for one hour per week. Equipment was adjusted to each patient's vision, such as large print keyboards, magnification software, audio-screen readers, or text-to-speech converters. Orientation and mobility instructors trained patients twice a week to improve walking and mobility autonomy. This work first focused on posture, balance, and foot placement. Depending on the patient's functional vision, training for use of long canes or white canes was proposed, as well as street crossing and public transport autonomy. Psychologists interviewed patients at the beginning of the program and followed patients twice per month throughout the rehabilitation period.
At the end of rehabilitation, the evaluation battery was again carried out, including the IADL, NEI VFQ 25, MADRS, and COGEVIS scales.
COGEVIS Description.
COGEVIS (COGnitive Evaluation in VISual impairment) is an assessment measure of cognitive disorders that has the particularity of not soliciting patients' visual abilities. It has been designed to be easily applied in everyday practice by various professionals working with visually impaired patients (e.g., orthoptists and medical doctors).
It is largely composed of subtests derived from global efficiency scales: the Mini Mental State Examination (MMSE) [12], the Frontal Assessment Battery (FAB) [13], and a brief evaluation battery of gestural praxis [14], which were adapted to avoid the visual modality. COGEVIS is thus a comprehensive cognitive evaluation tool that does not rely on visual ability to be performed. The scale has a score range of 0-30, with higher scores indicating better function.
The exact French version of COGEVIS (with the verbal instructions and quotation system) and an English translation of it are provided in Supplementary file number 1.
Cognitive Status Categorization at the Institute of Memory and Alzheimer's Disease.
Between the initial appointment and the end of the first month of rehabilitation, patients were evaluated at the Institute of Memory and Alzheimer's Disease (IM2A) of the Pitié-Salpêtrière Hospital in Paris. Evaluation included a battery of neuropsychological tests and a consultation with a senior consultant neurologist specializing in cognitive disorders. The neuropsychological battery involved only tests that could be performed by individuals with a visual deficiency, that is, relying more on auditory-verbal skills than on vision. The assessment was composed of the MMSE, the FAB, the digit span forward and backward, lexical (words starting with P) and categorical (animal names) verbal fluencies, the free and cued selective memory test, analysis of praxis, and the California Verbal Learning Test [15]. At the end of the neuropsychological tests and neurological consultation, a consensual diagnosis was made to determine (1) if the participant had normal cognition, mild cognitive impairment (MCI [16]), or a major neurocognitive disorder based on the DSM-V and (2) in the case of cognitive impairment, what was the most probable underlying cause for it. Depending on test results, an MRI and/or positron emission tomography (PET) was proposed to help confirm diagnosis.
2.5. Statistics. Statistical analysis was performed using the StatView System. Descriptive statistics are presented with mean and standard deviation (SD). A comparison of the participants with normal cognition (on the basis of a consensual clinico-neuropsychological evaluation at IM2A) with those with cognitive impairment (MCI + major cognitive disorder) was performed using Student's t-test for continuous variables, after visually ensuring Gaussian distribution, or the chi-squared test for binary or categorical variables. Also, we evaluated the performance of the COGEVIS to diagnose cognitive impairment in the studied population by examining the receiver operating characteristic (ROC) curves of this test. The Wilcoxon signed-rank test was performed to compare the evolution of the COGEVIS, MADRS, IADL, and NEI VFQ 25 pre- and post-rehabilitation. The level of significance in all analyses was set at p < 0.05.
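As an illustration of the cutoff-selection step described above, the sketch below shows how a score threshold that maximizes sensitivity and specificity (Youden's index) can be read off a ROC curve. The scores and labels are invented placeholder data, not the study's data, and scikit-learn is used here purely for illustration rather than the StatView system named above.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: 1 = cognitively impaired (consensus diagnosis), 0 = normal cognition.
impaired = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
cogevis = np.array([18, 22, 23, 25, 24, 27, 28, 29, 26, 30, 25, 28, 29, 27, 30])

# ROC analysis on the negated score, because lower COGEVIS scores indicate impairment.
fpr, tpr, thresholds = roc_curve(impaired, -cogevis)
auc = roc_auc_score(impaired, -cogevis)

youden = tpr - fpr                 # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(youden))
cutoff = -thresholds[best]         # undo the negation to recover a COGEVIS score threshold
print(f"AUC = {auc:.2f}; impairment flagged for scores <= {cutoff:.0f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```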
Results
Thirty-eight subjects from the LVR of Sainte Marie Hospital of Paris were included in the study. Thirty-two participants completed a neurological evaluation at the IM2A, and 24 completed the follow-up evaluation after LVR. Their mean (±standard error of the mean (SEM)) age was 70.3 ± 1.3 years. The cohort included 17 (44.7%) women and 21 (55.3%) men. They had studied for 10.4 ± 0.8 years and were predominantly right handed (87%). Their visual and cognitive statuses are described in Figure 1. A detailed description of the population including a comparison of the participants with normal cognition with the cognitively impaired ones (MCI + major cognitive disorder) is provided in Table 1. Five patients assessed presented major cognitive disorders; diagnoses included one person with a mixed pathology origin (vascular + AD), 2 with typical AD, and 2 with typical Lewy body dementia (LBD). Administration of the COGEVIS by a neuropsychologist was feasible and quick (less than 10 minutes for all subjects with a mean administration time of 5 minutes in cognitively unimpaired patients). Interestingly, COGEVIS scores were significantly different between the two groups both at baseline and at follow-up evaluation. In addition, while the cognitively normal participant's COGEVIS scores slightly improved between baseline and follow-up evaluation 4 months later, scores of cognitively impaired participants slightly decreased.
Using the ROC curve method to assess the value of the COGEVIS for diagnosing cognitive impairment, the results showed an area under the ROC curve of 0.84. The cutoff point that maximized sensitivity and specificity was 24, with a sensitivity of 66.7% and a specificity of 95% (Figure 2).
Finally, among the 24 subjects who completed a follow-up evaluation after LVR, an improvement of cognition (COGEVIS), functional ability (IADL), quality of life (NEI VFQ 25), and depressive symptoms was observed after 4 months of LVR, as displayed in Table 2.
Discussion
The present study responds to an unmet need for an appropriate diagnostic measure of cognitive impairment in patients with visual deficiency. We developed a new cognitive tool, the COGEVIS, based on the combined expertise of cognitive neurologists, neuropsychologists, and LVR specialists. This new scale is the first comprehensive scale to be validated in a cohort of elderly patients with visual deficiency in whom cognitive impairment is underdiagnosed or wrongly attributed to visual impairment [17]. In this study, COGEVIS was able to identify cognitive impairment with a good diagnostic value (area under the ROC curve of 0.84) and was also used to assess cognitive evolution after LVR. Previous studies screened visually impaired patients with only part of the test omitting items that require image processing. The Leipzig Longitudinal Study of the Aged reported results of part of the MMSE with a maximum total score of 22 instead of 30. Validity of the results was limited, restricted to individuals with very high or very low cognitive performance [18].
Although the small number of participants did not allow for statistical analysis of major neurocognitive disorder etiology, we qualitatively note that the frequency of cognitive impairment in this population was high, compared to the frequency in the general population [19]. We also found that there was a high percentage of LBD among participants presenting cognitive impairment. Importantly, LBD diagnoses in our study were made according to the latest McKeith et al. criteria [20] and not only proposed in the instance of visual hallucinations. In visual deficiency, there is a high prevalence of visual hallucination related to Charles Bonnet syndrome, a condition in which visual hallucinations develop in association with visual deprivation [21][22][23]. This condition does not elicit either Parkinsonism or major cognitive fluctuations, two of the three major clinical criteria for LBD. Moreover, we systematically searched for supportive features such as REM sleep behavior disorder, dysautonomia (constipation, orthostatic hypotension), or anosmia to strengthen diagnosis accuracy. However, Charles Bonnet syndrome may not be a benign disease. In Lapid et al.'s study, after an average follow-up time of 33 months, 26% of patients presenting Charles Bonnet syndrome developed dementia. The most commonly diagnosed form of the dementia was LBD [24]. Factors associated with Charles Bonnet syndrome negative outcome were fear-inducing and longer-lasting hallucination episodes associated with a reduction of daily activities [21]. LBD is a disease in which the primary visual cortex is often hypometabolic on fluorodeoxyglucose PET studies [25]. Chronic visual deficiency may induce a vulnerability of the posterior cortex, favoring the development of Lewy body dementia. However, this is not substantiated in our study, as neither the degree nor the duration of visual deficiency was associated with cognitive performance. Other factors, which could not be evidenced from this study, could be assessed in a larger cohort of visual deficiency patients followed longitudinally. The evolution in scores between initial and follow-up assessments after 4 months indicates the efficacy of LVR to improve visually impaired patients' functional abilities and their quality of life. This contributes further evidence in addition to the few studies published on the improvement of quality of life [4] and ongoing utility of LVR in treating patients aged 60 and above. Interestingly, LVR improved cognition according to the significant increase in COGEVIS scores possibly related to the learning of new strategies for planning and organization. We could identify a subgroup of patients with pre-LVR cognitive impairment who did not benefit from LVR, as their post-LVR COGEVIS scores were lower than those at baseline. However, this result does not invalidate LVR in patients with cognitive impairment. Firstly, the low number of cognitively impaired participants does not allow us to draw general conclusions about this result following LVR.
Secondly, a specific study, focused only on LVR efficacy in this subgroup of patients, should be conducted in a randomized, placebo-controlled, double blind trial in order to assess the true impact of this therapy. Our study emphasizes that knowing the cognitive status of visually impaired patients before LVR is critical to inform the patient and his family of possible outcomes and adjust expectations regarding LVR.
In previous studies, loss of visual acuity has been reported to be significantly associated with depression [26][27][28]. Interviews of visually impaired patients older than 60 pointed out the high prevalence of depression in this population (more than 30%) compared to normally sighted peers [29]. In our study, MADRS scores used to assess depressive symptoms showed above average rates of depression among visually deficient patients and higher rates when visual deficiency was associated with cognitive impairment. Depressive symptoms also decreased after LVR. Therefore, discrimination between purely depressive syndromes with cognitive complaints and cognitive impairment due to neurodegenerative diseases is important to adapt the objectives and indication of LVR. COGEVIS could be a suitable test to separate these two syndromes.
Conclusion
COGEVIS is a new, simple, and useful test to screen for cognitive impairment in visually impaired patients. It can also help in the assessment of therapeutic interventions (e.g., LVR) in this population.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
Claire Meyniel and Stéphane Epelbaum conceived and drafted the manuscript and are the guarantors for the content. Dalila Samri, Farah Stefano, Joel Crevoisier, Florence Bonté, Raffaella Migliaccio, Laure Delaby, Anne Bertrand, Marie Odile Habert, and Bruno Dubois edited the manuscript. | 2018-08-06T13:39:54.133Z | 2018-06-25T00:00:00.000 | {
"year": 2018,
"sha1": "5217c6b0974838acf57f0397f7681f261c80f84b",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bn/2018/4295184.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18224ce0c876dc6c458e09441cf1dbf57671c26b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
86784680 | pes2o/s2orc | v3-fos-license | We need to educate young lung cancer patients about menopause risk
Background Approximately 220,000 Americans are diagnosed with lung cancer each year, leading to 150,000 deaths annually [1]. While the incidence of lung cancer has been decreasing in men, among women the rate has plateaued after increasing for years [1]. A recent study revealed that the incidence of lung cancer in young women has surpassed that of young men between 30 and 54 years of age [2]. These recent trends underscore the importance of understanding the effect of lung cancer treatments on fertility and menopause in young women. Chemotherapy-associated infertility and premature menopause are known to be frequent concerns among young women diagnosed with other cancers, sometimes impacting cancer-directed therapy decisions and quality of life [3,4]. Amenorrhea (particularly when it is long lasting) is a surrogate for gonadotoxicity in women. Anticancer drugs may diminish fertility and lead to menopause via ovarian atrophy, stromal fibrosis and vascular toxicity [5]. Various types of chemotherapy have been shown to destroy rapidly growing mature ovarian follicles and to induce apoptosis in primordial ovarian follicles [5]. Chemotherapy-induced infertility is most burdensome for younger patients (who have more frequently not completed their desired child-bearing). Female patients treated for cancer during childhood go on to have half as many live births as their sisters who did not get chemotherapy [6]. Alkylating agents such as cyclophosphamide are known to be more gonadotoxic than many other classes of chemotherapeutics and higher doses of cyclophosphamide are most problematic [7]. An analysis of patients with breast cancer enrolled in the International Breast Cancer Study Group Trials V and VI revealed that time to menopause after receiving cyclophosphamide, methotrexate and 5-fluorouracil is dose-dependent; in women younger than 35 who received one or no cycles of cyclophosphamide, methotrexate and 5-fluorouracil, 37% were menopausal in 5 years, significantly less than the 65% of women under 35 who received six or seven cycles [8]. Risk of ovarian toxicity increases with age; amenorrhea occurs at least temporarily in more than 80% of premenopausal women treated with the more modern combination of anthracycline, taxane and cyclophosphamide for early stage breast cancer, but nearly half of women less than 40 eventually resume menses while less than 5% of those over age 50 do [9]. Similarly, in lymphoma patients, treatment regimens that contain high doses of alkylating agents are associated with the highest risk of menopause and risks are age dependent [10]. While the gonadotoxic effects of many standard treatment regimens for breast cancer and lymphoma are well studied, the risk of menopause and infertility in premenopausal women with lung cancer remains uncertain. In small cell lung cancer, cisplatin plus etoposide is the standard first-line chemotherapy treatment for both limited and extensive stage disease. Platinum-based chemotherapy regimens are frequently first-line choices for non-small-cell lung cancer in both the nonmetastatic and metastatic setting [11]. A patient with metastatic disease often instead receives tyrosine kinase inhibitors (TKIs) as first-line treatment if the tumor has a targetable mutation. Immunotherapy plus chemotherapy is recommended for tumors with low PDL-1 expression, whereas immunotherapy alone is used in patients with high PDL-1 expression [11]. The gonadotoxicity of these drugs is understudied; while cisplatin is
known to cause significant atresia of ovarian follicles and apoptosis in granulosa cells in rats [12], rates of amenorrhea during and after cisplatin are less clear in humans. It is known that nearly all men who receive cisplatin-based chemotherapy for testicular cancer experience at least temporary azoospermia but 50% recover by 2 years and 80% by 5 years [13]. This is similar to the rate and duration of azoospermia during and after cyclophosphamide, doxorubicin, vincristine and prednisone (CHOP) for non-Hodgkin lymphoma [14].
Our recent longitudinal study analyzed the risk of menopause in 182 premenopausal women treated for lung cancer between 1999 and 2016. Of these, 85 received platinum-based chemotherapy, while 97 did not. Overall, 55% of women who received chemotherapy reported becoming menopausal within 2 years of treatment compared with only 31% of women who received no treatment or targeted therapy [15]. Interestingly, the rate of menopause among women receiving doxorubicin-cyclophosphamide (AC) for breast cancer historically is similar to the rate we identified in these young women who received chemotherapy for lung cancer. While the heterogeneity of populations in the aforementioned studies makes it difficult to draw direct comparisons, these results do suggest that the risk of gonadotoxicity with common platinum-based regimens for lung cancer may mimic that of standard doses of cyclophosphamide given for breast cancer (usually 2400 mg/m2).
Our study included too few patients treated with immunotherapy and TKIs alone to assess this. Preclinical studies have revealed that epidermal growth factor receptor expression is a required component of ovarian maturation [16]. Consequently, epidermal growth factor receptor-targeting TKIs could plausibly disrupt normal ovarian function, though larger clinical studies are needed to address these questions.
Treatment of menopausal symptoms
Diminished ovarian function often leads to the onset of amenorrhea and moderate-to-severe menopausal symptoms after chemotherapy such as hot flashes, sleep disturbance, fatigue and mood changes [17].
For mild vasomotor symptoms, behavior modification such as lowering the room temperature, cooling fans, and weight loss has demonstrated efficacy for reducing hot flashes and night sweats [17]. Women who have moderate to severe vasomotor symptoms are often treated with medications. Selective serotonin reuptake inhibitors have been shown in multiple randomized controlled trials to reduce hot flashes by as much as 40% without a significant increase in adverse events compared with placebo. Gabapentin and pregabalin have also been studied in menopausal women and have demonstrated efficacy in reducing hot flashes. However, they are generally used as second-line drugs in women who do not achieve adequate symptom control with a selective serotonin reuptake inhibitor. Oxybutynin, clonidine, and stellate ganglion blockade are other potentially promising therapies for reducing severe vasomotor symptoms [17].
In addition to vasomotor symptoms, women who experience premature menopause often have genitourinary symptoms of menopause (GSM) such as vaginal dryness and discomfort with sexual activity. While GSM symptoms significantly impair quality of life, women are unlikely to discuss such symptoms with their healthcare provider [17] and oncologists may be less likely to ask patients with lung cancer about GSM than patients who are receiving endocrine therapies for breast or gynecologic cancers. A number of therapies may improve GSM symptoms, including nonpharmacologic methods, such as vaginal lubricants, topical lidocaine and hyaluronic acid gel or pharmacologic preparations such as vaginal DHEA and estrogen [17]. Because lung cancers do not seem to be hormonally driven (despite evidence of some hormone receptor expression), and because hormone replacement therapy (HRT) does not increase lung cancer incidence, HRT may also be considered (though safety and efficacy of HRT have not been well studied in lung cancer survivors specifically) [18].
Options for fertility preservation
Because of the risk of early menopause, premenopausal women should be counseled about fertility preservation options prior to initiating lung cancer therapy.
Embryo cryopreservation is the most well-established method of maintaining fertility. However, embryo cryopreservation is only a viable option for women who have a male partner or wish to use a sperm donor. For others, the cryopreservation of mature and immature oocytes may be available [19]. Vitrification, which allows for rapid freezing of oocytes, has led to greater success rates with oocyte cryopreservation than were previously possible [19]. Ovarian tissue cryopreservation remains investigational, though several small studies have reported successful live births with this technique [19].
As the aforementioned options may delay therapy for 2-6 weeks, ovarian function suppression may also be an attractive alternative. There is some controversy about the value of suppressing ovarian function during chemotherapy with gonadotropin-releasing hormone agonists to reduce the vulnerability of maturing ovarian follicles to cytotoxic chemotherapy. While these agents appear to increase the rate of ovulation and menses after chemotherapy for breast cancer, no study has shown a definitive increase in the number of live births with the use of ovarian suppression during chemotherapy [19]. Consequently, the American Society of Clinical Oncology (ASCO) guidelines do not include gonadotropin-releasing hormone agonists as recommended fertility preservation technique for patients with cancer [20].
Recommendations
While we await larger studies of the impact of specific lung cancer therapies on ovarian function, it is important for clinicians to counsel young women that systemic therapy for lung cancer may increase their risk of infertility and premature menopause. Those who have not yet completed their desired childbearing at the time of a lung cancer diagnosis should be referred to reproductive endocrinology prior to systemic therapy initiation to consider fertility preservation (e.g., oocyte and/or embryo cryopreservation). In addition, clinicians should ask about and offer therapies for the genitourinary and vasomotor symptoms of ovarian dysfunction during and after lung cancer treatment. | 2019-03-28T13:33:24.679Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "63b587ab3c2d297db14dbb69ba56856a1a9e31e3",
"oa_license": "CCBYNCND",
"oa_url": "https://www.futuremedicine.com/doi/pdf/10.2217/lmt-2018-0018",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63b587ab3c2d297db14dbb69ba56856a1a9e31e3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268585999 | pes2o/s2orc | v3-fos-license | Multimodal imaging in Ig G4- related aortitis: case report
ABSTRACT We present the case of a 56-year-old patient with fever of unknown origin associated with chest and lumbar pain. Multimodality imaging revealed diffuse peri-aortitis in the thoracic aorta without involvement of the aortic valve, contributing substantially to the diagnosis of Ig G4-associated aortitis. Immunosuppressive therapy was started. Follow-up at five months with cardiac magnetic resonance imaging showed a reduction in the inflammatory process in the thoracic aorta.
Introduction
Aortitis is an infrequent cause of chest pain, whose etiology is associated with rheumatologic, infectious, or idiopathic diseases. Histopathological vascular examination is the gold standard for diagnosis (1,2). A late diagnosis can be catastrophic, leading to arterial stenosis and multiple organ failure; therefore, starting early therapy should be the principal objective. Currently, noninvasive diagnosis can be made using imaging studies, ensuring an early diagnosis and avoiding percutaneous invasive procedures (3,4). We present a case report to show the importance of multimodal imaging for the diagnosis of this entity.
Case Report
A 56-year-old woman came to the emergency room with fever of unknown origin associated with chest and lumbar pain for the last four months. In addition, she had intermittent nausea and vomiting. Her medical history included arterial hypertension, obesity, and deep vein thrombosis of the lower limbs. On physical examination, heart rate was 104 beats per minute, blood pressure was 160/84 mmHg in the right arm and 164/86 mmHg in the left arm. Thoracic auscultation showed no significant findings, and the evaluation of radial, humeral, femoral, and pedal pulses showed no abnormalities.
In laboratory tests, the complete blood count showed no alterations; however, the erythrocyte sedimentation rate was 53 IU/L, and the C-reactive protein was 22.6 IU/L. On the other hand, the metabolic profile, assessment of syphilis (VDRL), and immunological profile, including tests for lupus, ANCA, and IgG4, showed no abnormalities.
Initially, computed tomography was indicated to explore the thoracic vessels. This study showed a concentric, hypoattenuated, and thickened lesion of 1.2 cm at the proximal ascending thoracic aorta, and dense tissue with scant enhancement surrounding the aorta, compatible with periaortitis extending from the ascending aorta, aortic arch, subclavian arteries, and infrarenal aorta to the common iliac arteries (Figure 1A).
With those findings, transesophageal echocardiography was recommended, which clearly showed diffuse wall thickening from the proximal ascending aorta to the proximal third of the descending aorta (maximum thickness of 15 mm). The circumferential involvement was evidenced in the tri-planar image of the thoracic aorta (Figure 2, video 1). In the aortic valve exploration, mild aortic regurgitation was found, and in the rest of the study, no relevant alterations were reported (video 2).
Finally, cardiac magnetic resonance imaging (MRI) showed, in the T1 sequence, a thickened ascending aorta wall (13 mm) with a hypointense signal (Figure 1B) and enhancement of the descending aortic wall after administration of intravenous contrast, and, in the T2 sequence, a hyperintense area circumferential to the aorta, extending from the ascending aorta to the aortic arch and the descending aorta, compatible with an inflammatory process (Figure 1D). Unfortunately, a percutaneous biopsy was not performed due to the logistic and expertise limitations of the center.
Once the findings were determined, the rheumatology department, with clinical and mainly radiological criteria, made the diagnosis of aortitis associated with IgG4; for this reason, the patient began to receive prednisone 20 mg twice a day, and one month later azathioprine daily. At the fifth month of treatment, she reported no fever and no chest or lumbar pain. On the other hand, CT angiography and cardiac MRI revealed an evident decrease in aortic thickness (Figure 3).
Discussion
Inflammation of the aortic wall layers can be caused by a wide variety of systemic diseases, from rheumatologic disorders and infectious diseases to idiopathic entities (1,2). Cardiovascular imaging plays an important role in the diagnosis and identification of complications (3).
Computed tomography is a non-invasive procedure, which explores the whole thoracic aorta and its different morphostructural presentations such as inflammatory vasculitis, aneurysmal change, and pseudotumor formation. It is the test of choice for the detection of aortitis, due to its high sensitivity and specificity (95% and 100%, respectively), and it can also be synchronized with electrocardiography, particularly for evaluation of the coronary arteries, avoiding percutaneous invasive procedures (4).
MRI can detect complications of the aortic wall such as thickening, ulceration, and formation of pseudoaneurysms and aneurysms, with greater sensitivity than tomography. Likewise, it is useful in differentiating disease stages: the T1 sequence shows hypointense images related to chronic periarteritis, whereas the T2 sequence shows hyperintense images in the early and active stages of the disease due to inflammatory edema, while a low-intensity signal is commonly observed in the late fibrotic stages (5).
In this entity, transthoracic echocardiography (TTE) is particularly relevant for determining the presence of aortic regurgitation and providing signs of dilatation or thickening of the ascending aorta (6).
Transesophageal echocardiography (TEE) provides realistic, high temporal-spatial resolution images of the thoracic aorta, except for the distal third of the ascending aorta (6). Additionally, it defines with high precision pathologies such as atherosclerosis, including the presence of plaques, aortic dissection, and inflammatory processes, as in our case (6). An additional utility of TEE is to pinpoint the location of the structural abnormality to guide the biopsy and
even the surgical act; likewise, TEE can identify immediate complications following the intervention (7,8).
New imaging tools are being developed, such as nuclear medicine imaging with fluorine-18 fluorodeoxyglucose (18-FDG), which evaluates increased glucose uptake in the vessel walls and offers high sensitivity, especially in the early phase of the disease (9), with reported sensitivities varying between 56% and 100%.
Aortitis is part of a long list of differential diagnoses of chest pain (1), as was the case in our patient. IgG4-related aortitis can be difficult to diagnose in the absence of other organ involvement (2). As mentioned above, histological examinations were not performed, so the rheumatologic medical team decided to initiate treatment for this entity based primarily on clinical and radiological findings.
Among the differential diagnoses is giant cell arteritis; however, the patient did not present headache or pain in the temporal artery, and the diagnostic imaging evidenced an absence of involvement of the extracranial carotid branch (1). Takayasu's disease was excluded by physical examination; there was no difference in pulse between the arms, and the tomographic findings were diffuse and not obstructive. On the other hand, systemic lupus was not considered because of negative clinical and serological markers (2). Syphilitic aortitis was eliminated from the list of diagnoses due to the absence of epidemiological history, negative VDRL test, and non-localized aortic involvement (10).
Other infectious aortitides were ruled out because the patient did not have a septic appearance, the hemogram showed no relevant findings, and the disease duration was prolonged (11).
In conclusion, IgG4 aortitis is a systemic inflammatory disease whose manifestations range from non-specific symptoms to the involvement of large vessels, which can potentially generate tragic complications such as myocardial ischemia, aortic dissection, or rupture.
Multimodal imaging has an essential role in determining the diagnosis of IgG4-associated aortitis, even more so when a histological study is not available.Its usefulness extends to follow-up, to determine response to immunosuppressive and anti-inflammatory therapy.
Figure 1. (A) Pre-treatment contrast computed tomography angiography reformat showed a concentric hypoattenuated lesion of 77 HU with thickening of 1.2 cm at the proximal ascending thoracic aorta. (B) Pre-treatment post-contrast resonance angiography showed a thickened ascending aorta wall (13 mm) with a hypointense signal (blue arrow). (C) Pre-treatment T1w sequence MR in the same patient demonstrated enhancement of the descending aortic wall after administration of intravenous contrast (white arrow). (D) Pre-treatment double inversion recovery fat-sat T2w sequence MR in the same patient at different levels shows a diffuse hyperintense lesion of the aortic wall compatible with periaortic edema (white arrow).
Figure 2. (A) TEE. Triplanar image of the ascending aorta, showing the arterial wall with asymmetric thickening of 14 mm in the posterior aspect "P" and 8 mm in the anterior aspect "A". (B) TEE. Triplanar image of the descending thoracic aorta, visualizing the arterial wall with homogeneous circumferential thickening of 5 mm. | 2024-03-22T15:45:40.594Z | 2024-01-04T00:00:00.000 | {
"year": 2024,
"sha1": "41b06c5dd692388fbbaafc646709210d297a1365",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "521831c6b3bc42f357fec457018d26ac869fbf33",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
A general basis set algorithm for galactic haloes and discs
We present a unified approach to (bi-)orthogonal basis sets for gravitating systems. Central to our discussion is the notion of mutual gravitational energy, which gives rise to the self-energy inner product on mass densities. We consider a first-order differential operator that is self-adjoint with respect to this inner product, and prove a general theorem that gives the conditions under which a (bi-)orthogonal basis set arises by repeated application of this differential operator. We then show that these conditions are fulfilled by all the families of analytical basis sets with infinite extent that have been discovered to date. The new theoretical framework turns out to be closely connected to Fourier-Mellin transforms, and it is a powerful tool for constructing general basis sets. We demonstrate this by deriving a basis set for the isochrone model and verifying its numerical reliability by reproducing a known result concerning unstable radial modes.
Introduction
Orthogonal basis sets play a key role in the efficient calculation of the gravitational potential of perturbed, isolated mass distributions. They also have great value for investigating the stability of dynamical models for galaxies. Both these topics have attracted renewed interest recently in light of the mounting observational evidence that the Milky Way and other galaxies are not as symmetric in shape as assumed previously (Vera-Ciro & Helmi 2013;Law & Majewski 2010), and moreover may not be in exact dynamical equilibrium (Erkal et al. 2021;Petersen & Peñarrubia 2021).
A small sample of recent applications of basis sets includes: efficiently reconstructing individual trajectories in time-varying snapshots of N-body simulations of dark matter haloes (Lowing et al. 2011; Sanders et al. 2020); flexible non-parametric models for the Milky Way (Garavito-Camargo et al. 2021); and a wide variety of perturbation calculations (Hamilton et al. 2018; Fouvry & Prunet 2022).
The development of these so-called 'biorthogonal' basis sets begins with Clutton-Brock (1972, 1973), who introduced two remarkable analytical sets of potential-density pairs based on the Kuzmin (1956) disc and the Plummer (1911) sphere respectively. These mathematical discoveries (along with some later results discussed below), while fortunate, are limited. It has long been recognised that to make best use of the basis set technique, one would prefer complete freedom in the choice of zeroth-order model (as well as of the underlying coordinate system and geometry), while making minimal sacrifice of computational efficiency.
To this end, there are basically three possible directions of generalisation. One might hope to have the good fortune of finding other 'analytical' basis sets, taking some known model as the zeroth-order potential-density pair and finding that, by some ingenious change of variables or integral transform, a set of orthogonal higher-order functions can be written down. This approach is limited but has provided a handful of further results in both spherical polar coordinates (Hernquist & Ostriker 1992; Zhao 1996; Rahmati & Jalali 2009; Lilley et al. 2018b,a) and for infinitesimally thin discs (Kalnajs 1976; Qian 1993). Generally speaking, for both spheres and thin discs, basis sets exist for some double power-laws and for certain types of exponential distributions of mass.
Secondly, one could posit an arbitrary sequence of nonorthogonal potential-density pairs, and from them derive an orthogonal set using the Gram-Schmidt algorithm. This is the approach of Saha (1993); Robijn & Earn (1996). The downsides are the large number of expensive numerical integrations required to compute the required inner products, the numerical instability inherent to the Gram-Schmidt process, and the uncertain completeness or convergence properties of the resulting orthogonal basis.
Lastly, the strategy devised by Weinberg (1999) generalises Clutton-Brock (1973)'s original result directly by noticing that the potential-density relation takes the form of a Sturm-Liouville eigenfunction equation with a certain weight function; by choosing a different weight function and using a numerical Sturm-Liouville solver, a different set of eigenfunctions (and hence basis set) can be found. This approach has the upside that certain guarantees about completeness and convergence can be made, but the downside that the resulting eigenfunctions must be tabulated numerically on a coordinate grid.
In this paper we describe a different generalisation of Clutton-Brock's original results -we jettison the eigenfunction equation but retain a three-term recurrence relation.
Essentially our approach is motivated by the observation that the extant basis sets so far described in the literature (those of analytical form and infinite extent) admit the curious property of tridiagonality with respect to a radial derivative operator. That is, for a given density basis function ρ_n(r) (suppressing the angular indices and coordinates), the following holds,
r ∂ρ_n/∂r = a_n ρ_{n−1} + b_n ρ_n + c_n ρ_{n+1},   (1.1)
where a_n, b_n, c_n are constants. This may seem to be merely a curiosity, but upon further reflection it motivates a far-reaching generalisation: armed with just knowledge of an arbitrary (smooth) zeroth-order basis element, the tridiagonality property (1.1) allows us to build up an entire ladder of basis elements recursively, using just one additional integral per recursive step. The resulting basis elements are linear combinations of derivatives of the zeroth order and hence require no further interpolation. Along these lines, in Sec. 2 we present an algorithm to generate general basis sets from arbitrary zeroth-order potential-density pairs. Underlying this main result is a link to the general theory of orthogonal polynomials, which motivates us to claim completeness of the resulting basis sets. This theoretical background is discussed in Sec. 3, where we introduce the Fourier-Mellin transform, and show a correspondence between tridiagonal orthogonal basis sets and orthogonal polynomials in the transformed space. Key to this link is the notion of the gravitational self-energy inner product, and an operator (D) that is self-adjoint with respect to it.
The new approach was, in part, first suggested implicitly by Kalnajs (1976), who introduced the Fourier-Mellin transform (though not naming it as such) in the case of thin discs, but nevertheless only used it to rederive the Clutton-Brock (1972) basis set. Those results are partially repeated (using our updated notation) in Sec. 4.2, where we show that a formalism equivalent to the spherical case exists for thin discs in cylindrical polar coordinates, along with a similar self-adjoint operator (A).
As further motivation for our new algorithm, in Sec. 4 we demonstrate concretely how the formalism applies to some existing basis sets in the literature. Specifically we show that the two major families of basis sets -corresponding to double-power laws in the spherical (Lilley et al. 2018a) and thin disc (Qian 1993) scenarios (along with their various limiting forms) -both possess the tridiagonality property, and hence each admit a representation in terms of a polynomial in D or A respectively.
In Sec. 5 we return to the general algorithm described in Sec. 2, and discuss the numerical and computational issues that arise when trying to implement it in practice. In particular it is necessary to find a fast, stable method to evaluate the requisite numerical integrals. This is most easily accomplished using Gauss-Laguerre quadrature in the transformed (Fourier-Mellin) space, first computing the underlying system of orthogonal polynomials. The recommended procedure is illustrated with the case of the isochrone model, which we use in Sec. 5.4 to recover a known result about unstable radial modes.
Finally in Sec. 6 we discuss some of the geometric ideas underlying the new formalism. We outline how our results might be extended to other geometries or coordinate systems relevant to the study of realistic galaxies, and give an outlook on future work to be done in the area.
Description of algorithm
First we make some new definitions as well as recapitulating the standard terminology. We define the self-energy inner product ⟨·, ·⟩ on mass densities,
⟨ρ_1, ρ_2⟩ = ∫ d³r ∫ d³r′ ρ_1(r) ρ_2^*(r′) / |r − r′|.   (2.1)
This is sometimes referred to as the mutual gravitational potential energy of ρ_1 with respect to ρ_2. Of course, the total self-energy is just ‖ρ‖² = ⟨ρ, ρ⟩, which here must clearly always be real and positive (although the normal convention is for this quantity to be negative, the overall choice of sign is irrelevant for our purposes). It is important that (2.1) obeys the standard properties of an inner product: it is linear in its first and conjugate linear in its second argument. Generically we allow mass densities to be complex-valued, as it eases some of the following derivations; however the entire formalism (necessarily) also works in the case of purely real mass densities. We are also not limited to densities with finite total mass, only finite total self-energy. Finally we note that if we have a solution to Poisson's equation for ρ_1 and ρ_2, finding their gravitational potentials to be Φ_1 and Φ_2, then (using Green's identities) we can rewrite the inner product (2.1) either as an integral of ρ_1 against Φ_2 (2.2), or alternatively as an integral over the gradients of the two potentials (2.3); both forms are written out below, after the notation is established. We set the gravitational constant G = 1 throughout. Now we introduce both spherical polar coordinates (r, ϕ, ϑ) and cylindrical polar coordinates (R, ϕ, z), the latter being used here only in the situation where the mass density is confined to a thin disc aligned with the z-axis. We define two important operators: D (2.4), acting on spherical densities, and A (2.5), acting on razor-thin surface densities; as noted below after (2.18), −iD acts on radial profiles as r∂_r + 5/2, with −iA the analogous first-order radial operator for discs. These have the important property of being self-adjoint with respect to the inner product (2.1) (see App. A for a proof), i.e. ⟨Df, g⟩ = ⟨f, Dg⟩ (2.6),
and (when f and g are thin discs) ⟨Af, g⟩ = ⟨f, Ag⟩ (2.7). Our standard notation for basis sets is as follows. We denote by {ρ_nlm} a complete basis for the set of smooth mass densities of finite self-energy satisfying the conditions (2.8). The set {ρ_nlm} is assumed orthogonal with respect to (2.1),
⟨ρ_nlm, ρ_{n′l′m′}⟩ = N_nlm δ^{n′l′m′}_{nlm},   N_nlm = −K_nl N_nl.
(2.9) These basis functions are the product of radial and angular components, where K_nl are constants factored out of ρ_nl just to simplify the expressions, and ∇²_l is the radial part of the Laplacian when operating on (radial functions) × (a spherical harmonic of order l) (2.12). The purely radial functions ρ_nl(r) and Φ_nl(r) are real-valued, and satisfy a 'bi-orthogonality relation'; for this reason such basis sets are traditionally referred to as bi-orthogonal. Note that we take Y_lm throughout to be a unit-normalised (complex) spherical harmonic. If non-orthonormal spherical harmonics are employed then N_nlm must contain the appropriate factor that normalises them. The radial functions Φ_nl and ρ_nl are typically functions of the quantity r/r_s, where r_s is some 'scalelength' with units of length; we will generally use r_s = 1 implicitly. (Explicit length units can be reintroduced by writing Φ_nl(r/r_s) and ρ_nl(r/r_s) and then adding the correct number of powers of r_s in whatever expression they are used; such r_s-dependency cancels out in the operators D and A.) An analogous notational convention is used throughout for the case of a thin disc. We write {σ_nm} to represent a complete basis (2.14). In an abuse of notation we suppress the z-dependence and elide the quantities which have subscript nm, writing the potential in the disc plane as in (2.15). We now describe a natural method for deriving basis sets with any smooth analytical zeroth-order element. We will focus on the spherical case, and afterwards describe the (slight) changes required in the thin disc case.
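For reference, the two Green's-identity rewrites of (2.1) mentioned above take the following form, assuming the standard convention ∇²Φ_i = 4πρ_i with Φ_i → 0 at infinity; the paper's own equations (2.2)-(2.3) may fix signs and normalisation differently.

```latex
% Hedged reconstruction, assuming \Phi_i(\mathbf{r}) = -\int \rho_i(\mathbf{r}')/|\mathbf{r}-\mathbf{r}'|\,\mathrm{d}^3r'
% and the positive-definite sign convention for the self-energy used in the text.
\langle \rho_1, \rho_2 \rangle
  = \int \mathrm{d}^3 r \int \mathrm{d}^3 r' \,
    \frac{\rho_1(\mathbf{r})\,\rho_2^{*}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}
  = -\int \mathrm{d}^3 r \, \rho_1(\mathbf{r})\,\Phi_2^{*}(\mathbf{r})
  = \frac{1}{4\pi}\int \mathrm{d}^3 r \, \nabla\Phi_1 \cdot \nabla\Phi_2^{*} .
```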
The first step is to choose a suitable zeroth-order potential, which we denote Φ(r). This can be chosen according to the problem at hand, the only requirements being that it must be a smooth, spherically-symmetric function of r, and the potential-density pair must have finite total gravitational self-energy. Starting from Φ we must then invent a function Φ_0l(r) that provides the zeroth radial order for the higher multipoles indexed by l. This function must satisfy two boundary conditions: Φ_0l ∼ r^l as r → 0 and Φ_0l ∼ r^{−l−1} as r → ∞ (for models with infinite enclosed mass the potential can contain an additional factor of log r as r → ∞). One way to achieve this is to take the form given in (2.16), but any choice with the correct asymptotic behaviour will do just as well; other choices may be preferable from an analytical point of view, for example Φ_0l = r^l Φ(r)/(1 + r)^{2l}, as suggested by Saha (1993). Once Φ_0l is chosen, the corresponding density multipoles ρ_0l are fully determined by the radial Poisson equation (2.17), in which K_0l is an arbitrary constant chosen to simplify the algebra. The defining relation for the basis set with zeroth order ρ_0l is the differential-recurrence relation,
ρ_{n+1,l} = (r∂_r + 5/2) ρ_{nl} + β_{nl} ρ_{n−1,l},   (2.18)
with initial condition ρ_{−1,l} = 0, and where β_nl are some (as yet undetermined) constants. Note that the operator applied to the ρ_nl term on the RHS is equal to −iD (2.4). We can immediately write down a similar recurrence for the potential elements,
Φ_{n+1,l} = (r∂_r + 1/2) Φ_{nl} + β_{nl} Φ_{n−1,l},   (2.19)
due to the commutation relation between D and the radial Laplacian ∇²_l (see App. B). By taking the inner product of (2.18) with both ρ_{n+1,l} and ρ_{n−1,l}, and exploiting the self-adjointness property (2.6), we find that the constants β_nl are given by
β_nl = ⟨ρ_nl, ρ_nl⟩ / ⟨ρ_{n−1,l}, ρ_{n−1,l}⟩.   (2.20)
This is just the ratio of the gravitational self-energies of the nth and (n − 1)th basis elements. Because the RHS of (2.18) depends only on the nth and lower elements, we can now build up the entire sequence of basis elements by alternating applications of (2.18) and (2.20). This deceptively simple algorithm leaves some unresolved issues:
1. Are these basis sets truly complete?
2. How do we deal with the differentiation required in (2.18)?
3. Are the numerical integrals in (2.20) stable?
We can give at least convincing heuristic answers to these questions. The question of completeness we consider in the course of the theoretical discussion in Sec. 3.2. The repeated differentiation will in general require some form of symbolic or automatic differentiation, which we discuss in Sec. 5.3 -unless the specific form of the zeroth-order allows for a simplification. The question of numerically calculating the recurrence coefficients β nl is thorny, and we return to it in Sec. 5 after developing in Sec. 3 the theoretical machinery that links these basis sets to the theory of general orthogonal polynomials.
Our resulting basis elements are linear combinations of the higher-derivatives of the zeroth-order functions: {D n ρ 0lm } in the case of the density, and {D n Φ 0lm } in the case of the potential. This means that, given a closed-form zeroth-order, all higher elements are generated through differentiation -no numerical interpolation is required, unlike Weinberg (1999)'s algorithm based on Sturm-Liouville eigenfunctions. In fact, given a particular zeroth-order, a basis computed via the Sturm-Liouville approach will not in general coincide with the basis set developed from our own algorithm, except for certain special cases that are known to obey eigenfunction equations (for example the Zhao (1996) basis sets).
In addition, unlike Saha (1993), we are able to avoid the brute force approach of Gram Schmidt orthogonalisation (with complexity O(n 2 ) in the number of inner products, and uncertain numerical stability). This is due to the self-adjointness of the operator D, which ensures that each basis set maps onto an underlying orthogonal polynomial in Fourier-Mellin space, a mathematical connection elaborated upon in Sec. 3. Thus we can reuse the large body of literature regarding the construction of general orthogonal polynomials, the most important property being that any set of orthogonal polynomials obeys a three-term recurrence relation -this relation is transferred over to the basis set, manifesting as the differential-recurrence relation (2.18).
Lastly we note that in the case of a thin disc the surface densities σ_nm obey an analogous fundamental differential-recurrence relation, in which the operator applied to σ_nm on the RHS is now −iA (2.5); the algorithm is otherwise identical to the spherical case.
In both the spherical and thin disc case the algorithm can be initialised by choosing either the zeroth-order potential or the zeroth-order density; but starting with a density may be more difficult in the thin disc case as analytical potential-density pairs are harder to come by. The required boundary conditions on ψ 0m (with azimuthal index m standing in for l) are unchanged, as is the requirement of smoothness and finite self-energy.
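To make the recursion concrete, the following is a minimal Python sketch of the spherical l = 0 case: the radial operator (r d/dr + 5/2), i.e. −iD, is applied symbolically, and the self-energy inner product is evaluated by a direct double integral (for spherically symmetric densities, angular averaging of 1/|r − r′| gives 1/max(r, r′)). This is an illustrative stand-in rather than the authors' Julia implementation: the Plummer-like zeroth-order density, the truncation radius, and the quadrature are assumptions made here for demonstration, and the brute-force double integral is exactly the expensive approach that Sec. 5 replaces with quadrature in Fourier-Mellin space.

```python
import sympy as sp
from scipy import integrate

r = sp.symbols("r", positive=True)

# Assumed zeroth-order (l = 0) density for illustration: a Plummer-like profile.
rho0 = (1 + r**2) ** sp.Rational(-5, 2)

def apply_operator(rho):
    """Apply the radial operator (r d/dr + 5/2), i.e. -iD, to a symbolic density."""
    return sp.simplify(r * sp.diff(rho, r) + sp.Rational(5, 2) * rho)

def self_energy(rho_a, rho_b, r_max=50.0):
    """<rho_a, rho_b> for l = 0: (4 pi)^2 * int r^2 r'^2 rho_a(r) rho_b(r') / max(r, r') dr dr',
    evaluated by brute-force quadrature on a truncated radial range."""
    fa = sp.lambdify(r, rho_a, "math")
    fb = sp.lambdify(r, rho_b, "math")
    pi = 3.141592653589793
    integrand = lambda rp_, r_: (4 * pi) ** 2 * r_**2 * rp_**2 * fa(r_) * fb(rp_) / max(r_, rp_)
    val, _ = integrate.dblquad(integrand, 0.0, r_max, 0.0, r_max)
    return val

# Build the ladder rho_0, rho_1, ... by alternating (2.18) and (2.20).
rhos = [rho0]
norms = [self_energy(rho0, rho0)]                     # I_0
for n in range(3):
    beta = norms[n] / norms[n - 1] if n >= 1 else 0.0  # beta_n = I_n / I_{n-1}   (2.20)
    prev = rhos[n - 1] if n >= 1 else sp.Integer(0)
    rhos.append(sp.simplify(apply_operator(rhos[n]) + beta * prev))   # (2.18)
    norms.append(self_energy(rhos[-1], rhos[-1]))
    print(f"n={n}: beta={beta:.4f}, I_{n+1}={norms[-1]:.4e}")
```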
Functional calculus of D and the Fourier-Mellin transform
Consider the eigenfunctions of D, which we denote Ψ_s; these satisfy the eigenvalue equation DΨ_s = sΨ_s and have a power-law radial form. We combine this with a spherical harmonic to define the D-eigenbasis Ψ_slm. Now let F(r) be a general mass density, and F_lm(r) its spherical multipole moments. Then the expansion coefficient of F in the D-eigenbasis is ⟨Ψ_slm, F⟩ (see App. C for a proof), where M_r is the Mellin transform of the radial part. We will refer (with some precedent) to the combination (3.4) of taking multipole moments and a Mellin transform as the three-dimensional Fourier-Mellin transform. We can re-express F in terms of its Fourier-Mellin expansion coefficients using the Mellin inversion theorem (via an appropriate change of variable), where the inverse Mellin transform M⁻¹ is a contour integral along the vertical line Re(z) = c for some constant c, the choice of which does not affect any of our results. The mutual gravitational energy of two general mass densities F and G can therefore be expressed directly in terms of their Fourier-Mellin coefficients. Because D is self-adjoint the spectral theorem applies, and we can consider arbitrary bounded complex-valued functions of D.
The Fourier-Mellin transform can be viewed as the (unitary) map to the space in which D acts as a multiplication operator. In practice though, we can limit ourselves to considering polynomials in D.
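For reference, the standard Mellin transform pair presumably underlying M_r (3.5) is given below; the text evaluates the radial transform along the line z = 5/2 + is (cf. eq. 5.4). Normalisation conventions may differ slightly from the paper's.

```latex
% Standard Mellin pair (assumed convention).
\mathcal{M}[f](z) = \int_0^\infty f(r)\, r^{z-1}\, \mathrm{d}r ,
\qquad
f(r) = \mathcal{M}^{-1}\!\big[\tilde f\,\big](r)
     = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \tilde f(z)\, r^{-z}\, \mathrm{d}z .
```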
The formalism developed above also applies mutatis mutandis to the thin disc case. The derivation is now mostly the same as that found in Kalnajs (1971, 1976), but we update his notation. Our self-adjoint operator A has eigenfunctions Σ_s satisfying the corresponding eigenvalue equation; the functions Σ_sm(R) are Kalnajs' logarithmic spirals. For a general razor-thin mass density σ(R) we have the thin disc version of the Fourier-Mellin transform (see App. C.2 for a proof), where σ_m(R) are the cylindrical multipoles of σ(R), and the associated radial transform is given in (3.12).
Tridiagonality and polynomials
Associated to each of our basis sets is a polynomial we refer to as the index-raising polynomial -depending on the normalisation we write either p nl (s) or P nl (s) (for a polynomial of degree n in the variable s). The general result proved below is that applying the nth-degree polynomial with argument D to the zeroth-order density element gives the nth-order density element. It may help with interpretation to note that these polynomials in a sense 'live' in the Fourier-Mellin space introduced in the previous section.
There are several related statements that one can make about a given basis set and its associated index-raising polynomial:
1. The tridiagonality of the density basis functions {ρ_nlm} with respect to the operator D;
2. The expressibility of each basis function in terms of a polynomial in D applied to the lowest-order basis function, with these polynomials obeying a three-term recurrence relation;
3. The orthogonality of p_nl(s) with respect to a weight function ω_l(s) given in terms of the Mellin transform of ρ_0l;
4. The orthogonality of the basis functions {ρ_nlm} with respect to the self-energy inner product ⟨·, ·⟩.
Below we show that the first and second statements are equivalent. We also find that the third statement implies the fourth, and the second and fourth together imply the third. However, while it is easy to show that the third statement implies the second, the converse is much harder. Favard's theorem guarantees that a set of polynomials obeying a three-term recurrence relation is orthogonal with respect to some measure, however this is a difficult computation and is not what we actually want. In practice we want the freedom to specify zeroth-order basis elements, not the recurrence coefficients themselves. Therefore, to construct an arbitrary basis set we impose the first and fourth statements. Then the second and third statements (which provide the polynomials P nl (s) or p nl (s)) are a useful representation of the underlying basis set, which we exploit in order to solve numerical issues in the implementation described in Sec. 5.
The idea of finding orthogonal polynomials from tridiagonal matrices or operators is not new; in the finite-dimensional case the corresponding matrix is called a Jacobi matrix, and gives rise to polynomials of discrete argument. Our work invokes the infinite-dimensional case, in which a Jacobi operator (here D or A) operates on an infinite sequence of functions, which we generally assume to be a complete orthogonal set that spans the relevant function space. Such infinite-dimensional Jacobi operators are studied in Granovskii & Zhedanov (1986), Ismail & Koelink (2011) and Dombrowski (1985), and our set-up mimics the development given in the first paper, with the difference that our D and A are taken as given and do not arise from any Lie algebraic considerations. (The operators D and A do in fact arise as the generators of symmetries of the self-energy inner product; see the discussion in Sec. 6.)
As in the previous section, we give the main derivations in the case of spherical polar coordinates; the thin disc case then follows with little modification.
Polynomials from tridiagonality
We show that any set of densities {ρ_nlm} that is tridiagonal with respect to D gives rise to an expression for each ρ_nlm in terms of an index-raising polynomial in D, of the form
ρ_nlm = P_nl(D) ρ_0lm.   (3.13)
By tridiagonality we mean that the following expression holds,
D ρ_nlm = a_nl ρ_{n−1,lm} + b_nl ρ_nlm + c_nl ρ_{n+1,lm},   (3.14)
for some constants a_nl, b_nl and c_nl. First, define χ_nlm as in (3.15). From (3.14) there exists an expansion of χ_nlm of the form (3.16), with coefficients B_{njl}. Then by inverting B_{njl} (interpreted as a matrix with respect to the n, j indices) it is evidently possible to write an expansion for ρ_nlm of the form (3.17), with coefficients A_{njl}. Now make the definition (3.18).
To prove that P_nl(s) is a polynomial, take the Fourier-Mellin expansion of (3.17); the second equality in the resulting expression uses the self-adjointness property (2.6) as well as the definition of the eigenbasis (3.3). Dividing through by ⟨Ψ_slm, ρ_0lm⟩ then gives P_nl(s) as a polynomial in s with (as yet undetermined) coefficients A_{njl}. But from the definition (3.17) we see that P_nl(D) is just the operator expression for ρ_nlm that raises the radial index from 0 to n, which is (3.13). To find the three-term recurrence relation for P_nl(s), take the Fourier-Mellin expansion of (3.14), divide through by ⟨Ψ_slm, ρ_0lm⟩, and rearrange, giving
c_nl P_{n+1,l}(s) = (s − b_nl) P_nl(s) − a_nl P_{n−1,l}(s).   (3.20)
For the converse statement, substituting D for s in the above recurrence and left-applying to ρ_0lm trivially recovers the tridiagonality property (we must also take the initial conditions P_0l = 1 and P_{−1,l} = 0).
Orthogonal polynomials
From Favard's theorem we know that the P_nl(D) are a system of orthogonal polynomials, as they satisfy a three-term recurrence relation (3.20). However, in order to actually construct the orthogonalising weight function, we first assume that the underlying basis functions are already orthogonal. It then follows that the P_nl(s) are orthogonal with respect to a (positive, real-valued) weight function ω_l(s) (3.21), which is related to the zeroth-order density basis function ρ_0l by (3.22); the proof is in App. D. The orthogonality relation (3.21) works in both directions: if we instead assume that the P_nl(s) are orthogonal with respect to (a given) ω_l(s), then the orthogonality of the ρ_nlm follows. In fact, P_nl(s) can be written in terms of purely real polynomials p_nl(s), which are also orthogonal with respect to ω_l(s). It is often more convenient in applications to deal with these real-valued polynomials. Without loss of generality (up to normalisation) we can take the polynomials p_nl(s) to be monic, obeying a three-term recurrence relation
p_{n+1,l}(s) = s p_nl(s) − β_nl p_{n−1,l}(s).   (3.25)
In this way we only have to consider the single sequence of recurrence coefficients β_nl. According to this normalisation the P_nl(s) therefore obey the recurrence
P_{n+1,l}(s) = −is P_nl(s) + β_nl P_{n−1,l}(s).
(3.26) Replacing s with D and applying to ρ_0l on the right then leads to the defining recurrence for the density basis elements (2.18). Alternatively we can express the density and potential directly in terms of p_nl(s).
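Operationally, once the recurrence coefficients β_nl are known, the monic polynomials p_nl(s) of (3.25) can be evaluated by a simple loop; the sketch below (plain Python, with made-up β values purely for illustration) is the kind of evaluation used later when the integrals are performed in Fourier-Mellin space.

```python
def eval_monic(n, s, betas):
    """Evaluate the monic polynomial p_n(s) from the three-term recurrence (3.25):
    p_{k+1}(s) = s * p_k(s) - betas[k] * p_{k-1}(s),  with p_0 = 1 and p_{-1} = 0."""
    p_prev, p_curr = 0.0, 1.0          # p_{-1}, p_0
    for k in range(n):
        p_prev, p_curr = p_curr, s * p_curr - betas[k] * p_prev
    return p_curr

# Illustrative (made-up) recurrence coefficients; real values come from (2.20).
betas = [0.0, 0.25, 0.50, 0.75, 1.00]
print([round(eval_monic(k, 1.5, betas), 4) for k in range(5)])
```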
Disc case
As expected, similar results apply in the case of thin discs. Take {σ_nm} to be a set of infinitesimally thin surface densities that are tridiagonal with respect to the operator A and orthogonal with respect to ⟨·, ·⟩. We have index-raising polynomials P_nm(s), defined by (3.28). This gives rise to a representation of the basis functions via repeated application of the operator A. The orthogonality relation can be written in terms of a weight function Ω_m(s); the details of this derivation are in App. D.2. As before we can instead use real-valued polynomials p_nm(s), also orthogonal with respect to Ω_m(s), and the potential and surface density can be expressed directly in terms of p_nm(s). In general it is difficult to find the z-dependence of the potential for thin discs analytically, although in exceptional cases there may be a simple solution (e.g. the Kuzmin-Toomre discs). In any case, because p_nm(A) acts by differentiation with respect to R alone, this guarantees that if the z-dependence of the zeroth-order potential is known, then the correct z-dependence is preserved in all higher-order potential basis elements. This will have important implications when considering the extension of our results to the Robijn & Earn (1996) method for thickened-disc basis sets; however, we do not pursue this in the present work.
Completeness
We make some informal comments about the completeness of a general basis set {ρ nlm }, derived from a zeroth-order ρ 0l (r) as described above. The completeness of the angular part of each basis (the spherical harmonics) is taken as given.
The question then of whether a set {ρ_0l, Dρ_0l, D²ρ_0l, . . .} forms a complete basis for (the lth multipole of) the space of mass densities is the same as asking whether ρ_0l is a cyclic vector for the operator D. This is related to the completeness of the associated orthogonal polynomials p_nl(s), as powers of D correspond to powers of s; so we require that the monomials s^n (weighted by ω_l(s)) form a complete basis for functions on the interval (−∞, ∞). This is achieved if ω_l(s) is nonzero everywhere. By the definition of ω_l(s), this then requires the Mellin transform of ρ_0l to be nonzero everywhere, which in turn requires that D^n ρ_0l be non-vanishing everywhere for all n (Marín & Seubert 2006). Therefore, to be a valid zeroth-order density, ρ_0l must be such that D^n ρ_0l is smooth, has finite self-energy, and is non-vanishing. These conditions are required to hold for all n ∈ N; restricting to n = 0 gives the conditions (2.8) on representable mass densities. While these conditions are fairly restrictive, in general any reasonable 'analytical' potential-density pairs will satisfy them; in particular those described in the following section whose corresponding basis sets or index-raising polynomials have closed-form expressions.
Application to known basis sets
In Sec. 3 we developed a theoretical justification for the simple algorithm described in Sec. 2. We now provide further motivation by applying the formalism to some concrete examples of basis sets from the literature. Remarkably, all known analytical spherical (resp. thin disc) basis sets of infinite extent have a representation in terms of D (resp. A). In fact, it is extremely theoretically suggestive that these previously-described analytical basis sets have index-raising polynomials that can be written in terms of known classical orthogonal polynomials. The expressions we derive below for the various basis sets' index-raising polynomials may appear complicated; however the presence of a classical polynomial indicates simply that in each case the recurrence coefficient β nl (2.18) can be written as a rational combination of the given basis set's fixed shape parameters.
Clutton-Brock's Plummer basis set
The simplest possible useful basis set in spherical polar coordinates is that of Clutton-Brock (1973), which uses the Plummer (1911) model as its zeroth-order. By making an appropriate variable substitution, Clutton-Brock transformed the Poisson equation for the radial components (2.11) into the defining second-order differential equation for the Gegenbauer polynomials (DLMF, §18.8). Each radial density and potential component is proportional to just one polynomial, and the normalisation constant has a simple closed form. This basis set is in fact a special case of the family described in Sec. 4.1.2, but as it is the simplest (and earliest) of all the spherical basis sets we present it in some depth as a didactic example.
Looking at the definition of a continuous Hahn polynomial (E.1), we find a hypergeometric function that terminates after n terms, but where the argument s appears as a 'parameter'. Given how this relates to the definition of the density elements, this means that the operator D alarmingly also appears as a 'parameter': each term in the sum is proportional to a Pochhammer symbol whose argument involves D. However these are unproblematic to evaluate, as they expand into products of first-order operators, and each occurrence of r∂_r then operates to the right on ρ^CB73_0l(r) in the expected fashion. The index-raising polynomials (of argument D or A) in the remainder of this section are evaluated in a similar way.
The double-power law basis sets
Practically all known double-power law basis sets in spherical polar coordinates are contained within one super-family described in Lilley et al. (2018a) (containing within it the basis sets of Clutton-Brock 1973; Hernquist & Ostriker 1992; Zhao 1996; Rahmati & Jalali 2009; Lilley et al. 2018b). There are two free parameters (α and ν) controlling both the asymptotic power-law slope and turnover. We refer to the expressions given in Lilley et al. (2018a) for the potential, density and normalisation constants (Eqs (30)-(33) of that work), and label them with the superscript LSE. The zeroth-order has ρ^LSE_0l ∼ r^{−2+1/α+l} as r → 0, and ρ^LSE_0l ∼ r^{−3−ν/α−l} as r → ∞. Inserting ρ^LSE_0l into the definition of the weight function (3.22) and writing µ = α(1+2l), we find a weight that is again proportional to a continuous Hahn weight function (App. E.1), so the index-raising polynomials can be written explicitly in terms of continuous Hahn polynomials.
The cuspy-exponential basis sets
These basis sets were not mentioned in Lilley et al. (2018a) and are therefore newly presented in the literature 11 ; but they are a straightforward derivation from the double power-law result, obtained by letting the parameter ν and the scalelength simultaneously tend to infinity. The result is a family of basis sets with both an exponential fall-off and a central cusp in density, both controlled by the parameter α -hence the nickname cuspy exponential. The lowest order density function is ρ 00 ∝ r −2+1/α e −r 1/α . Important cases are α = 1/2 which gives a Gaussian, and α = 1 which is a density familiar to chemists as the Slater-type orbital. We use the superscript CE for these basis functions. The density and potential are (with µ = α(1 + 2l)) ρ CE nl (r) = 2 (−1) n r l−2+1/α e −r 1/α L (µ) n 2r 1/α + L (µ) n−1 2r 1/α , where γ(µ, z) is the (lower) incomplete Gamma function and L (µ) n (z) is a Laguerre polynomial. The relevant constants are We can apply the limiting procedure directly to P LSE nl (s), and the calculation is simpler than for the basis functions themselves. The operator D does not depend on the scalelength, and hence is unaffected by the limiting procedure. So we need only consider the limit in ν. The result is proportional to a Meixner-Pollaczek polynomial P (µ/2) n (z; φ) (App. E.2), The Kuzmin-Toomre model (Kuzmin 1956;Toomre 1963) is the simplest power law for the infinitesimally thin disc. This model provides the zeroth-order for a basis set introduced by Clutton- Brock (1972). This basis set turns out to be a special case of Qian's family (Sec. 4.2.2), but here at least we can write down simple expressions in terms of a single Gegenbauer polynomial C (α) n (x) (Aoki & Iye 1978), so it is worth recording the results separately. The density and potentials in the plane are and the normalisation constant in the orthogonality relation is (4.13) The corresponding Fourier-Mellin weight function (3.31) is then Ω CB72 m (s) = Γ(1/2 + m + is) 2 2 2m+5 π 2 Γ(m + 1/2) 2 , (4.14) which is proportional to the weight function for a Meixner-Pollaczek polynomial (App. E.2) with parameters λ = m + 1/2 and φ = π/2. So the index-raising polynomials have a simple expression in terms of the Meixner-Pollaczek polynomials, P CB72 nm (s) = i n P (m+1/2) n (s; π/2). (4.15)
Qian's k-basis sets
The family of basis sets introduced by Qian (1993) is a generalisation of Clutton-Brock (1972), allowing for an arbitrary generalised Kuzmin-Toomre model to be the zeroth order. That is, the zeroth-order density functions are (using the superscript Q) Here B(x, y) = Γ(x)Γ(y)/Γ(x + y) is the standard Beta function, and the prefactors have been chosen so that all derived expressions are compatible with those in Qian (1993). The higher-order potential and density functions that Qian provides are given in terms of very complicated recursion relations, that are only valid when k is an integer. However there is no such limitation in our representation. The weight function is proportional to that for a continuous Hahn polynomial p n (s; a, b) (App. E.1), and so the index-raising polynomial is (4.17) We therefore have closed-form expressions for σ Q nm and ψ Q nm , that are valid for all real values of k, as long as the zeroth-order model has finite total self-energy. The original Clutton-Brock (1972) basis set (Sec. 4.2.1) is recovered when k = 0. The normalisation constant for the orthogonality relation can be derived from that of the continuous Hahn polynomials, and is = π 2 Γ m + n + 1 2 Γ 2k + m + n + 3 2 2n!(2k + 2m + 2n + 1)Γ(2k + 2m + n + 1) .
Qian's Gaussian basis set
A Gaussian density profile is another plausible model for the density of a galactic disc, and such a basis set was also studied by Qian (1993). Just as we derived the cuspy-exponential basis sets of Sec. 4.1.3 from the double-power law result by taking the infinite limit of the shape parameter ν, it turns out that Qian's basis set for the Gaussian disc can be derived by taking the limit k → ∞ in the corresponding expressions (4.16) for the generalised Kuzmin-Toomre basis set of Sec. 4.2.2. The zeroth-order density and potential are (using the superscript G) The function denoted 1 F 1 is a confluent hypergeometric (Kummer) function, that reduces to combinations of modified Bessel functions for any given m. At zeroth-order we have the wellknown result that the potential of a plain Gaussian disc involves a single modified Bessel function, ψ G 00 (R) = π 2 I 0 R 2 /2 e −R 2 /2 . Again Qian gives the higher-order potential and densities only as complicated recursion relations. However, explicit expressions follow upon taking the limit k → ∞ in (4.17). We find that P G nm (s) = lim where P (m/2+1/4) n (s/2; π/2) is a Meixner-Pollaczek polynomial (Sec. E.2). Then (4.19) and (4.20) can be combined to find which works because the factor of 1/ √ k cancels out in A; there is a similar expression for the potential functions.
Exponential disc
Interestingly, there is another thin disc model which has classical index-raising polynomials: we briefly sketch the derivation for an exponential disc. We require all density components to fall off exponentially like e −R but also to behave like an interior multipole as R → 0, so as a zeroth-order ansatz for the density we take simply σ exp 0m (R) = R m e −R .
(4.22) This gives a weight function (via (3.31)) proportional to that for a continuous Hahn polynomial (unfortunately, generalising the exponent to e^{−R^{1/α}} gives no similarly simple result). Thus the index-raising polynomials can be written down explicitly as
P^exp_nm(s) = i^{−n} p_n(s/2; m/2 + 1/4, m/2 + 5/4),   (4.23)
along with closed-form expressions for the recurrence coefficient and normalisation constant. The remaining complication is the zeroth-order potential. The m = 0 case is awkward but classical (Binney & Tremaine 1987, Ch. 2) and uses modified Bessel functions. Deriving expressions when m > 0 is trickier (we give the details in App. F), but it can be accomplished with a suitable differential-recurrence relation. Some examples of the potential basis elements ψ^exp_nm(R) are plotted in Fig. 2.
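For orientation, the classical m = 0 result for a razor-thin exponential disc with Σ(R) = e^{−R} (G = 1) can be evaluated directly with scipy's modified Bessel functions, as in the sketch below; the sign and normalisation conventions of the paper's ψ^exp_00 may differ from this stand-in.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def exp_disc_potential_m0(R, Sigma0=1.0, Rd=1.0, G=1.0):
    """In-plane potential of a razor-thin exponential disc Sigma(R) = Sigma0 * exp(-R/Rd)
    (classical result, e.g. Binney & Tremaine; here the potential is negative)."""
    R = np.asarray(R, dtype=float)
    y = R / (2.0 * Rd)
    return -np.pi * G * Sigma0 * R * (i0(y) * k1(y) - i1(y) * k0(y))

R = np.linspace(0.1, 10.0, 5)
print(exp_disc_potential_m0(R))   # tends to -2*pi*G*Sigma0*Rd as R -> 0
```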
Numerical implementation
At the end of Sec. 2 we mentioned the main obstacles to the effective implementation of the new algorithm -primarily the numerical stability when computing the coefficients β nl , but also the need to compute repeated radial derivatives of the zerothorder elements.
For the recurrence coefficients β_nl the difficulty is that naively computing the integrals (2.20) becomes computationally expensive very quickly with increasing order n (and to some extent also with l). Therefore it is essential to pick a numerical integration method that is fast without sacrificing accuracy. Unfortunately, due to the total freedom in the choice of zeroth-order ρ_0l, it is difficult to find a quadrature scheme for the integrals (2.20) that is optimal in general.
Fortunately, due to the link to the polynomials p nl (s) developed in Sec. 3, we can take advantage of the extensive literature on the construction of general orthogonal polynomials. Following Gautschi (1985) we have two options: either the discretized Stieltjes procedure or the modified Chebyshev algorithm. As it happens, computing the recurrence coefficients naively as in (2.20) is directly analogous to using the discretized Stieltjes procedure, except that now we perform the integrals in Fourier-Mellin space. This turns out to be the better option numerically, as the modified Chebyshev algorithm runs into floating point issues sooner due to catastrophic cancellation of terms. However, for completeness we describe both algorithms (Sections 5.1 and 5.2). We also discuss computer-assisted techniques for performing the repeated differentiations (Sec. 5.3).
All these methods are illustrated throughout for a basis set constructed to have the isochrone model (Hénon 1959) as its zeroth-order, and we follow up the numerical discussion with a demonstration of the validity of the isochrone-adapted basis set (Sec. 5.4); however, the underlying methods we describe are applicable to any suitable zeroth-order model. The potential, density and polynomial weight function for the isochrone model are given in (5.1). The precise l-dependence of these expressions is of course arbitrary to some extent, but we have made a suitable 'natural' choice.
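The paper's specific zeroth-order functions and weight function (5.1) are not reproduced here, but for orientation the standard isochrone model (Hénon 1959) on which they are built is, in the usual convention with G = M = 1 and scale length b:

```latex
% Standard isochrone pair (Henon 1959; see e.g. Binney & Tremaine), with a = sqrt(b^2 + r^2):
\Phi(r) = -\frac{1}{b + a}, \qquad
\rho(r) = \frac{1}{4\pi}\,\frac{3(b+a)\,a^{2} - r^{2}(b+3a)}{(b+a)^{3}\,a^{3}},
\qquad a \equiv \sqrt{b^{2}+r^{2}} .
```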
Discretized Stieltjes procedure
The sequence of recurrence coefficients β_nl that we need to compute can be expressed as the ratio of two integrals,
β_nl = I_nl / I_{n−1,l},   where   I_nl = ⟨ρ_nl, ρ_nl⟩ = ‖ρ_nl‖²,   (5.2)
and so for each higher n we need one additional evaluation of I_nl.
Evaluations of β_nl alternate with applications of the recurrence relation (3.25) to find the next basis element ρ_nl. Once sufficient β_nl have been found, the potential or density functions ρ_nlm and Φ_nlm can be evaluated via their own recurrences as described in Sec. 2. The difficulty then is in finding an appropriate strategy to compute the integrals I_nl. We opt to evaluate them in Fourier-Mellin space, using the polynomials p_nl(s) directly, and making use of the fact that the integral can be written
I_nl = ∫_{−∞}^{∞} ω_l(s) p_nl(s)² ds.   (5.3)
Therefore the first step is to determine the weight function ω_l(s). This can be found in terms of the (Fourier-)Mellin transform of either the zeroth-order potential or the density,
ω_l(s) = 2 K²_0l [(l + 1/2)² + s²] |M_r[ρ_0l(r)](5/2 + is)|².   (5.4)
The Mellin transform is perhaps one of the less familiar integral transforms, but in practice a wide variety of Mellin transforms can be found in closed form (helped especially by computer algebra systems), in part because with a logarithmic change of variable it can be written as a Fourier transform. All the polynomial weight functions considered in this paper can be found symbolically using MATHEMATICA (some MATHEMATICA code demonstrating this is included in the repository at https://github.com/ejlilley/basis). Numerical evaluation of the Mellin transform is also an option, by transformation to the Fourier transform and approximation using Fast Fourier Transform methods; however, we do not pursue this further in the present work. Now we consider the asymptotic behaviour of the weight function ω_l(s) as s → ±∞. The smoothness requirement (2.8) on ρ_0l forces ω_l(s) to decay faster than any power of s, i.e. at least exponentially. We expect that
ω_l(s) ∼ |s|^b e^{−a|s|} as s → ±∞,   (5.5)
so we need to determine the decay constant a. In the case of our isochrone basis set this asymptotic behaviour is derived from the behaviour of the complex gamma function at infinity (DLMF, §5.11.9), giving ω_l(s) ∼ |s|^{−1} e^{−π|s|}, or a = π. When ω_l(s) can be written down, it is usually simple to read off the decay constant a; for example, the double power-law basis sets (Sec. 4.1.2) have a = α.
The at-least-exponential decay of the weight function suggests that the appropriate discretisation scheme for (5.3) is Gauss-Laguerre quadrature. To implement this for the isochrone case, rewrite (5.3) to pull out a factor of e^{−πs}, and use the symmetry of the integrand to change the domain of integration to (0, ∞) (defining x = πs),
I_nl = (2/π) ∫_0^∞ e^{−x} [e^{x} ω_l(x/π) p_nl(x/π)²] dx,   (5.6)
where the bracketed factor grows only like x^{2n−1} as x → ∞. We can then implement Gauss-Laguerre quadrature of order ν, as a weighted sum over evaluation points x_jν, which are the roots of the νth Laguerre polynomial L_ν(x):
I_nl ≈ (2/π) Σ_j w_jν e^{x_jν} ω_l(x_jν/π) p_nl(x_jν/π)².   (5.7)
The quadrature rule of order ν integrates polynomials exactly up to order 2ν − 1, so to compute I_nl with the isochrone weight function we would expect to need at least ν ≥ n. (An acceptable rule of thumb is that I_nl requires ν = max(n + l, 10).) It may be necessary to compute the weights w_jν and roots x_jν to a higher order of precision internally using arbitrary-precision arithmetic, but this is not a bottleneck in practice, and typically Gauss-Laguerre quadrature is implemented as a library function whose implementation details are hidden. In this way we can get e.g. 50 orders of β_nl to floating-point precision in under a tenth of a second using one core of a modern CPU.
The radial parts of some examples of potential elements in the isochrone basis set are plotted in Fig. 1.
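The discretized Stieltjes procedure above can be sketched in a few lines of Python: recurrence coefficients β_n are obtained as ratios of the integrals I_n, which are evaluated with Gauss-Laguerre quadrature after pulling out the exponential decay e^{−a|s|} of the weight function. The weight used below, ω(s) = e^{−π|s|}/(1 + s²), is a made-up stand-in with the same qualitative decay (a = π) as the isochrone weight, not the paper's actual ω_l(s); angular indices are suppressed.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def omega(s):
    """Stand-in weight with exponential decay a = pi (NOT the paper's isochrone weight)."""
    return np.exp(-np.pi * np.abs(s)) / (1.0 + s**2)

def stieltjes_betas(n_max, order=80):
    """Discretized Stieltjes in Fourier-Mellin space (cf. Sec. 5.1):
    I_n  = (2/pi) * sum_j w_j * e^{x_j} * omega(x_j/pi) * p_n(x_j/pi)^2,
    beta_n = I_n / I_{n-1}, with the monic recurrence (3.25)
    p_{n+1}(s) = s*p_n(s) - beta_n*p_{n-1}(s)."""
    x, w = laggauss(order)                       # Gauss-Laguerre nodes and weights on (0, inf)
    s = x / np.pi
    factor = (2.0 / np.pi) * w * np.exp(x) * omega(s)
    p_prev, p_curr = np.zeros_like(s), np.ones_like(s)   # p_{-1}, p_0
    I_prev = np.sum(factor)                               # I_0
    betas = []
    for _ in range(n_max):
        beta_last = betas[-1] if betas else 0.0           # beta_0 is never actually used
        p_prev, p_curr = p_curr, s * p_curr - beta_last * p_prev
        I_curr = np.sum(factor * p_curr**2)
        betas.append(I_curr / I_prev)
        I_prev = I_curr
    return betas

print(stieltjes_betas(6))
```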
Modified Chebyshev algorithm
This is an alternative method described in Gautschi (1985), which we find to be less numerically stable in practice. However we describe it here for completeness, as it may yet find some usefulness (e.g. to facilitate finding exact expressions for the recurrence coefficients in certain cases).
The modified moments μ̃_kl are the integrals of the chosen auxiliary polynomials against the weight function ω_l(s); by symmetry this means that the μ̃_kl are nonzero only for even k. In principle the choice of auxiliary polynomial is wide open, but the obvious choice in our case (for ease and stability of computation) is the monic Hermite polynomials He_k(s), for which β̃_kl = k. We can then proceed to find the mixed moments via a system of recurrence relations that produces the desired recurrence coefficients β_nl as a byproduct:
σ_{0kl} = μ̃_kl,   (5.11)
σ_{jkl} = σ_{j−1,k+1,l} − β_{j−1,l} σ_{j−2,k,l} + β̃_kl σ_{j−1,k−1,l},
β_nl = σ_{nnl} / σ_{n−1,n−1,l}.
In practice (for our isochrone basis set) we find that σ_jkl suffers from catastrophic cancellation beyond approximately j = 20. Alternatively, if the modified moments μ̃_kl are known in closed form then this method is convenient for finding 'exact' recurrence coefficients. This turns out to be the case for the isochrone basis set, for which see App. G.
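The modified Chebyshev recursion just described can be written out as below (a Python sketch, with the modified moments supplied as an input array rather than the isochrone values of App. G). As a sanity check, feeding in the modified moments of the Gaussian weight, for which μ̃_0 = 1 and all higher μ̃_k vanish, returns β_j = j, the Hermite coefficients themselves.

```python
import numpy as np

def modified_chebyshev(mu, n_max):
    """Modified Chebyshev algorithm for a symmetric weight (cf. eqs 5.11 ff.),
    with monic Hermite auxiliary polynomials (beta_tilde_k = k).
    `mu[k]` are the modified moments int He_k(s) * omega(s) ds, k = 0..2*n_max+1.
    Returns [beta_1, ..., beta_n_max]."""
    K = len(mu)
    sigma = np.zeros((n_max + 1, K + 1))       # sigma[j, k], mixed moments
    sigma[0, :K] = mu                          # sigma_{0,k} = mu_k
    betas = []
    for j in range(1, n_max + 1):
        beta_prev = betas[-1] if betas else 0.0             # beta_{j-1}; irrelevant for j = 1
        for k in range(j, K - j):
            prev2 = sigma[j - 2, k] if j >= 2 else 0.0       # sigma_{j-2,k}, zero row for j = 1
            sigma[j, k] = sigma[j - 1, k + 1] - beta_prev * prev2 + k * sigma[j - 1, k - 1]
        betas.append(sigma[j, j] / sigma[j - 1, j - 1])
    return betas

# Gaussian-weight check: should print [1.0, 2.0, 3.0, 4.0].
print(modified_chebyshev([1.0] + [0.0] * 9, 4))
```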
Repeated differentiation
There are three classes of algorithm for computer-assisted differentiation: 1. finite-differencing, 2. symbolic differentiation, and 3. automatic differentiation. The first of these we can discount pretty much immediately as being wildly numerically unstable and expensive compared to the other two. The second, symbolic differentiation via computer algebra, is potentially competitive at low expansion orders, but it is hard to predict the degree of blow-up in the number of algebraic terms. It depends strongly on the precise form of the function that is being differentiated. In practice we find that efficient application of symbolic differentiation at high expansion orders requires alternating between differentiation and algebraic simplification.
Of course, one may also attempt symbolic differentiation by hand, attempting to find simplifications that reduce the tower of applications of D n to a simpler form -whether this is possible also depends on the form of Φ 0l and ρ 0l . Many of the basis sets considered in Sec. 4 have simple closed-forms (at all orders) due to fortunate simplification in repeated differentiation. For example, taking the double-power law basis (Sec. 4.1.2) with parameters α = 1/2, ν = p − 3/2 and n = l = 0 (and labelling each density function with p), we have ρ p+1 = p −1 (p − 5/4 − iD/2)ρ p . (5.12) Using this identity in (4.8) then leads (after some further simplification) to a known closed-form expression for ρ LSE nl . However it is likely to be difficult to find easy differentiation formulas in general. For our isochrone basis set, the method we give in App. G for computing the modified moments can be adapted to find expressions for the higher-order derivatives, but the result is complicated and of dubious numerical stability.
The third method, automatic differentiation (AD) is what we find to be most competitive in practice. This is a general term referring to a class of algorithms implemented entirely at the software library level, that provides an evaluation of the derivative at a single point given only knowledge of the chain rule and the differentiation rules for primitive arithmetic operations and standard mathematical library functions. Essentially, the function to be differentiated is written in ordinary code, and the AD algorithm automatically deduces the correct sequence of chain rule steps to carry out. For our purposes we require higher-order derivatives; while applying an AD algorithm to itself works in principle (and often works in practice) it is very inefficient, as the AD logic itself must be differentiated. It is better to use an AD implementation that natively understands higher-order derivatives.
As we are coding in the JULIA programming language, we use a suitable library called TAYLORSERIES.JL (Benet & Sanders 2019). A special variable t(N) is instantiated that represents the first N terms of an (abstract) Taylor series. Given a point r 0 , we can use t + r 0 as the argument of any ordinary mathematical function 15 ; the result is the first N coefficients of the Taylor series around r 0 that approximates that function. For example, setting N = 3 and r 0 = 1.0 and using the potential of the isochrone model (5.1) as our function, the computer prints a data structure representing the following truncated Taylor series, Φ iso 00 (t(3) + 1.0) = −0.4142 + 0.1213t − 0.0052t 2 − 0.0225t 3 .
When it comes to the actual implementation, we have two choices, which we find to have similar efficiency in practice. The first option begins with computing the vector of derivatives (at a point r 0 ) up to some maximum order N all in one go, (5.13) In fact, because D can be expressed as a single differentiation with respect to a transformed variable (via r d/dr = d/ds, where s = log r), V l can be obtained directly from a single N-term Taylor series evaluation. Separately, we derive from β nl the matrix elements (A l ) n j = A n jl in the expansion (5.14) To evaluate a vector of potential functions at a single point, we perform the contraction Φ l = A l · V l . At each different point r 1 , we have to re-compute V but not A.
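The first option just described can be mimicked in a few lines of Python with mpmath's Taylor-series helper: since r d/dr = d/du with u = log r, the whole vector of values (r d/dr)^n Φ(r_0), n = 0..N, follows from a single Taylor expansion of Φ(e^u) about u_0 = log r_0. The potential below is the standard isochrone form with unit scale length, an assumption standing in for the paper's Φ^iso_00 of (5.1), and this is of course not the authors' Julia/TaylorSeries.jl implementation. At r_0 = 1 the first two entries, about −0.4142 and 0.1213, agree with the leading Taylor coefficients quoted above.

```python
import mpmath as mp

def isochrone_phi(r):
    """Standard isochrone potential with G = M = b = 1 (assumed stand-in for Phi_00)."""
    return -1 / (1 + mp.sqrt(1 + r**2))

def radial_derivative_vector(phi, r0, N):
    """Return V[n] = (r d/dr)^n phi evaluated at r0, for n = 0..N, via one Taylor expansion.
    Uses r d/dr = d/du with u = log r: the Taylor coefficients c_n of phi(exp(u)) about
    u0 = log(r0) give (d/du)^n phi = n! * c_n."""
    u0 = mp.log(r0)
    coeffs = mp.taylor(lambda u: phi(mp.exp(u)), u0, N)
    return [mp.factorial(n) * c for n, c in enumerate(coeffs)]

V = radial_derivative_vector(isochrone_phi, 1.0, 3)
print([mp.nstr(v, 6) for v in V])
```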
The second option is to use the recurrence relation directly (i.e. (2.18) or (2.19)). Because we know ahead of time that we want N iterations of the recurrence relation, we set up the Taylor series t(N), and use r 0 + t(N) as the dependent variable. The length of the series then shrinks as we go up the ladder of basis function evaluations. In practice this second method seems to be marginally slower than the first one, as more operations on the abstract Taylor series need to be performed.
Unstable modes of a spherical system
It is important to check whether a basis set constructed according to the prescriptions of Sec. 5 actually works in practice. One simple approach might be to just construct n basis functions, and integrate up the n × n square of inner products, testing whether orthogonality is achieved to a given floating-point precision. However, we know that it is possible to construct basis sets that are genuinely orthogonal but whose expansions of realistic mass densities fail to converge in practice, or display other undesirable numerical effects (for example, the 'defective' NFW basis set constructed in Lilley 2020, Ch. 2, which does not converge with the addition of higher-order angular terms; see also Saha 1991, who suggests that "glitches and generally anomalous behaviour" in the recovery of modes may be related to the form of the chosen basis functions, something that should be systematically investigated). Therefore we choose to demonstrate the validity of our approach by reproducing a physical result from the literature: the unstable radial mode of the isochrone model.
We use the discretized Stieltjes method described in Sec. 5.1, where the basis set is adapted to the isochrone model at zeroth order. However the specific adaptation is not the crucial part; for this particular application only the perturbing density needs to be accurately resolved by the basis elements, so the key feature required of the basis set is only that it has the correct asymptotic behaviour. To this end, we adapt the code and method of Fouvry & Prunet (2022) to show that the same unstable mode is recovered by our isochrone-adapted basis set. The part of the code that implements the basis set may be found at https://github.com/ejlilley/basis.
The details of the computation can be found in Fouvry & Prunet (2022). In brief, we start with knowledge of an isotropic distribution function that solves the collisionless Boltzmann equation for the isochrone potential. We also have the corresponding action and angle coordinates (J, w) as a function of position and momentum, which for the isochrone potential are known in closed form. Then, each potential basis element must be Fourier-transformed with respect to the angle coordinates (5.16), and the response matrix M assembled from these transforms, where the R_n(s) represent the collisionless Boltzmann operator for a perturbation with growth rate proportional to e^{st}. The unstable growing mode then corresponds to a solution A of the resulting matrix equation. (The azimuthal index m is set to zero as it does not affect the final result.) A plot of this mode is shown in Fig. 3. The maximum expansion orders were n_max = 6 and l_max = 2, with a scale length of r_s = 1 and a maximum resonance number of n_1^max = 10. All our other integration parameters are identical to those in Fouvry & Prunet (2022, App. C), where a matching result was obtained using the Clutton-Brock (1973) basis set with n_max = 100 and r_s = 20, the mode shape also agreeing with the original result of Saha (1991). As mentioned previously, it is not strictly necessary to exactly match the zeroth-order element of the basis set to the underlying equilibrium model. However the basis elements must have the correct asymptotic behaviour, so using the isochrone-adapted basis set guarantees that this condition is satisfied. Nevertheless, our results do hint that accurate mode recovery may be possible with many fewer basis elements when the basis is suitably adapted, although we hesitate to draw any firm conclusions until a more systematic comparison can be drawn.
Calculating the matrix M is very computationally expensive, as it requires multiple truncated infinite summations, over several indices (n, l and the vector of wavenumbers n). It also requires two nested integrations, as the Fourier transform (5.16) must also be performed numerically. In the general nonisochrone case a third level of integration is required, because the action and angle coordinates are no longer known in closedform. Any method of reducing this computational effort is therefore desirable. It is possible that judicious choice of basis elements and application of their differential-recursion relation (2.19) may ameliorate these calculations, but further investigation is needed.
Discussion and Conclusions
We have reformulated the study of bi-orthogonal basis sets using the language of Fourier-Mellin transforms. This unexpected development unifies many previous results into a coherent theoretical framework. The general idea of generating new potentialdensity pairs from old by differentiation is not entirely new. Traditionally this is accomplished by differentiating with respect to the model's scalelength -in particular, Aoki & Iye (1978) found compact expressions for Clutton-Brock (1972)'s thin disc basis by repeatedly applying the operator a∂ a (for a the scalelength) and orthogonalising the resulting sequence of potentialdensities by the Gram-Schmidt process. Subsequently de Zeeuw & Pfenniger (1988), in the course of deriving a series of ellipsoidal potential-density pairs, noted that the operators r∂ r and ∇ 2 obey an important commutation relation (which we re-derive in App. B). Therefore Aoki & Iye (1978)'s result (and by extension our algorithm presented here) can be expressed in terms of the coordinates alone, without reference to an arbitrary scalelength.
The formalism developed in sections 2-3 deserves some further interpretation. In particular, the operator D on which the whole development hinges may appear to have been plucked out of thin air, but it is in fact no accident: D is precisely the infinitesimal generator of the scaling symmetry of the self-energy inner product (2.1). To briefly motivate this, let S t be a 'radial scaling' operator, As is immediately evident from dimensional analysis, this preserves the self-energy, i.e.
⟨S_t f, S_t g⟩ = ⟨f, g⟩.
( 6.2) The operator D is now defined in terms of the infinitesimal generator of S t , Differentiating (6.2) with respect to the parameter t, it is immediately evident that D is self-adjoint 18 . In Sec. 3.1 we implicitly invoked Stone's theorem from functional analysis to provide a Fourier-like transform whose integral kernel is the eigenfunction of a self-adjoint operator. In our case the operator is D, the eigenfunction is Ψ s (3.2), and the resulting integral transform is exactly the radial part of the Fourier-Mellin transform that we defined in (3.4). The spherical harmonics arise from a similar argument applied to the generators of the coordinate rotations 19 . This line of reasoning suggests that it may be worthwhile to look for other symmetries of the self-energy inner product, perhaps arising from other coordinate systems or geometries in which the Laplacian separates. Given a set of three mutuallycommuting operators arising from three symmetries of the selfenergy, we would expect to be able to construct a basis set formalism similar to that of the present work. To sketch out what this looks like in full generality, let τ be a suitable self-adjoint operator according to the criteria just described (restricting to one spatial dimension for the sake of discussion). Then the selfadjointness condition (2.6) combined with the properties of the inner product (2.1) implies that where τ * is the Hermitian adjoint of τ with respect to the ordinary inner product on L 2 functions (A.3). Suppose further that we have found a set of orthogonal potential functions {Φ n }, with an index-raising polynomial p n (s) such that Φ n = p n (τ)Φ 0 . (6.5) Then the associated density functions (obeying ∇ 2 Φ n = ρ n ) are given by ρ n = p n (τ * )ρ 0 . (6.6) There are further simplifications involved in Sec. 3, which come about essentially because D = D * + const., which means that the eigenfunctions of D and D * are the same up to a constant shift in the eigenvalue. Generically we would expect a different relationship between τ and τ * . The task remaining, which we leave to future efforts, is therefore to classify the symmetries of the self-energy inner product, in order to develop expansions that are usefully adapted to different coordinate systems and geometries. In a sense, the 'holy grail' would be the construction of an expansion adapted to the confocal ellipsoidal coordinate system, appropriate for studying the equilibrium dynamics of ellipsoidal galaxies 20 .
Some symmetries are already known. For example, in Cartesian coordinates (x, y, z) we trivially have the three cardinal translations (x → x + a etc.). Writing down their associated infinitesimal generators X = i∂_x, Y = i∂_y and Z = i∂_z, their joint eigenfunction e^(ik·r) is just the kernel of the standard Fourier transform, with the wavevector k taking the role of the (continuous) eigenvalue. The Fourier transform would therefore play the same role in the resulting basis set formalism as the Fourier-Mellin transform did in ours (Sec. 3). Poisson solvers directly using the Fourier transform are ubiquitous in astrophysical applications, so it would be interesting to construct a set of 'Cartesian' basis functions and compare their performance with the current state-of-the-art.
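For context, the kind of Fourier-transform Poisson solver alluded to here reduces ∇²Φ = 4πGρ to division by −k² in Fourier space. The following is a minimal periodic-grid sketch of our own; the function name, grid size, and unit choices are illustrative and not taken from any particular code.

```python
import numpy as np

def poisson_fft(rho, box_size=1.0, G=1.0):
    """Solve ∇²Φ = 4πGρ with periodic boundary conditions via FFT."""
    n = rho.shape[0]
    k = 2.0*np.pi*np.fft.fftfreq(n, d=box_size/n)          # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    rho_hat = np.fft.fftn(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0                                        # leave the k = 0 mode at zero
    phi_hat[nonzero] = -4.0*np.pi*G*rho_hat[nonzero]/k2[nonzero]
    return np.real(np.fft.ifftn(phi_hat))

# Example: a zero-mean random density field on a 32³ grid
rho = np.random.randn(32, 32, 32)
rho -= rho.mean()            # periodic Poisson equation requires zero total mass
phi = poisson_fft(rho)
```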
Other symmetries are known from classical potential theory. Firstly, there is the Kelvin transform, an inversion in a sphere that preserves the self-energy up to a sign (Kalnajs 1976). However, it is not a continuous symmetry, so there is no associated infinitesimal self-adjoint operator. Secondly, there is a symmetry that takes spheres to concentric ellipsoids (sometimes called homeoids). This maps the spherical radius r to an 'ellipsoidal' radius m, where m² = x²/a² + y²/b² + z²/c². It has long been known that this transformation preserves the mutual self-energy of any two charge or mass densities (Carlson 1961), up to a constant factor that is essentially just an elliptical integral of the three semi-axes (a, b, c). We can use this to transform any purely spherical basis set²¹ into one stratified on concentric ellipsoids. Note however that the concentric ellipsoids in this transformation are distinct from the confocal ellipsoids inherent in the ellipsoidal coordinate system that is more dynamically relevant due to its relationship to the Stäckel potentials (de Zeeuw 1985; de Zeeuw et al. 1986).
Also, we mention some gaps in our analysis. While we purport in this work to provide a general theory of orthogonal basis sets, there are some aspects that are still not fully characterised.
Firstly, it is clear from Sec. 4 that there exists a connection between basis sets which have a classical index-raising polynomial P_n(s), and those whose potential and density elements are known in closed form (i.e. possessing a recurrence relation independent of D or A). However, the exact nature of this connection is unknown, although it is likely related to the fact that the Hahn-type polynomials appearing in the various index-raising polynomials obey second-order difference equations.²² Secondly, we do not touch on the issue of basis sets appropriate for finite-radius systems. This was approached by Kalnajs (1976) in the case of thin discs, using a formalism initially similar to our own. There are also contributions from Polyachenko & Shukhman (1981) for finite spheres, and Tremaine (1976) for finite elliptical discs. In general it appears to be straightforward to construct basis sets for finite systems out of polynomials or Bessel functions, but a concrete connection to our new formalism would be attractive. A more rigorous form of the argument about completeness in Sec. 3.3 would also be desirable, as would a quantitative comparison with basis sets computed via the Sturm-Liouville approach of Weinberg (1999).
Finally, some broader speculation. It is possible that the general ideas developed here may find applications beyond the solution of Poisson's equation. In physics we are often required to compute the inverse of Hermitian operators with a continuous spectrum, a well-known example being the Schrödinger operator for certain boundary conditions and choices of potential. These operators could conceivably be supplied with a set of (adapted) orthogonal basis functions, by identifying a suitable commuting set of self-adjoint operators and then diagonalising their cyclic vectors. Any such basis set then provides an infinite series representation of the Green's function of the underlying Hermitian operator,²³ in which the coordinates appear multiplicatively separated in each term. Such series representations may find use in various applications. The appearance of tridiagonal Jacobi operators in particular may presage links to similar numerical methods in quantum mechanics (Alhaidari et al. 2008; Ismail & Koelink 2011).
φ_slm(r) = −4π K_l(is) r^(−is−1/2) Y_lm(r̂), (C.1) where K_l(is) is defined in (3.5). The expansion of an arbitrary mass density F with respect to the Ψ_slm-basis is the Fourier-Mellin transform of F (C.2), where F_lm(r) = ∫ d²r̂ Y_lm(r̂) F(r) are the spherical multipole moments of F. Inverting this using the Mellin inversion theorem (3.8) (choosing the constant c = 5/2 in the integral), we obtain the inverse expansion (C.3). The potential corresponding to the density F can be expressed similarly by replacing Ψ_slm(r) in (C.3) by its potential φ_slm(r). Finally, the mutual energy of two densities F_1 and F_2 is given by (C.4), and the Fourier-Mellin basis functions satisfy the orthogonality relation ⟨Ψ_slm, Ψ_tλμ⟩ = 8π² K_l(is) δ_mμ δ_lλ δ(s − t). (C.5) We also assume we have found the (real) monic polynomials orthogonal with respect to the weight function ω_l(s), writing them as p_nl(s). This ensures that applying P_nl(D) to a real function (e.g. ρ_0l(r)) gives a real result. Note that we used p_nl(−x) = (−1)^n p_nl(x), which is true for any orthogonal polynomial whose weight function and domain of integration are both symmetric. | 2023-02-15T06:42:44.166Z | 2023-02-14T00:00:00.000 | {
"year": 2023,
"sha1": "661c46f60a2c8d5fbb7299738bf6bb3d025cf6d9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "661c46f60a2c8d5fbb7299738bf6bb3d025cf6d9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
117237701 | pes2o/s2orc | v3-fos-license | Research of the possibilities of application of the Data Warehouse in the construction area
Today, the direction of information technology associated with the use of the Data Warehouse (DW) is evolving very dynamically. Using a DW, it is possible to implement two types of data analysis: OLAP analysis, a set of technologies for the rapid processing of data presented as a multidimensional cube; and Data Mining, an intelligent, deep analysis of data to detect previously unknown, practically useful patterns (in our case, in the construction area). Of all the methods used in Data Mining technology, cluster analysis is especially useful for the construction area. At present, the role of the DW has increased significantly because many methods and approaches of Data Mining have formed the basis of the new, promising Big Data technology. We note that processing data from the Data Warehouse with the help of Big Data technology makes it possible to raise research in the construction area to a higher level. The purpose of this work is to research the possibilities of applying the Data Warehouse in the construction area. The article suggests a new approach to data analysis in the construction area, based on the use of Big Data technology and elements of OLAP analysis. The section "Discussion" considers the possibility of a new, promising business in the construction field, based on the application of the Data Warehouse and Big Data technology.
Introduction
The Data Warehouse (DW) is widely used in data processing. Recently, due to the advent of intelligent technologies, particularly Big Data, its importance has increased significantly. Indeed, Big Data technology has largely inherited the principles and methods of the earlier Data Mining intellectual technology, which in turn is based on the use of the Data Warehouse. It should be noted that of all the methods used in Data Mining and Big Data technologies, cluster analysis is especially useful for the construction area. We can formulate the obvious conclusion: Big Data technology has very good prospects for the construction industry [1][2][3][4]. Accordingly, the role of the Data Warehouse in the field of construction has increased. It is possible to create new forms of business based on the use of Data Warehouse and Big Data technologies. The purpose of this work is to research the possibilities of applying the Data Warehouse in the construction area.
Materials and Methods
First of all, recall the basic principles of the Data Warehouse [5]. Data is merged into categories and stored according to the areas it describes, not the applications that use it. The data is combined so that it satisfies the requirements of the enterprise as a whole, not only one function of the business. Data in the Data Warehouse is not created there: it comes from external sources and is not adjusted or deleted. The data in the warehouse is accurate and correct only when it is bound to a certain point in time.
For the construction industry, the last principle means that data on construction objects must be stored in the DW with time binding. An important point: as the DW is filled with data, the value of the total amount of information in the DW increases, and accordingly better results can be obtained when processing the information (first of all, processing with Big Data technology is meant).
Let us first consider the "traditional" use of the DW in business analytics in the subject area under consideration, construction. Two types of analysis are assumed. OLAP analysis is a set of technologies for the rapid processing of data presented as a multidimensional cube. Data Mining is an intelligent, in-depth analysis of the data for the detection of previously unknown, practically useful regularities and knowledge necessary for decision making (in our case, for the construction industry). First, about OLAP analysis. This term defines the category of applications and technologies that enable the collection, storage, prompt processing, and analysis of multidimensional data.
The information is presented in the form of multidimensional cubes, where the dimensions are the parameters of the object and the cells contain the aggregated data [6]. As an example, Figure 1 shows a multidimensional cube for the construction industry: the X axis is the type of building, the Y axis is the time interval Q1, Q2, Q3, Q4 (Q1 being January, February, March), and the Z axis is the name of the construction company. The cells contain specific indicators (for example, an aggregated sum of investments in tens of millions of rubles).
For a multidimensional cube, slices can be produced along different axes in order to bring the data into tables, analyze them, and prepare reports on that basis.
In Figure 1, as an example, the slice is produced along the Z axis for Construction Company 1. Once again, we note the main property of OLAP analysis: managers can prepare the necessary report in a relatively short time.
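As an illustration, the cube of Figure 1 can be represented as a pandas pivot table and sliced along the company axis. This is a small sketch of our own; the company names, building types, and investment values are invented and not taken from the paper.

```python
import pandas as pd

data = pd.DataFrame({
    "company":  ["Company1", "Company1", "Company2", "Company2"],
    "building": ["residential", "office", "residential", "office"],
    "quarter":  ["Q1", "Q1", "Q1", "Q1"],
    "investment": [12.5, 8.0, 20.0, 5.5],   # tens of millions of rubles (illustrative)
})

# The "cube": dimensions are (building, company, quarter), cells hold aggregated investments
cube = data.pivot_table(index="building", columns=["company", "quarter"],
                        values="investment", aggfunc="sum")

# Slice along the company axis, analogous to the Z-axis slice in Figure 1
slice_company1 = cube["Company1"]
print(slice_company1)
```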
Unlike OLAP analysis, Data Mining technology requires much more time. This technology makes it possible to study hidden, deep patterns and, on that basis, to plan strategic approaches to the construction business.
Data Mining is a deep analysis of data for the detection of previously unknown, practically useful regularities and knowledge necessary for making decisions in the construction area. Data Mining is based on the following methods: association rules, decision trees, classification algorithms, artificial neural networks, genetic algorithms, memory-based reasoning (MBR), case-based reasoning (CBR), and cluster analysis [7].
It should be noted that cluster analysis is especially useful for the construction area. In short, cluster analysis is a multidimensional statistical procedure that collects data containing information about objects and then orders the objects into relatively homogeneous groups. In the context of our consideration, this method makes it possible to unite construction objects into homogeneous groups and then purposefully explore these groups.
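A hedged sketch of such a grouping is shown below (the feature names and numbers are synthetic, invented for illustration only): k-means clustering of construction objects into homogeneous groups.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one building: [floor area (thousand m²), age (years),
# annual maintenance cost (millions of rubles)] — illustrative values
buildings = np.array([
    [12.0,  5, 3.2],
    [11.5,  7, 3.5],
    [45.0, 30, 9.8],
    [47.5, 28, 10.4],
    [22.0, 15, 5.1],
    [21.0, 12, 4.9],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(buildings)
print(labels)  # objects with the same label form one homogeneous group
```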
Many of the Data Mining methods, in particular cluster analysis, have been carried over to the later technology, Big Data. In addition, a number of new approaches are used in Big Data, such as crowdsourcing [8], data fusion and integration [9], and others.
As a result, Big Data technology made it possible to quickly process structured and unstructured data of huge volume and significant diversity. For a short characterization of Big Data technology, "VVV" is often used, which means: Volume, the technology allows processing of very large amounts of data; Velocity, high-speed processing and rapid results; Variety, the possibility of simultaneously processing different types of data. There is now a large number of works on the subject of Big Data in which this direction is investigated in more depth and detail (for example, [10]), so we confine ourselves to the information given above.
In further consideration, we will use elements of set theory. Note that in the construction area we often have to deal with a very large amount of data. Consider, as an example, the problem of processing the data obtained during the operation of multi-storey houses. Each of these objects is characterized by its own dataset Q_k, and all the data can be represented as the union (sum) of these sets. If a DW is used for data storage and processing, each element of a set Q_k should be represented as a set whose elements are bound to time values t1, t2, ..., tp.
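A minimal sketch of this representation follows; the object identifier, field names, and values are invented for illustration and are not taken from the paper.

```python
from datetime import date

# Q_k: each monitored quantity of construction object k is a set of (time, value) pairs
Q_3 = {
    "energy_consumption_kWh": {date(2018, 1, 1): 41500, date(2018, 2, 1): 39800},
    "water_consumption_m3":   {date(2018, 1, 1): 1260,  date(2018, 2, 1): 1190},
}

# The warehouse holds the union of all objects' time-bound datasets, keyed by object id
warehouse = {3: Q_3}
print(warehouse[3]["energy_consumption_kWh"][date(2018, 2, 1)])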
Thus, the data of a large-scale construction object (for example, a group of multi-storey houses) represent a huge amount of time-bound numerical data. This fact is one of the main reasons for the use of the Data Warehouse in large-scale construction. The ability to store such large amounts of data existed before Big Data technology, but there were no technologies that could process this data in a reasonably short time. Processing data from the Data Warehouse with the help of Big Data technology makes it possible to raise research in the construction area to a higher level. The following pattern is characteristic: as new data is added to the DW, the quality of the analysis results improves.
Consider another variant of the application of the DW and Big Data in the field of construction. Typically, a construction company enters into a contract with an IT company that owns Big Data technology in order to obtain a number of results. For a construction company, it would be more interesting to get a tool that allows it to analyze different variants by manipulating the data obtained from the Big Data analysis. For example, the data can be represented as a multidimensional data cube.
This solution is shown in Figure 2. For clarity, we suggest that the X axis is the district of the city, the Y axis the year, and the Z axis the type of building. The cells contain the cost of building (predicted with the help of Big Data technology). Then slices may be produced along different axes to bring the data into tables, analyze them, and obtain results. For example, we can get a forecast estimate of the cost of a building for a certain year.
Despite the formal resemblance to an OLAP cube, the essence of the analysis is fundamentally different: OLAP technology deals with operational data, whereas the proposed method is designed to work with data obtained by processing large amounts of information using Big Data technology.
In the paper, the principles of Data Warehouse construction are briefly described: data is merged into categories and stored according to the areas it describes, not the applications that use it. Data in the Data Warehouse is not created there: it comes from external sources and is not adjusted or deleted. The data in the warehouse is accurate and correct only when it is bound to a certain point in time. The corresponding practical interpretation is given: it is necessary to save in the DW data on construction objects, to store time-bound data, and to aggregate data on various objects [11].
The following pattern is noted: as the DW is filled with data, the value of the total amount of information in the DW increases, and accordingly better results can be obtained when processing the information.
It is stated that it is possible to implement two types of data analysis using the DW. Firstly, OLAP analysis: a set of technologies for the rapid processing of data presented as a multidimensional cube. Secondly, Data Mining: an intelligent, in-depth analysis of the data for the detection of previously unknown, practically useful regularities and knowledge necessary for decision making (in our case, for the construction industry). The methods included in Data Mining technology are briefly discussed. It is noted that cluster analysis is especially important for the construction industry, as this method makes it possible to combine construction objects into homogeneous groups and then purposefully investigate these groups [12].
It is noted that the role of the DW has significantly increased because many methods and approaches of Data Mining have formed the basis of a new, promising technology, Big Data. It is shown that this technology can be successfully used in the construction industry. The paper proposes a new approach to the analysis of data in the construction area, based on the use of Big Data technology and elements of OLAP analysis.
Discussion
This section of the article discusses the following issue (see Figure 3). There are a number of enterprises in the construction industry that have useful data (for example, companies that manage the operation of buildings). In Figure 3 these companies are marked with the digit I. The main operational task of these companies is to keep the buildings in proper condition. For most of these companies (especially small and medium ones), research and intellectual analysis are not conducted, and much of the data is eventually destroyed. There are also a number of enterprises (marked II) that need this data in processed form. Modern data processing involves the use of a Data Warehouse and an intelligent technology such as Big Data. Companies that design new buildings can serve as type II enterprises. To improve their results, companies of the second type need data on the construction or operation of existing objects. But these companies do not have the amount of data necessary for modern data processing facilities. (Note that the use of the DW, Big Data, and other modern data processing facilities involves significant investments; for small and medium enterprises such a task is impossible.) From the above, it follows logically that companies of type III (Figure 3) are needed: companies owning the above-mentioned technologies, in particular the DW and Big Data. These companies acquire data from the type I companies and put it in the Data Warehouse. At the request of the type II companies, the data stored in the DW is processed and the results are transferred to the type II companies.
Let us make two comments: 1) For now we consider only the statement of the problem; organizational, legal, and other aspects of the task are not considered.
2) Companies of type I can also make requests to companies of type III to obtain the results of data processing with the help of Big Data technology or another intellectual technology (in Figure 3 this information flow is shown by a dashed line).
Thus, in the construction area, the following business is possible. A company (in our terminology, of type III) owning the IT technologies invests funds G1 to pay for the information services of type I companies. At the requests of type II companies, this company carries out intellectual data processing and receives funds G2. We see the classic business rule: the difference between the received and invested funds, G2 − G1, should recoup all the expenses of the type III company and give additional profit. In our opinion, this is quite a promising kind of business in the construction area. | 2019-04-16T13:29:11.975Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "c73584180a4eca2487d29e10e41dc19ce2a9bb4c",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/110/matecconf_ipicse2018_03062.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "320af21eb118dd9238db47853b3b7e5d6821b7be",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
228864577 | pes2o/s2orc | v3-fos-license | Communicative Styles through the Prism of Intersubjectivity
The style in which people communicate has a significant impact on how they get the things they want, express needs, avoid conflicts, and build healthy intersubjective relationships. The success of communication is always the shared responsibility of both communicators, the sender and the recipient. The article offers the theoretical assumptions and practical results of research on the communicative correlation between the phenomenon of intersubjectivity and the existence of the communication styles: assertive, aggressive, and submissive. The authors introduce a semiological approach to the paradigm of intersubjective processes, apply conversation analysis to the material of English fictional discourse, and characterize the nonverbal profiles of the communication styles under investigation. The article aims at highlighting the specificity of intersubjectivity realization in different communicative styles according to the degree criterion and intensity features. The main findings of the research reveal intersubjectivity as a communicative style forming principle, which differently actualizes the concept of ''self and other'' and manages the creation of communicative climates (supportive or unsupportive) via the verbal and nonverbal behaviors of the sender and the recipient, who may or may not demonstrate their intersubjective competence.
Introduction
Communication can be defined as a continuous interactive process of expressing, interpreting, and negotiating messages, sharing communicative intentions, emotional attitudes, and social values. Communication is a complex multimodal phenomenon, which is never one-sided. Instead, it is always realized synergistically between people involved in the process of communication. The communication process is reflected in the amount and style of interaction in order for the sender to ensure that his / her messages have been received in the way they were intended, and for the recipient to ensure that he / she is interpreting the messages correctly. In this respect, many communication scholars (Gamble, 2012;Norton, 1983) highlight that any style of conversation results from joint, task-motivated efforts of participants to make sense of each other as subjects of the communicative event. Thus, the significance and the rationale of our study are to provide a new critical thinking framework to the communicative styles taxonomy guided by the pragmatics of intersubjectivity.
The concept of intersubjectivity has become of interest to communication researchers in recent years. It has been argued that making cognitive and pragmatic sense of discourse requires the intersubjective competence of how to use and interpret the verbal and nonverbal behaviors of communicators. The same concerns the peculiarity of the communicative style (Gamble, 2012) of discourse leading: the way of multisemiotic communicators' behavior built upon the opposition ''to win :: to lose'', which has not yet been the object of particular linguistic research.
This paper aims to consider the English communicative styles through the prism of intersubjectivity. The article outlines the theoretical approaches to intersubjectivity; investigates the communicative styles (assertive, aggressive, and submissive) as certain types of intersubjectivity actualization; differentiates between the nonverbal profiles of each communicative style; and highlights the communicator's intersubjective competence for creating the necessary communicative climate.
Theoretical Background of the Research
Communicating, or getting our message across, is the concern of us all in our daily lives in any language we happen to use. Learning to be better communicators is important for us in our private and public activities (Rehling, 2004). Better communication means better understanding of ourselves and others, less isolation from those around us, and more productive, happy lives.
A multidisciplinary interest in the study of language and communication evolves through urgent demands and challenges of time conditioned by modern social standards of human interaction (Levinson, 2013). Language in use is nowadays understood and thoroughly studied not only as a cognitive, psychological, philosophical, neurolinguistic creature but also as a social and intersubjective phenomenon. The process of human interaction is inherently intersubjective, comprising subjective components of assessment and attitude to create ''a speaker's imprint'' in discourse (Finegan, 1995), aimed to be potentially shared by others (Nuyts, 2001). The phenomenon of intersubjectivity and the study of language as a form of social practice have become topics not only for multidisciplinary research but for linguistics in particular (Shevchenko, 2010).
Philosophically, the problem of ''other minds'', as a central concern in studies of human consciousness and mind, reveals shared social experiences as phenomena that transcend human subjectivity per se and leads to the creation of the dichotomies of ''self and other'' and ''personal and shared''. While subjectivity is explained as the linguistic expression of the speaker's involvement, intersubjectivity is defined as the linguistic expression of the sender's (speaker's / writer's) attention to the recipient (the hearer / the reader) (Traugott, 2010). In other words, intersubjectivity as a phenomenological property inherent in man as a social being may be characterized as the ability to share the mental and emotional states of ''others'' (Martynyuk, 2020).
Meanwhile, there exist two recognized approaches to the topic of intersubjectivity in linguistics: cognitive and interactional. The cognitive approach focuses on the analysis of linguistic structures that provide for intersubjectivity, based on Edmund Husserl's philosophy and Maurice Merleau-Ponty's phenomenology of the human body (Merleau-Ponty, 1964). The interactional approach is based on conversational analysis and Alfred Schutz's phenomenology of the social world (Schutz, 1967). The scholar claimed that intersubjectivity exists as a practical problem and a challenge for every case of communication. It should also be noted that both approaches highlight an essential notion of intersubjectively validated social reality as a sphere of language functioning and understanding that linguistic intersubjectivity is based on normativity in linguistic structures, language use, and everyday practices in social interaction (Etelämäki, 2016).
Based on the semiotic nature of language and discourse, it seems reasonable to introduce a new approach to the study of intersubjectivity-a semiological one. Semiology has its philosophical basis in the Kantian dichotomy of mental (phenomenal) and nonmental (material) worlds, which corresponds to the classic European dichotomy of subjective and objective. The semiological approach can reveal how to induce identical or similar informational states in the minds and attitudes of the sender and the recipient via their different verbal and nonverbal signs of behavior in communication. The use of verbal and nonverbal semiotic resources for intersubjectivity in different discourse genres is influenced by human and contextual factors of communication. They form and shape up the process of communication, where ''personal and shared responsibility'' can be studied in terms of psychological stances and evaluations, social and cultural norms, communicative intentions, styles, and climates.
Intersubjectivity, as one of the central notions of social semiotics, tends to expand its application in different discourse practices and domains. Discourse, in its turn, is understood as an interactional and intersubjective process of ''mind interaction'' aimed at constructing language signs and presented as a unique social reality (Martynyuk, 2009;Stepanov, 1998;Shevchenko, 2018). The research principle of the semiological approach to multimodal intercommunication, ''man vs. the world'', presupposes consideration of problematic issues of verbal and nonverbal semiosis, including the problem of the exteriorization of emotional states with the help of gestures, posture, mimics, facial signals, voice characteristics, etc. in different discourse practices, coming into the focus of attention of another subject.
Along with verbal means (words, sentences), we use voice, gestures, facial expression, and many other nonverbal means of communication to convey meaning to persons around us. An awareness of body language (the subtle messages given by postures, hand movements, eyes, and smiles) is among the many avenues to improved communication. Communication researchers claim that in social interaction, individuals communicate far more nonverbally than verbally.
The discovery of the importance of nonverbal communication has transformed the study of human social behavior. Nonverbal signs appear to operate at three levels of communication. First, they define and condition the communication system. Second, nonverbal cues help to regulate the communication system: they signal referents and statuses, indicate who is to speak next, and provide feedback about evaluations and feelings. Finally, nonverbal signs communicate contents and intentions in discourses.
Effective communication requires the use of verbal and nonverbal means to avoid a direct answer or to hide one's intent while appearing to be open and forthright. In both instances, an understanding of what is really happening, as opposed to what one would like to see happening, is the first step towards improved intersubjectivity of communication.
The success of a particular communication strategy depends on many factors such as the willingness of people to understand each other, the level of their intersubjective competence and language acquisition, the social and cultural backgrounds, and the choice of appropriate discourse behaviors of participants.
In the context of the intersubjective communicative act (Martynyuk, 2020), both verbal and nonverbal semiotic systems as interrelated perceptual stimuli refer to the same referential situation and activate conscious/unconscious, shared/different sensory-motor, affective, cognitive, and volitional structures of the subjects' experience which are associated with the situation and determine the communicative meanings in the process of interpretation, resting on the intersubjective nature of human consciousness.
Amongst the three models of communication described by Schiffrin (1994), the coding-encoding, the (cognitive) inferential, and the interactional, the latter is entirely different from the previous ones in its interpretation of intersubjectivity. In the framework of the interactional model of communication, intersubjectivity is viewed as the psychological and phenomenological experiencing of common interests, actions, etc., and this ''unity'' is not permanent; it is considered a part of ''communicative work'' aimed at its reproduction and maintenance in each new act of communication (Dubtsova, 2015). The importance of ''intersubjective knowledge'' is difficult to overestimate in the process of construing extra-linguistic reality in different discourse domains, genres, and modes. ''Shared knowledge'' and ''shared responsibility'', together with various forms of their representation and actualization, determine the choice of communicative styles, communicative climate, communicative outcomes, and distancing or rapprochement as manifestations of politeness strategies in discourse, etc.
To sum up, it is essential to underline the significance of the intersubjective aspect of communicative behavior both in verbal and nonverbal interaction since intersubjectivity is formed in the process of direct communication as a result of bodily-sensory and spiritual experience acquired in different discursive practices, creating and defining the features of this practice.
Methods
To achieve the aim of the research and accomplish its tasks, this study was conducted to assess the fundamental role of intersubjectivity actualization in communicative style differentiation. Several methods of linguistic analysis were used, namely conversation, semiotic, contextual, and pragmatic analyses. Conversation analysis was used to provide a detailed description of turn-taking peculiarities, ''repair'' actions, and sequences of actions that participants choose to address troubles of speaking, hearing, and understanding. The rest of the methods were used within the ideology of linguistic pragmatics. They concern how saying something can count as doing something in terms of speech acts, emotions, and attitudes, and how communicators produce and listen to the clues that allow for the intersubjective identification of whatever is meant to be done.
Applied to English interaction, semiotic analysis serves to reveal a set or inventory of nonverbal actions that is typically used in the aggressive, assertive, and submissive communicative styles. Levinson (2013) argues that the ''front-loaded'' information of voice pitch, gaze, gesture, and turn-initial tokens (such as ''oh,'' ''look,'' ''well'') can potentially tip off the recipient as to what is being intended and done intersubjectively.
The research material includes discursive fragments, singled out from modern English fictional conversational discourse, focusing on the designation of the characters' nonverbal behaviors in everyday communicative situations. As an illustration, consider a husband-wife dialog analyzed with regard to the intersubjectivity of their communicative styles. It is evident that the wife chooses the aggressive style of communication by letting her husband know with her words, voice, and physical actions that she is upset and outraged; she does not care about her husband's excuses, so the degree of her intersubjectivity here is low. On the contrary, the husband chooses the submissive style by responding in a kind manner, using words, vocal cues, and gestures to explain his behavior and to show respect to his wife; thus, the degree of his intersubjectivity in this communicative event is high. Both individuals' communicative styles are affected by the nature of the situation (they are late for the theater), by their attitudes (how they feel about what is occurring), and by their past experiences.
Results
We like to think of communication as ideal, conflict-free cooperation. In reality, people enter every conversation with definite needs, interests, and aims for which they are ready ''to fight''. To achieve the required communicative result in the turn-taking process, people use different discourse strategies and tactics to create the necessary style of communication: they give orders, make promises, ask for favors, extend invitations, flatter, blame, plead, etc.
The pragmatic intent of the aggressive style is to dominate. Aggressive people always want to win: they insist on standing up for their own rights while ignoring and violating the rights and interests of others. The aggressive person begins by attacking, thereby initiating conflicts. The following example may illustrate the above-mentioned statement, "Marielle looked up into his face. He was furious. His dark eyes were flashing and his mouth was set in a thin line of displeasure. "I'm lost. I, uh, got out to sort of stretch my legs for a moment and then …" She faltered. "And then you damned near got run over!" he finished for her (H. Whitley). Here, the problem of ''other minds'' doesn't exist at all: the main concept of intersubjectivity ''self and other'' is violated as the communicative style is based on the opposition ''winner-loser''.
The assertive person intends to communicate in a confident, partnership, cooperative way, i.e., honestly, clearly, attentively, and friendly. The aim of the assertive person is to support other's beliefs and ideas without harming the recipient. Assertive people behave in a comfortable manner that attracts attention and respect by showing a strong, confident personality. For example, "It sounds like fun," Marielle said. "Of course I'll go." Bandy's face creased with pleasure. "Miss Mari, I'd be honored," he said gallantly. He took off his hat and, from the saddle, bowed with a flourish (H. Whitley). In the assertive style, the key Arab World English Journal www.awej.org ISSN: 2229-9327 138 concept of ''self and other'' is not violated; instead, it is favored as this communicative style is based on the opposition ''winnerwinner''.
Submissive people are neither aggressive nor assertive in their discourse behavior. They are very shy, are ready to lose, and show the willingness to obey other people. "Are you okay? he asked. "You look a little feverish. "No, no," she hastily replied, coming back to the present with a jolt. "I mean …yes." She laughed nervously. "I mean, no, I'm not feverish and yes, I'm okay" (H. Whitley). As this communicative style is based on the opposition ''loser-winner'', the key intersubjective concept of ''self and other'' is specifically violated under the influence of uncertainty and subordination of the character.
Consider how differently people demonstrate their communicative style by such nonverbal means as voice characteristics and body language (see Table one), increasing or decreasing the intensity if intersubjectivity in conversations: Based on the material analysis, we came to a conclusion that the intersubjectivity is a communicative style forming principle. Intersubjectivity of negative or low degree completion corresponds to the aggressive style of communication; regular positive and intensive realization of intersubjectivity favors the formation of the assertive communicative style, while the weak, uncertain verbal and nonverbal realization of intersubjectivity is relevant to the submissive communicative style.
The combinations and sequences of these styles with different degrees of intersubjectivity realization can create two types of communicative climates based either on cooperation or competition. These climates are called the supportive and unsupportive (or defensive). Supportive climate is provided by such discourse actions as cooperation, encouragement, satisfaction. This is Arab World English Journal www.awej.org ISSN: 2229-9327 139 achieved by description, problem orientation, spontaneity, empathy, and equality expressed verbally and nonverbally. The unsupportive climate is based on defensive behavior, which occurs when a participant perceives or anticipates a threat. Defensive actions give rise to defensive means, such as the vocal, facial and postural cues that accompany words. We behave defensively when we perceive others are attacking our self-concept. The unsupportive climate is marked by the following behaviors: evaluation, control, strategy, neutrality, superiority, and certainty. The effectiveness and feedback of the relevant communicative climate creation are closely connected with the sender's and the recipient's intersubjective competence, which requires thorough investigation and assessment in discourse-analysis and communication studies.
Conclusion
With the research aim of considering the communicative styles through the prism of intersubjectivity, we applied conversation analysis to the material of English fictional discourse. Due to this, we managed to characterize the nonverbal profiles of the communicative styles under investigation and discovered the specificity of intersubjectivity realization in different communicative styles according to the degree and intensity features. A low degree of intersubjectivity corresponds to the aggressive style of communication; positive and intensive realization of intersubjectivity favors the formation of the assertive communicative style; weak, uncertain verbal and nonverbal realization of intersubjectivity is relevant to the submissive communicative style. Thus, we may conclude that intersubjectivity is a communicative style forming principle, which differently actualizes the concept of ''self and other'' via the intersubjective competence of the sender and the recipient. | 2020-11-19T09:12:45.318Z | 2020-11-15T00:00:00.000 | {
"year": 2020,
"sha1": "046ada198e38438b5e0dbf54f35da2bf18214875",
"oa_license": null,
"oa_url": "https://awej.org/images/AllIssues/Specialissues/SpecialIssueonheEnglishLanguagenraqiContext2020/SpecialIssueonheEnglishLanguageonUkraineContext2020/12.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ecc21e2dd8e627a6a835118dded0cd37e5641281",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
27497530 | pes2o/s2orc | v3-fos-license | Identification of Central‐Pacific and Eastern‐Pacific types of ENSO in CMIP3 models
Much understanding of the El Niño‐Southern Oscillation (ENSO) has been obtained from the analyses of the climate simulations produced for World Climate Research Programme's Coupled Model Intercomparison Project phase 3 (CMIP3). However, most of these analyses do not consider the existence of the Eastern‐Pacific (EP) and Central‐Pacific (CP) types of ENSO events, which have been increasingly recognized as two distinct types of interannual sea surface temperature (SST) variation in the tropical Pacific. This study uses a regression‐Empirical Orthogonal Function method to identify how well these two ENSO types are captured in the pre‐industrial simulations of nineteen CMIP3 models. It is concluded that most CMIP3 models (13 out of 19) can produce realistically strong CP ENSOs, but only a few of them (9 out of 19) can produce realistically strong EP ENSOs. Six models that realistically simulate both the EP and CP ENSOs and their intensity ratio are identified. By separating the SST variability into these two types, it is further revealed that the leading periodicity of the simulated EP ENSO is linearly related to the latitudinal width of SST variability and varies from 1 to 5 years. As for the simulated CP ENSO, its leading periodicity is either 2 or 4 years depending on whether its SST variability is located to the east of the dateline or in the western‐Pacific warm pool, respectively. The identification produced in this study offers useful information to further understand the two types of ENSO using the CMIP3 models.
Introduction
[2] Significant advances in climate research have been obtained from analyses of the simulations produced for the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 3 (CMIP3), comprised of extended integrations with 24 coupled atmosphere-ocean general circulation models (CGCMs). Many studies have analyzed the El Niño-Southern Oscillation (ENSO) simulations produced by these models and have reported models' successes and deficiencies in capturing the observed features of ENSO [e.g., Guilyardi et al., 2009]. However, most of the analyses do not consider the existence of two different types of ENSO events, which has been increasingly suggested by recent studies [Larkin and Harrison, 2005;Yu and Kao, 2007;Ashok et al., 2007;Kao and Yu, 2009;Kug et al., 2009]. These two types include a conventional ENSO type [Rasmusson and Carpenter, 1982] that has its primary sea surface temperature (SST) anomalies centered in the eastern Pacific, and a non-conventional ENSO type that has SST anomalies confined more to the central Pacific. Kao and Yu [2009] refer to these two types as the Eastern-Pacific (EP) and Central-Pacific (CP) types, respectively. It should be mentioned that different flavors of ENSO have been noticed before these recent studies. Trenberth and Stepaniak [2001], for example, were among the first to recognize that the different characters and evolutions of ENSO events could not be fully accounted for without considering the SST contrast between the eastern and central equatorial Pacific. However, there is still no documentation of how well these two types of ENSO events are captured in the CMIP3 models. Such documentation could be useful for selecting CMIP3 models to study these two types of ENSO events. The purpose of this study is to provide such documentation and to demonstrate new information on ENSO simulations that can be obtained by taking this two-ENSO view.
Data
[3] Pre-industrial integrations produced by the CMIP3 models are analyzed in this study, in which greenhouse gases are held fixed at pre-industrial levels. Nineteen CMIP3 CGCMs are analyzed in this study, excluding five that show little interannual SST variability in the tropical Pacific. For comparison purposes, only 100 years of the integrations are analyzed. For the observational SST, we use Extended Reconstruction of Historical Sea Surface Temperature version 3 (ERSST V3) data [Smith and Reynolds, 2003] and Met Office Hadley Centre Sea Ice and Sea Surface Temperature data (HadISST) [Rayner et al., 2003] over the period of 1950-2009. Monthly SST anomalies from the observations and the pre-industrial simulations are computed by removing the monthly mean climatology and the trend.
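For readers who wish to reproduce the anomaly preprocessing, a hedged sketch follows (not the authors' code; the function name is ours, and it assumes monthly SST supplied as an xarray DataArray with a "time" dimension). It removes the monthly mean climatology and the linear trend, as described above.

```python
import xarray as xr

def sst_anomalies(sst: xr.DataArray) -> xr.DataArray:
    """Return detrended monthly SST anomalies from a monthly SST field."""
    # Remove the monthly mean climatology
    clim = sst.groupby("time.month").mean("time")
    anom = sst.groupby("time.month") - clim
    # Remove the linear trend along time
    coeffs = anom.polyfit(dim="time", deg=1)
    trend = xr.polyval(anom["time"], coeffs.polyfit_coefficients)
    return anom - trend
```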
Results
[4] Following Kao and Yu [2009], we apply a combined regression-Empirical Orthogonal Function (EOF) analysis (similar to the conditional EOF of An [2003]) to the monthly SST anomalies to identify the EP and CP types of ENSO events. We first remove the tropical Pacific SST anomalies that are regressed with the Niño-4 SST index and then apply the EOF to the residual SST anomalies to obtain the SST anomaly pattern for the EP ENSO. Similarly, the SST anomaly pattern of the CP ENSO is obtained by applying the EOF to the residual SST anomalies after the anomalies regressed with the Niño1+2 SST index are removed. Different from Kao and Yu [2009], we remove not only the simultaneous regression but also the regression at lag −3, −2, −1, +1, +2, and +3 months to consider the propagation of SST anomalies between the central and eastern Pacific, although the results are not very different with or without these additional removals. Figure 1 shows the results obtained from this regression-EOF analysis. One of the models shown in Figure 1 appears to have its simulated ENSO displaced too far westward if it is judged based on the total SST variability. But after separating the variability into the EP and CP types, we find that both the EP and CP ENSO patterns are reasonably close to the observations. Nevertheless, the simulated intensity of the EP ENSO is too weak, which leads to the too-far-westward appearance of the total SST variability. We have also calculated the correlation coefficients between the principal components of the leading EP and CP EOFs and find them to be small for most models (the mean correlation is 0.26), which indicates that the EP and CP ENSOs are reasonably separated by the regression-EOF method.
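The core of the simultaneous regression-EOF step can be sketched in a few lines of numpy (our own illustration, not the authors' code; the authors additionally remove the regressions at lags of ±1 to ±3 months, which is omitted here, and the input is assumed to be a time-by-gridpoint anomaly matrix with zero time mean at each point).

```python
import numpy as np

def leading_eof_after_regression(sst_anom, index):
    """sst_anom: (time, space) anomaly matrix; index: (time,) SST index (e.g. Nino-4)."""
    idx = (index - index.mean()) / index.std()
    # Regression coefficient of each grid point onto the standardized index
    beta = sst_anom.T @ idx / len(idx)
    # Remove the regressed part, leaving the residual anomalies
    residual = sst_anom - np.outer(idx, beta)
    # Leading EOF of the residual via SVD
    u, s, vt = np.linalg.svd(residual, full_matrices=False)
    pattern = vt[0] * s[0] / np.sqrt(len(idx))   # spatial loading pattern
    pc = u[:, 0] * np.sqrt(len(idx))             # principal component time series
    return pattern, pc
```

Using the Niño-4 index yields the EP-type pattern; repeating the call with the Niño-1+2 index yields the CP-type pattern.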
[5] In the observations (Figures 1a and 1b), the EP ENSO is characterized by SST variability extending from the South American Coast into the central Pacific along the equator. The observed CP ENSO has most of its SST variability located in the central tropical Pacific (between 160°E and 120°W) and extends towards the subtropics of both hemispheres. Most of the models produce an EP ENSO similar to the observations, except their latitudinal widths can differ from the observations. The pattern correlation between the simulated EP ENSO and the observed one varies from 0.21 (UKMO-HADCM3) to 0.93 (INGV-ECHAM4), with an average of 0.75. For the CP ENSO, some models can capture the poleward extension of the SST variability pattern, but some have the variability more confined near the equator. Furthermore, some simulated CP ENSOs are located in the western Pacific warm pool and others towards the international dateline. The average pattern correlation coefficient is 0.62.
[6] We then calculate the maximum STD values from Figures 1 (middle) and 1 (right) to quantify the intensities of the observed and simulated EP and CP ENSOs. Figure 2a displays the EP versus CP ENSO intensity through a scatter diagram. The observed intensities (averages of Points A and B in Figure 2a) are about 0.7°C for the CP ENSO and 1.0°C for the EP ENSO. The ratio of the EP-to-CP ENSO intensities is about 1.4, which means the EP ENSO dominates the CP ENSO by about 40% in intensity. We use the lower limit of the 95% significance interval of the observed ENSO intensities (based on an F-test) as the criteria to determine which CMIP3 models produce realistically strong EP and CP ENSOs. The lower 95% limit values are 0.78°C for the EP type and 0.51°C for the CP type. Based on these criteria, thirteen of the nineteen models produce realistically strong CP ENSOs, but only nine produce realistically strong EP ENSOs.
[8] To demonstrate that separating the EP and CP types of ENSO in the CMIP3 simulations can reveal new information, we look further into the leading periodicities of these two ENSO types. Figure 2b displays the scatter diagram of the leading periodicity of the EP ENSO versus that of the CP ENSO for the nineteen CMIP3 models and the two observed SST data sets. The leading periodicity is determined by performing a power spectrum analysis of the principal component of the leading SST EOF modes of the two ENSO types. Shown in Figures 2c and 2d are the power spectra calculated from the ERSST data set for the EP and CP ENSOs. For the observed EP ENSO (Figure 2c), the power spectrum is dominated by a peak near 4 years. For the observed CP ENSO (Figure 2d), the power spectrum has two comparable peaks: one near 2 years and the other near 4 years. The 2-year peak has a larger power than the 4-year peak. Therefore, in Figure 2b the ERSST (Point A) is shown to have a leading period of 2 years for the CP ENSO and a leading period of 4 years for the EP ENSO. Similar leading periodicities are found from the HadISST data (Point B). Interestingly, it is noticed from the scatter diagram that the leading periods of the simulated CP ENSOs are grouped into two periods, namely, 2 and 4 years, which are the two major periods found in the observed CP ENSO. This result indicates that one group of CMIP3 models captures the 2-year component of the observed CP ENSO and the other group produces the 4-year component.
[9] We inspect the SST variability patterns in Figure 1 for these two groups of models and notice an interesting difference: the models that produce the 4-year CP ENSO tend to have their SST variability located towards the western-Pacific warm pool, while the models that produce the 2-year CP ENSO tend to have their SST variability located to the east of the international dateline. This difference is better revealed in Figure 3, which shows the zonal distribution of the equatorial (5°S-5°N) SST variability calculated from the leading EOF mode of the CP ENSO (Figure 1, middle) for the thirteen CMIP3 models that produce a realistically strong CP ENSO. The values shown are normalized by the respective maximum value of each distribution. The bluesolid lines in Figure 3 represent the models whose CP ENSO has a 4-year leading period, while the red-dashed lines indicate those whose CP ENSO has a 2-year leading period. Figure 3 shows that most of the red-dashed lines have their peak SST variability centered between 160°W and 120°W, while most of the blue-solid lines have their peak variability centered between 120°E and 160°E. These results suggest that there are two variants of the CP ENSO: a warmpool-CP and a dateline-CP, which are separated by different leading periodicities and different SST variability centers. Further studies are needed to better understand how these two variants of CP ENSO are produced and why some CMIP3 models produce one but not the other.
[10] From Figure 2b, we also notice that the leading period of the simulated EP ENSO can vary from close to one year to more than 4 years, while the observations show a 4-year leading periodicity. We inspect the EP ENSO patterns in Figures 1 and notice the leading periodicity seems to be related to the latitudinal width of the SST variability of the EP ENSO. This linkage is verified in Figure 4, where the leading periodicity of the EP ENSOs simulated by the nine CMIP3 models that produce strong EP ENSOs versus the latitudinal width (L y ) of their SST variability is shown. Here L y is defined as the e-folding width of the maximum value of the EP SST variability. A linear relationship appears among these scatter points, which shows the larger the latitudinal width, the longer the EP ENSO period. This is consistent with the suggestion of Kao and Yu [2009] that the EP ENSO is produced by subsurface variation processes similar to those described by rechargedischarge theory [Jin, 1997]. According to this theory, the EP ENSO acts as a mechanism to remove excess ocean heat contents from the equatorial to the off-equatorial Pacific. Therefore, when a wider latitudinal range is involved in the SST variability, it takes longer to complete the rechargedischarge process resulting in a longer periodicity for the ENSO. This result is also consistent with previous studies that showed ENSO period is related to the latitudinal width of the wind stress anomalies [e.g., Kirtman, 1997;Capotondi et al., 2006;Neale et al., 2008].
Summary and Discussion
[11] In this study, we examined the pre-industrial simulations produced by nineteen CMIP3 models to document how well these models capture the EP and CP types of ENSO events. Based on the intensity information, the CMIP3 models can be separated into groups that produce both CP and EP ENSOs that are realistically strong (nine . Leading periods of EP ENSO versus the meridional widths of their SST variability from the nine CMIP3 models that produce strong enough EP ENSO. models), only CP ENSOs that are realistically strong (four models), CP ENSOs that are too weak (six models), EP ENSO that are too weak (ten models), and realistic EP-to-CP ENSO intensity ratios (six models). This grouping information helps to determine which CMIP3 models should be used to study the EP ENSO, CP ENSO, and their interactions. By separating ENSO SST variability into the EP and CP types, we find two interesting features in the simulated ENSO periodicity. The period of the simulated EP ENSO varies between 1 and 5 years and is linearly related to the latitudinal width of its SST variability pattern. But no such connection is found for the simulated CP ENSO, whose periodicity can only be 2 or 4 years. Although we do not know the reason for this period selection, we found that the selection is related to location of the center of the CP ENSO variability. The longer 4-year CP ENSO is located over the warm pool and the shorter 2-year CP ENSO is located to the east of the dateline. This interesting result supports the suggestion that the CP ENSO is not generated by the thermocline variation mechanism. Otherwise, the warmpool-CP ENSO should have a shorter period than the dateline-CP ENSO because the former is located closer to the western Pacific and should be influenced by the thermocline wave reflection and propagation sooner than the latter [An and Wang, 2000]. In conclusion, the identification produced by this study offers information that can be useful for using CMIP3 models to study the dynamics of EP and CP ENSOs. It should be noted that the results reported in this study are obtained from 100 years of pre-industrial integrations, which may not necessarily be the same in different centuries of model simulations due to the decadal and century timescale variability of ENSO events [e.g., Knutson and Manabe, 1998;Timmermann, 1999;Meehl et al., 2006]. | 2018-01-24T06:06:47.564Z | 2010-08-01T00:00:00.000 | {
"year": 2010,
"sha1": "09458f418b1da07f28f2585d451d776655ec33ce",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt7w59j395/qt7w59j395.pdf?t=n57p06",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "02ba7c8ca300f01ec392782741a3cc985cc8545e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology",
"Environmental Science"
]
} |
Association of electroencephalogram epileptiform discharges during cardiac surgery with postoperative delirium: An observational study
Background Delirium is a frequent and serious complication following cardiac surgery involving cardiopulmonary bypass (CPB). Electroencephalography reflects the electrical activity of the cerebral cortex. The impact of electroencephalographic epileptiform discharges during cardiac surgery on postoperative delirium remains unclear. This study was designed to investigate the relationship between intraoperative epileptiform discharges and postoperative delirium in patients undergoing cardiac surgery. Methods A total of 76 patients who underwent cardiac surgery under CPB were included. The baseline cognitive status was measured before surgery. Electroencephalograms were monitored continuously from entry into the operating room to the end of surgery. The presence of delirium was assessed through the Confusion Assessment Method or the Confusion Assessment Method for the Intensive Care Unit on the first 3 days after surgery. Univariate and multivariate logistic regression analyses were performed to evaluate the association between epileptiform discharges and delirium. Results Delirium occurred in 31% of patients and epileptiform discharges were present in 26% of patients in the study. Patients with delirium had a higher incidence of epileptiform discharges (52.63% vs. 13.95%, P < 0.001) and longer durations of anesthesia and CPB (P = 0.023 and P = 0.015, respectively). In addition, patients with delirium had a longer length of hospital stay and a higher incidence of postoperative complications. Multivariate logistic regression analysis showed that age and epileptiform discharges were significantly associated with the incidence of postoperative delirium [odds ratio, 4.75 (1.26–17.92), P = 0.022; 5.00 (1.34–18.74), P = 0.017, respectively]. Conclusions Postoperative delirium is significantly related to the occurrence of epileptiform discharges during cardiac surgery.
Introduction
Delirium is a type of acute brain dysfunction associated with changes in consciousness, attention, and cognitive function; it is common after cardiac surgery (1,2). Delirium can increase hospitalization costs and length of stay (LOS), can reduce the quality of life, and is closely associated with early postoperative cognitive dysfunction (3–6). Notably, one of the pathophysiological mechanisms of delirium is neurotransmitter imbalance, which can lead to changes in electroencephalogram (EEG) patterns (7,8). A study has found that adults who undergo cardiac surgery utilizing cardiopulmonary bypass (CPB) are more likely to experience seizures and increased operative mortality (9).
The epileptiform discharges on the EEG are a good indicator of the abnormal excitability of the brain; they are associated with neurocognitive decline (10), Alzheimer's disease (11), hypoxic encephalopathy after cardiac arrest (12), and autism (13). Therefore, epileptiform discharges may provide a method for early identification of abnormal discharges of the brain. In our preliminary observations, we found that patients undergoing CPB demonstrate epileptiform discharges on the EEG during surgery. However, the relationship between epileptiform discharges and delirium is unclear.
This study was designed to investigate the relationship between epileptiform discharges on the EEG and postoperative delirium during cardiac surgery under CPB. It aimed to provide clinical evidence for the use of electroencephalography in patients with postoperative delirium.
Materials and methods Patients
This single-center prospective observational study was performed at the General Hospital of Ningxia Medical University between July 2, 2020, and July 16, 2021. It was approved by the Ethics Committee of the General Hospital of Ningxia Medical University (KYLL-2021-358) and is registered at https://www.clinicaltrials.gov/ (NCT04943939). The study was performed according to the Declaration of Helsinki. All participants provided written informed consent before participating in the study.
A total of 76 patients were enrolled in this prospective observational study. All had American Society of Anesthesiologists physical status III, were aged >18 years with corrected preoperative conditions, were scheduled for elective open-chamber cardiac valve reconstruction or replacement with cardiopulmonary bypass, and were admitted to the cardiac surgical care unit after surgery. Patients were excluded if they received off-pump cardiac surgery; underwent surgery for correction of congenital heart disease; had a history of stroke, schizophrenia, depression, epilepsy, dementia, or drug addiction; were unable to communicate due to language impairment or significant hearing or visual impairment; had severe liver dysfunction or severe renal insufficiency requiring preoperative renal replacement therapy; or had a history of intraoperative awareness or falls in the last 6 months. Patients who withdrew consent and refused further participation were also excluded from the study.
Perioperative management
Patients received standard monitoring, including electrocardiography, pulse oximetry, and continuous monitoring of radial arterial blood pressure, nasopharyngeal temperature, end-tidal CO 2 , and urine output. Anticholinergic drugs such as scopolamine were strictly prohibited during the study period. Atropine was only used to treat bradycardia, and midazolam was not used as an anxiolytic. Etomidate or propofol, sufentanil, and rocuronium were used for induction of anesthesia. After tracheal intubation, anesthesia was maintained by continuous infusion of propofol, remifentanil, and rocuronium boluses according to clinical needs. The use of vasoactive drugs was personalized according to the patient's condition. Propofol was administered with the depth of anesthesia adjusted based on the patient status index, which was generated by SedLine (Masimo Inc., United States). After the operation, patients were sent to the cardiac surgical care unit for further standardized treatment.
EEG recording and analysis
The SedLine monitor is a four-channel EEG monitoring device that collects data from the frontal lobe of the brain at locations specified through Fp1, Fp2, F7, and F8; it has been developed to monitor sedation depth and brain electrical activity in patients who are under anesthesia. Electrode impedance was maintained at less than 5 kΩ in each channel. The EEG was recorded continuously from the baseline (before anesthesia) to the end of surgery. The EEG results were independently analyzed with a focus on amplitude and frequency according to the American Clinical Neurophysiology Society's Standardized Critical Care EEG Terminology (14) and classified as rhythmic polyspikes (PSR), periodic epileptiform discharges (PED), delta with spikes (DSP), and suppression with spikes (SSP) (15). The EEG was interpreted by neuroelectrophysiology physicians. The data were considered as epileptiform discharges based on the agreement of neuroelectrophysiology physicians.
Data collection
Data regarding the patient's general condition, history of alcohol use, and comorbid conditions were recorded. The baseline cognitive function was evaluated using the Mini-Mental State Examination (MMSE). The time of intraoperative surgery and details of anesthesia including the dosage of anesthesia and vasoactive drugs used were also recorded. The hemoglobin after surgery, LOS, and other complications (cardiac events, cerebrovascular events, kidney injury, and infection, among others) that occurred during the postoperative hospitalization were also recorded.
Postoperative delirium assessment and analysis
The development of delirium was the primary outcome of the study. The presence of delirium was evaluated by trained research team members using the Confusion Assessment Method (CAM) or the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) at specific times, as their reliability has been previously demonstrated (16). Researchers screened patients for delirium using the CAM or the CAM-ICU twice a day (at 6-9 am and 6-9 pm) for 3 days, starting from the first day after surgery. The CAM-ICU was used to evaluate those patients who had a Richmond Agitation Sedation Scale score of −3 or greater during cardiac surgical care unit admission with stable hemodynamics and respiration and in whom the tracheal tube could not be removed or who could not answer questions. Patients who did not achieve a Richmond Agitation Sedation Scale score of −3 were reassessed later, and the score was recorded. Patients were classified as having postoperative delirium if they screened positive at any time.
Sample size calculation
The sample size was calculated based on the incidence of postoperative delirium in a previous study, in which delirium occurred in 72% of patients with epileptiform discharges and 32% of those without (17). The sample size that would yield a statistical power of 90% for detecting this difference at a two-tailed significance level of 0.05 was calculated to be 62 patients. Considering an estimated dropout rate of 20%, we intended to enroll 76 patients. The sample size was calculated using PASS 11.0 software.
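For illustration, the generic two-proportion power calculation can be sketched as below. The allocation ratio and the exact test used in PASS 11.0 are not stated in the text, so this sketch (equal group sizes, normal approximation) is an assumption and will not necessarily reproduce the reported figure of 62 patients.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed delirium incidences from the cited study: 72% with vs. 32% without
# epileptiform discharges; two-sided alpha = 0.05, power = 0.90 (as in the text).
effect = proportion_effectsize(0.72, 0.32)           # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.90, ratio=1.0, alternative="two-sided")

n_total = 2 * n_per_group
n_enroll = n_total / (1 - 0.20)                      # inflate for an assumed 20% dropout
print(f"per group: {n_per_group:.1f}, total: {n_total:.1f}, enroll: {n_enroll:.0f}")
```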
Statistical analysis
Statistical analysis was performed using SPSS 25.0 software. Data are presented as medians with the interquartile range (IQR) or as frequencies (n) and percentages (%). Significant differences in categorical variables were compared between the groups using the chi-square test. For parametric continuous variables, the t-test was used. Continuous variables that did not meet the criteria for parametric testing were evaluated using the Mann-Whitney U test. The odds ratio with 95% confidence intervals and corresponding P values were calculated for each risk factor. To determine the impact of epileptiform discharges on the incidence of delirium, we performed a binary logistic regression analysis of independent associations. Variables that significantly affected delirium on univariate analysis were included during multivariate logistic regression analysis. A two-sided P value of <0.05 was considered statistically significant.
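A minimal sketch of the univariate-then-multivariable logistic regression workflow described above is shown below. The synthetic data frame and variable names are illustrative only; they are not the study's actual fields or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (62 patients, binary outcome "delirium")
rng = np.random.default_rng(1)
n = 62
df = pd.DataFrame({
    "age_over_60": rng.integers(0, 2, n),
    "epileptiform_discharges": rng.integers(0, 2, n),
    "cpb_time": rng.normal(135, 30, n),
})
logit_p = -2.0 + 1.2 * df["age_over_60"] + 1.4 * df["epileptiform_discharges"]
df["delirium"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Univariate logistic regressions: keep predictors with P < 0.05
significant = []
for var in ["age_over_60", "epileptiform_discharges", "cpb_time"]:
    uni = smf.logit(f"delirium ~ {var}", data=df).fit(disp=False)
    or_ = np.exp(uni.params[var])
    lo, hi = np.exp(uni.conf_int().loc[var])
    p = uni.pvalues[var]
    print(f"{var}: OR {or_:.2f} ({lo:.2f}-{hi:.2f}), P = {p:.3f}")
    if p < 0.05:
        significant.append(var)

# Multivariable model with the univariately significant predictors
if significant:
    multi = smf.logit("delirium ~ " + " + ".join(significant), data=df).fit(disp=False)
    print(np.exp(multi.params))   # adjusted odds ratios
```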
Results
The study flow diagram is presented in Figure 1. Patients were enrolled between July 2, 2020, and July 16, 2021. A total of 76 patients were selected for this prospective observational study, of whom 62 were eventually enrolled and their data analyzed. The intraoperative EEG data could not be analyzed in nine and five patients owing to poor or incomplete data collection during surgery and excessive artifacts (eyelid motion and muscle artifacts), respectively.
The median age of the remaining patients was 59 years; 65% were male, and the median body mass index was 23.95 kg/m². At presentation, the physical status of all patients was of grade III, according to the American Society of Anesthesiologists. The MMSE scale was used to assess the preoperative cognitive function of the patients; the median value was found to be 25. The median anesthesia maintenance, CPB, and aortic cross-clamp times were 330, 135, and 88 min, respectively. On postoperative delirium screening (CAM or CAM-ICU), 19/62 (31%) patients tested positive; 43/62 (69%) did not have postoperative delirium. The characteristics of the patients in the two groups are summarized in Table 1; there were no significant differences between the two groups in terms of baseline characteristics (P > 0.05). Patients in the delirium group differed from those in the non-delirium group in that they were older [64 (60-66) vs. 56 (51-62) years, P = 0.001] and the LOS was longer [21 (17-27) vs. 18 (13-22) days, P = 0.004]; anesthesia (P = 0.023) and cardiopulmonary bypass (P = 0.015) times were also longer; and hemoglobin after surgery was lower [100 (113-129) vs. 120 (132-148), P = 0.001].
A total of 18 patients developed postoperative complications; patients with delirium had higher complication rates than those without delirium (36.84% vs. 25.58%). However, it was unclear whether ischemic or latent stroke occurred, as confirmatory computed tomography or magnetic resonance imaging was not performed. In addition, none of the patients admitted to the cardiac surgical intensive care unit were observed to have seizures.
Univariate analyses of independent associations
For univariate binary logistic regression analysis, the occurrence of delirium was used as the dependent variable, and the factors associated with delirium were used as independent variables. The preoperative MMSE score did not differ between patients with and without delirium (P = 0.153). Likewise, the duration of surgery and anesthesia and dosage of anesthesia did not differ between the groups (P = 0.122, P = 0.754, and P = 0.273, respectively). Previous studies have shown that prolonged CPB leads to poor prognosis in patients undergoing cardiac surgery; however, there was no statistically significant difference between the two groups in this study [odds ratio, 2.70 (0.77-9.50); P = 0.122]. Univariate analysis showed that only age [odds ratio, 6.33 (1.79-22.41); P = 0.004] and epileptiform discharges [odds ratio, 6.85 (1.97-23.84); P = 0.002] were significantly and independently associated with postoperative delirium in our study ( Table 3). The incidence of delirium increased with age; 52% of patients were aged over 60 years. In this study, only age was found to be related to epileptiform discharges [odds ratio, 4.26 (1.19-15.25); P = 0.026]. We did not find other factors, such as time of CPB and dose of anesthetic drugs, to be related to epileptiform discharges.
Multivariable logistic regression analyses
On univariate logistic regression analysis of the risk factors for delirium, only age and epileptiform discharges were considered relevant predictors for further multivariate logistic regression analysis. The overall model for both predictors was significant. The values were as follows: age [odds ratio, 4.75 (1.26–17.92), P = 0.022] and epileptiform discharges [odds ratio, 5.00 (1.34–18.74), P = 0.017] (Table 3).
(Figure 1. Flow diagram demonstrating the process of patient enrollment. POD, postoperative delirium; EEG, electroencephalogram.)
Discussion
In this study, we found that the incidence of epileptiform discharges is surprisingly high in patients undergoing cardiac surgery; they were observed in 16/62 (26%) of patients. Overall, 19/62 (31%) of patients developed delirium after cardiac surgery. Epileptiform discharges and age were identified as factors independently associated with the development of delirium after cardiac surgery. Therefore, we speculate that the occurrence of epileptiform discharges is related to postoperative delirium in patients undergoing cardiac surgery.
We found that among the patients who are at a high risk of delirium after cardiac surgery, approximately 31% developed delirium; this agrees with the incidence of delirium (23%-52%) reported by previous studies on patients who underwent cardiac surgery (18,19). Although the pathophysiological mechanism of delirium remains unclear, the changes in brain electrical activity caused by a variety of factors play an important role in pathogenesis; these include neurotransmitter imbalance, changes in neuronal excitability, and overactivated inflammatory responses (20, 21). In particular, neurotransmitter imbalances such as increased dopamine and glutamate and decreased glutamine in the cerebrospinal fluid may increase the fragility of the brain, thereby contributing to delirium (22).
EEG is an effective tool for monitoring the electrical activity of the brain. Epileptiform discharges are characterized by abnormal spontaneous discharges in the brain that can affect cognition and awareness. Hanak et al. (23) found that the expression of metabotropic glutamate receptor 5 in the hippocampus decreases during epileptic seizures; this leads to a large accumulation of glutamate, which acts on ionic receptors. This in turn results in Ca2+ and Na+ influx and K+ outflow, inducing abnormal synchronous discharge and epileptic seizures. Inflammatory processes in the brain and damage to the blood-brain barrier often cause destruction of ion channels, abnormal neurotransmitter uptake and release, and excitatory neurotoxicity (24). This indicates that the occurrence of epileptiform discharges and delirium may have a common mechanism. Epileptiform discharges can be observed in EEG records of other surgical patients, especially in those with brain dysfunction (25). In addition, epileptiform discharges can also occur in patients without seizures or a diagnosis of epilepsy (26). In our study, the intensive care unit staff did not observe seizures in the patients after surgery, yet 26% of patients demonstrated epileptiform discharges during cardiac surgery. In another study on postoperative patients in the cardiac surgical care unit, Tschernatsch et al. (27) found the incidence of epileptiform discharges and abnormal EEGs to be 9% and 33%, respectively. The high incidence of epileptiform discharges in our study may be attributed to the fact that we analyzed intraoperative EEG data, which capture abnormal electrical activity in the brain caused by anesthesia drugs and by changes in cerebral blood flow during CPB. Our study found epileptiform discharges to occur during anesthesia induction in 5/16 (31%) patients. A previous study found that anesthesia drugs, such as propofol and sevoflurane, can induce delirium and epileptiform discharges (28). Therefore, the sudden increase in blood concentration of propofol and other anesthesia drugs during anesthesia induction may cause abnormal brain discharges and be related to the occurrence of epileptiform discharges. In addition, 12/16 (75%) patients had epileptiform discharges during CPB, which may be because ischemia-hypoxia, hypoperfusion, and hyperperfusion of the brain tissue during CPB can cause ion channel and blood-brain barrier dysfunction, leading to abnormal neuronal firing and seizures (29, 30). Therefore, the occurrence of epileptiform discharges on intraoperative EEGs may be related to the above factors; however, the specific mechanism needs further exploration. Previous studies have shown that EEG may correlate with the occurrence of delirium (31, 32). Fritz et al. (33) found that a longer duration of intraoperative EEG suppression is associated with a higher occurrence of delirium. Eskioglou et al. (34) found that the EEG of patients with delirium in the intensive care unit may demonstrate burst suppression, rhythmic or periodic patterns, and seizures or status epilepticus. Patients with delirium also show increased slow-wave activity involving the occipitoparietal and frontal cortex with disruption of functional connectivity (35). These findings imply that EEGs play an important role in predicting the occurrence of delirium.
This study further explored and analyzed the relationship between intraoperative EEG and delirium. Using univariate and multivariate logistic regression models, we found a significant association between epileptiform discharges and delirium. Epileptiform discharges were more common in patients with delirium, and the univariate logistic analysis showed that the occurrence of epileptiform discharges and age were each independently associated with delirium.
In this context, persistent cognitive deficits in terms of memory and learning occur in patients with epileptic seizures or persistent epileptic states (36) and the use of the antiepileptic drug levetiracetam can effectively reverse behavioral abnormalities and reduce epileptic seizures in patients with Alzheimer's disease (37). Therefore, the identification and detection of early epilepsy and appropriate timely treatment may have important clinical implications for the occurrence and development of delirium. The results from the present study provide a valuable basis for further study of the sources and clinical effects of abnormal EEG discharges.
Age is currently known to be a risk factor for delirium. Our study found a significant association between age and delirium; the incidence of delirium increased with age. The two groups in our study did not differ in terms of other risk factors for delirium, including a history of alcohol use and comorbid conditions; we found that the LOS was longer in patients with delirium.
The results of this study offer promise for the diagnosis of delirium. However, certain limitations need to be mentioned. First, the patients were only monitored through intraoperative EEG; they were not monitored after admission to the cardiac surgical care unit and the staff did not record any convulsions. Second, we excluded patients with cognitive impairment and dementia before surgery; this may make our study less comprehensive. Therefore, the results cannot be generalized to patients with cognitive dysfunction to assess whether the preoperative cognitive state has any relation with intraoperative epileptiform discharges and postoperative delirium. Third, the anesthesia drugs may affect EEG electrical activity. Fourth, we did not analyze the duration of burst suppression, other abnormal brain waves, and the duration of epileptiform discharges. At last, we did not have access to tools such as computed tomography and magnetic resonance imaging to confirm the occurrence of stroke, cerebral microthrombosis, and other cerebrovascular accidents after surgery. In the future, it will be necessary to combine EEG recordings with findings on magnetic resonance imaging or other investigations to establish the association between epileptiform discharges and delirium.
Conclusion
Our results suggest that intraoperative epileptiform discharges on EEG during cardiac surgery may be associated with the occurrence of postoperative delirium. The results of this study indicate the need for additional research to further characterize epileptiform discharges, investigate whether interventions can reduce the incidence of delirium, and determine any causal relationships between intraoperative EEG abnormalities and postoperative delirium.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the General Hospital of Ningxia Medical University (KYLL-2021-358). The patients/participants provided their written informed consent to participate in this study.
Funding
The work was supported by the Key Research and Development Program of Ningxia Hui Nationality Autonomous Region (2021BEG02036).
"year": 2022,
"sha1": "d470ab590e251d43325cce6fc800378662efd87a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d470ab590e251d43325cce6fc800378662efd87a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Multitask learning for host–pathogen protein interactions
Motivation: An important aspect of infectious disease research involves understanding the differences and commonalities in the infection mechanisms underlying various diseases. Systems biology-based approaches study infectious diseases by analyzing the interactions between the host species and the pathogen organisms. This work aims to combine the knowledge from experimental studies of host–pathogen interactions in several diseases to build stronger predictive models. Our approach is based on a formalism from machine learning called ‘multitask learning’, which considers the problem of building models across tasks that are related to each other. A ‘task’ in our scenario is the set of host–pathogen protein interactions involved in one disease. To integrate interactions from several tasks (i.e. diseases), our method exploits the similarity in the infection process across the diseases. In particular, we use the biological hypothesis that similar pathogens target the same critical biological processes in the host, in defining a common structure across the tasks. Results: Our current work on host–pathogen protein interaction prediction focuses on human as the host, and four bacterial species as pathogens. The multitask learning technique we develop uses a task-based regularization approach. We find that the resulting optimization problem is a difference of convex (DC) functions. To optimize, we implement a Convex–Concave procedure-based algorithm. We compare our integrative approach to baseline methods that build models on a single host–pathogen protein interaction dataset. Our results show that our approach outperforms the baselines on the training data. We further analyze the protein interaction predictions generated by the models, and find some interesting insights. Availability: The predictions and code are available at: http://www.cs.cmu.edu/∼mkshirsa/ismb2013_paper320.html Contact: j.klein-seetharaman@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.
Figures
Figure 2: Part of the "glucose transport pathway" in human. Grey nodes represent the human proteins (genes) involved. Edges represent causality in the process. This pathway involves the transport of glucose from outside the cell to various components inside the cell.
Feature description and details
The following features were defined for the three high-throughput human–bacterial datasets for the bacterial species B. anthracis, F. tularensis, and Y. pestis. The features for S. typhi were downloaded from the authors' website.
1. GO similarity features: These features model the similarity between the functional properties of two proteins. Gene Ontology (GO) provides GO-term annotations for three important protein properties: molecular function (F), cellular component (C) and biological process (P). We derive 3 types of features using these three properties. The similarity between two individual GO terms was computed using the G-Sesame algorithm. This feature can be considered as a matrix M of all the GO term combinations found in a given protein pair <p_b, p_h>. The rows of the matrix represent GO terms from protein p_b and the columns represent GO terms from p_h. In this matrix M, we set the features corresponding to each of the GO-term combinations present to 1 and the remaining features to 0. This feature thus records the co-occurrence of GO terms.
2. Protein sequence n-mer or n-gram features: Since the sequence of a protein determines its structure and consequently its function, it may be possible to predict PPIs using the amino acid sequence of a protein pair. Shen et al. (2007) introduced the "conjoint triad model" for predicting PPIs using only amino acid sequences. They partitioned the twenty amino acids into seven classes based on their electrostatic and hydrophobic properties. For each protein, they counted the number of times each distinct three-mer (set of three consecutive amino acids) occurred in the sequence. To account for protein size, they normalized these counts by linearly transforming them to lie between 0 and 1 (see Shen et al. (2007) for details). They represented the protein with a 343-element feature vector, where the value of each feature is the normalized count for one of the 343 (7^3) possible class-encoded three-mers. We use two-, three-, four-, and five-mers. For each host–pathogen protein pair, we concatenated the feature vectors of the individual proteins. Therefore, each host–pathogen protein pair had a feature vector of length at most 98, 686, 4802, and 33614 in the cases of two-, three-, four-, and five-mers, respectively (a sketch of this encoding is given after this feature list).
3. Graph-based features using the human interactome: These features are derived using only the human protein p_h from the pair. Pathogens generally target host proteins that are important in several host processes; these host proteins interact with many other host proteins to carry out their tasks. This insight is captured in the form of three graph properties of the human protein "node" in the human interactome graph: degree, betweenness centrality and clustering coefficient. The human interactome was downloaded from HPRD. The degree of a node is the number of its neighbouring nodes in the graph. The clustering coefficient of a node n is defined as the ratio of the number of edges present amongst n's neighbours to the number of all possible edges that could be present between the neighbours. Betweenness centrality for a node n is defined as the sum, over all pairs of nodes (u, v), of the fraction of shortest paths from u to v that pass through n; mathematically, g(n) = Σ_{u≠n≠v} σ_{uv}(n)/σ_{uv}, where σ_{uv} is the number of shortest paths from u to v and σ_{uv}(n) is the number of those paths passing through n. Intuitively, nodes that occur on many shortest paths between other vertices have higher betweenness than those that do not (these properties are also computed in the sketch after this feature list).
Gene expression features:
The intuition behind this feature is that genes that are significantly differentially regulated upon being subject to bacterial infection, are more likely to be involved in the infection process. These features are derived using the gene of the human protein 'p h ' from the pair. We selected transcriptomic datasets GSE12131, GSE14390, GSE5966 for B. anthracis, GSE12108, GSE22203 for F. tularensis and GSE22299, GSE18293 for Y. pestis from the GEO database. These give the differential gene expression of human genes infected by the bacteria, under different control conditions.
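As referenced in items 2 and 3 above, a compact sketch of the sequence- and graph-based feature construction is given here. The seven-class amino acid partition is the grouping commonly attributed to Shen et al. (2007) and should be checked against that paper, and the toy interactome stands in for the HPRD network, whose download and parsing are omitted.

```python
from itertools import product
import networkx as nx

# Sequence features (item 2): class-encoded k-mer counts in the spirit of the conjoint triad model.
CLASSES = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
AA2CLASS = {aa: str(i) for i, grp in enumerate(CLASSES) for aa in grp}

def kmer_features(seq, k=3):
    """Normalized counts of the 7**k class-encoded k-mers of one protein sequence."""
    encoded = "".join(AA2CLASS.get(aa, "") for aa in seq.upper())
    counts = {"".join(p): 0 for p in product("0123456", repeat=k)}
    for i in range(len(encoded) - k + 1):
        counts[encoded[i:i + k]] += 1
    vals = list(counts.values())
    lo, span = min(vals), (max(vals) - min(vals)) or 1
    return [(v - lo) / span for v in vals]          # linear scaling to [0, 1]

# Graph features (item 3): degree, clustering coefficient and (normalized) betweenness
# of the human protein node, computed here on a toy interactome.
interactome = nx.Graph([("H1", "H2"), ("H1", "H3"), ("H2", "H3"), ("H3", "H4"), ("H4", "H5")])
degree = dict(interactome.degree())
clustering = nx.clustering(interactome)
betweenness = nx.betweenness_centrality(interactome)

def pair_features(pathogen_seq, human_seq, human_id, k=3):
    """Concatenated feature vector for one host-pathogen protein pair."""
    return (kmer_features(pathogen_seq, k) + kmer_features(human_seq, k) +
            [degree.get(human_id, 0), clustering.get(human_id, 0.0),
             betweenness.get(human_id, 0.0)])

vec = pair_features("MKTAYIAKQR", "MSLSRRQFIQ", "H3")
print(len(vec))   # 2 * 7**3 + 3 = 689 for three-mers
```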
Precision Recall curves from 10 fold CV results
We plot precision versus recall for our method, MTPL, on the 4 tasks in Figure 3, using the results from the 10-fold CV experiments. The classifier score for each test instance was aggregated from the various pairwise models in a manner similar to what is explained in Section 4.3 of the main paper. Let the classifier scores (i.e., w·x) from each model for a given test instance x be {s_1, s_2, ..., s_{m−1}}; the aggregated multi-task classifier score of x is computed from these scores. The classifier threshold was then varied and the precision (P) and recall (R) were computed for each threshold. The final curve was obtained by aggregating the P-R curves from each of the ten folds.
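A sketch of how per-fold precision-recall curves can be produced from aggregated scores is given below. Because the aggregation rule for the per-model scores is not reproduced in this supplement, a simple mean is used here as a stand-in assumption, and the labels and scores are synthetic.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def aggregate(scores_per_model):
    # Stand-in aggregation over the m-1 pairwise model scores (assumed: mean)
    return np.mean(scores_per_model, axis=0)

rng = np.random.default_rng(0)
curves = []
for fold in range(10):
    y_true = rng.integers(0, 2, size=500)                       # toy labels
    scores = aggregate(rng.normal(size=(3, 500)) + y_true)      # toy per-model scores
    precision, recall, _ = precision_recall_curve(y_true, scores)
    curves.append((precision, recall))
# "curves" holds one P-R curve per fold; these can be interpolated onto a common
# recall grid and averaged to form the final aggregated curve.
```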
Results from 50 bootstrap sampling experiments
We validate the improvement in performance by checking for statistical significance using paired t-tests on 50 bootstrap sampling experiments. We only compare MTPL (our method) with Indep (independent models). Each bootstrap sampling experiment consists of the following procedure: we first make two random splits of 80% and 20% of the data, such that the class ratio of 1:100 is maintained in both. The training set is then constructed using a bootstrap sample from the 80% split, and the test data come from the 20% split. A total of 50 models are thus trained and evaluated. We do not tune parameters again for each model and instead use the optimal setting of parameter values from our 10-fold CV experiments. The F1 is computed for each experiment, thereby giving us 50 values, which serve as our samples for the hypothesis test. The F1 values averaged over the 50 experiments are in the table below.
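The 50-run bootstrap comparison can be sketched as follows. The helper train_eval_f1 is hypothetical (it would train the named model and return F1 on the held-out split), and this sketch does not reproduce the stratification that keeps the 1:100 class ratio in both splits.

```python
import numpy as np
from scipy.stats import ttest_rel

def bootstrap_f1s(n_examples, train_eval_f1, n_runs=50, seed=0):
    """Return paired F1 arrays for MTPL and Indep over n_runs bootstrap experiments."""
    rng = np.random.default_rng(seed)
    f1_mtpl, f1_indep = [], []
    for _ in range(n_runs):
        idx = rng.permutation(n_examples)
        train_pool, test_idx = idx[:int(0.8 * n_examples)], idx[int(0.8 * n_examples):]
        boot = rng.choice(train_pool, size=len(train_pool), replace=True)  # bootstrap sample
        f1_mtpl.append(train_eval_f1("MTPL", boot, test_idx))
        f1_indep.append(train_eval_f1("Indep", boot, test_idx))
    return np.array(f1_mtpl), np.array(f1_indep)

# Paired t-test on the paired F1 values (toy numbers shown for illustration)
t, p = ttest_rel([0.62, 0.60, 0.65, 0.61], [0.58, 0.57, 0.61, 0.59])
print(t, p)
```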
Note that the overall results of both methods are worse than the 10 fold CV. This happens due to the bootstrap sample, which allows an instance to be sampled multiple times. A sample of size n will have fewer than n unique instances. Hence, the effective size of the training data seen by the classifier is less than 80%. Compare this to the 10 fold CV experiments, where the classifier has access to 80% of the data during training.
The performance of MTPL is better than Indep for the three high-throughput datasets. We see an improvement of 3.8 F1 points for B. anthracis, 2.7 for F. tularensis, and 3.5 for Y. pestis. The Indep results are better for S. typhi by 3.2 F1 points.

5 Intersection of pathways enriched in the PPIs from training and predicted
Here we show the intersection between the pathways enriched in the predicted interactions and the pathways enriched in the gold-standard positives. For both enrichment computations, the human genes from the interactions are considered. We used Fisher's exact test and a p-value cut-off of 1e-7. The filled circles on the left of each intersection represent the enriched pathways in the predictions. The empty circles on the right show the enriched pathways in the training data. We can see that there are several new pathways enriched in the predictions as compared to those enriched in the gold-standard data.
Figure 4: Enrichment intersection between training PPIs and predicted PPIs. Cut-off used for enrichment: 1e-7.
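A per-pathway enrichment test of the kind described above can be sketched as follows; the gene sets are toys and the 1e-7 cut-off is taken from the text.

```python
from scipy.stats import fisher_exact

def pathway_enrichment(hit_genes, pathway_genes, background_genes, alpha=1e-7):
    """One-sided Fisher's exact test for over-representation of a pathway among the
    human genes appearing in predicted (or gold-standard) interactions."""
    hits, background = set(hit_genes), set(background_genes)
    path = set(pathway_genes) & background
    a = len(hits & path)                    # hit genes in the pathway
    b = len(hits - path)                    # hit genes outside the pathway
    c = len(path - hits)                    # pathway genes not hit
    d = len(background - hits - path)       # remaining background genes
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return p, p < alpha

background = {f"G{i}" for i in range(1000)}
pathway = {f"G{i}" for i in range(50)}
hits = {f"G{i}" for i in range(30)} | {"G900", "G901"}
print(pathway_enrichment(hits, pathway, background))
```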
6 List of 17 pathways commonly enriched from predictions across all bacterial datasets
"year": 2013,
"sha1": "58ef15f083c00ee3ae8e604a9c6e258576b7c7af",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/bioinformatics/article-pdf/29/13/i217/18536239/btt245.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e30ed4938187ce36364831f6854629999c57cb68",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
]
} |
UV effects on the primary productivity of picophytoplankton: biological weighting functions and exposure response curves of Synechococcus
A model that predicts UV effects on marine primary productivity using a biological weighting function (BWF) coupled to the photosynthesis–irradiance response (BWF/P-E model) has been implemented for two strains of the picoplanktonic cyanobacteria Synechococcus, WH7803 and WH8102, which were grown at two irradiances (77 and 174 µmol m⁻² s⁻¹ photosynthetically available radiation (PAR)) and two temperatures (20 and 26 °C). The model was fit using photosynthesis measured in a polychromatic incubator with 12 long-pass filter configurations with 50 % wavelength cutoffs ranging from 291 to 408 nm, giving an effective wavelength range of 280–400 nm. Examination of photosynthetic response vs. weighted exposure revealed that repair rate progressively increases at low exposure but reaches a maximum rate above a threshold exposure ("E_max"). Adding E_max as a parameter to the BWF/P-E model provided a significantly better fit to Synechococcus data than the existing "E" or "T" models. Sensitivity to UV inhibition varied with growth conditions for both strains, but this was mediated mainly by variations in E_max for WH8102, while both the BWF and E_max changed for WH7803. Higher growth temperature was associated with a considerable reduction in sensitivity, consistent with an important role of repair in regulating sensitivity to UV. Based on nominal water column conditions (noon, solstice, 23° latitude, "blue" water), the BWF E_max/P-E model estimates that UV + PAR exposure inhibits Synechococcus photosynthesis from 78 to 91 % at 1 m, and integrated productivity to 150 m by 17–29 %, relative to predicted rates in the absence of inhibition.
Introduction
Inhibition of phytoplankton photosynthesis by solar ultraviolet (UV) and photosynthetically available radiation (PAR) occurs at least episodically in almost all near-surface waters of global marine and freshwater environments (Villafañe et al., 2003;Harrison and Smith, 2009).However, most detailed studies of the spectral dependence of inhibition in marine phytoplankton have focused on species or assemblages characteristic of high-latitude polar environments where the UVB (280-320 nm) spectrum has been (and continues to be) affected by seasonally severe depletion of stratospheric ozone (Weatherhead and Andersen, 2006).Hence, little is known about the spectral dependence of photosynthetic response for phytoplankton prevalent in the central mid-and low-latitude ocean.Important components of the assemblages in these regions are strains of picoplanktonic cyanobacteria under the broad taxonomic classification Synechococcus sp.-major contributors to global primary production.Synechococcus sp.inhabit a wide range of temperate and subtropical environments having moderate to high UV transparency (Fichot and Miller, 2010;Garczarek et al., 2008).Although there is ample evidence from laboratory studies that the photosynthetic apparatus of Synechococcus is impaired by UV radiation (e.g., Garczarek et al., 2008;Mella-Flores et al., 2012;Six et al., 2007b), it is not clear how much these effects result in decreased photosynthetic performance under natural conditions.
Definition of the spectral dependence of inhibition in Synechococcus sp. is motivated by several reasons.First, efficient numerical approaches have been developed enabling the inclusion of spectrally dependent inhibition in the prediction of global marine productivity (Cullen et al., 2012).In these approaches, approximations are constructed for the relationship between productivity and in situ irradiance weighted for inhibition effectiveness by a set of spectral coefficients, the biological weighting function (BWF).However, no BWFs have been defined for the dominant taxa of the central oceanic regions, either in culture or for natural populations.Moreover, all current BWFs for UV inhibition of photosynthesis have been defined using eukaryotes, and there is evidence that resistance to irradiance stress is less in prokaryotes of the open oligotrophic ocean (Kulk et al., 2011).Early attempts to relate the relative effect of in situ (or simulated in situ) exposure on phytoplankton photosynthesis to broadband irradiance (weighted or unweighted) in temperate to tropical open ocean environments showed on the order of a factor of 2 variation in response at any given exposure (Smith et al., 1980;Behrenfeld et al., 1993).These estimates will presumably be better constrained if more is known about BWFs of the species in these assemblages and how they vary with growth condition.
A primary driver in much of the work on defining the spectral response to UV, in both aquatic and terrestrial environments, has been to assay the effect of variations in short-wavelength UVB coupled to changes in stratospheric ozone (Day and Neale, 2002).Stratospheric ozone was depleted over the latter decades of the 20th century, but continued depletion was halted by controls imposed on the atmospheric release of chlorofluorocarbons (CFCs).Presently, chlorinated species derived from the UV-mediated decomposition of CFCs that have accumulated in the upper stratosphere continue to depress ozone, but the effect is mild in the mid-and low latitudes (Weatherhead and Andersen, 2006).There is little expectation that the very low ozone columns that occur seasonally in high latitudes (where decomposition is enhanced by the catalytic effect of ice crystals) could ever be reached under present or typical future atmospheric conditions in temperate or tropical regions of the ocean where Synechococcus sp. is abundant.However, over longer (geological) timescales episodes of low ozone could occur (or may have already occurred) if the stratosphere is (was) affected by particle showers from a gamma ray burst (Thomas, 2009).Biological weighting functions extending to the short wavelengths that would be highly enhanced under such a scenario will also be useful in estimating the possible impact of a gamma ray burst on marine productivity.
The present contribution reports on the exposure response curves and spectral dependence (BWF) for UV inhibition of photosynthesis in two well-studied strains of Synechococcus sp., generally known by the codes assigned by the Woods Hole Oceanographic Institution, WH7803 and WH8102.The approach is similar to that used in previous studies of the spectral weighting function of phytoplankton photosynthesis, based on a custom spectral incubator, the "photoinhibitron" (Neale and Fritz, 2001).In the present studies, we used a new, expanded version of the photoinhibitron that enhances the estimation of BWF coefficients at short wavelengths.We also present a new model integrating the BWF and the photosynthesis-irradiance curve (BWF/P-E model) that better accounts for the effects of inhibition over an extended range of exposure.
Growth conditions
Synechococcus strains CCMP1334 and CCMP2370 (synonymous with WH7803 and WH8102, respectively) were obtained from the National Center for Marine Algae (NCMA, formerly CCMP) and were grown on SN media (Andersen et al., 2005). Growth irradiance was provided by cool white fluorescent lamps on a 14 h light : 10 h dark cycle. Growth PAR was measured with a 4-π probe immersed in water inside a culture flask. The average (±SD) of PAR in the growth chamber was 77 ± 12 (medium irradiance, or ML) and 174 ± 22 (high irradiance, or HL) µmol m⁻² s⁻¹, and T = 20 and 26 °C, with semi-continuous dilution. Although these strains can grow at lower temperatures, we chose these temperatures so that they could also be used in parallel experiments with Prochlorococcus, which has a narrower range of growth temperature. Growth curves were measured for each set of culture conditions and experiments were performed with cultures in early-to-mid-log phase. Experiments were repeated at least three times with independently grown cultures for each set of conditions. Cell enumeration was performed with a Multisizer 4 particle sizer/counter (Beckman/Coulter) using a 20 µm aperture (minimum resolution 0.4 µm). Samples were diluted as necessary with filtered seawater before counting.
Photosynthesis measurements
Incubations for the measurement of photosynthesis were performed using a polychromatic UV + PAR incubator with 2.5 kW xenon lamp (photoinhibitron), based on the design of Cullen et al. (1992) with modified block construction similar to that described by Smyth et al. (2012).The present study used six blocks (16.5 cm × 7 cm × 3.5 cm), each separately plumbed for coolant flow to enhance temperature regulation.Each block has vertical wells (formed by tubes connecting holes on the top and bottom of the block) for 20, 1 cm diameter, quartz bottom, cylindrical cuvettes, arrayed as 4 lengthwise rows of 5 wells.Two filter combinations were used in each block, each covering 10 wells (2 lengthwise rows), giving a total of 12 spectral treatments per incubation, with neutral density screens positioned in some slots to produce approximately equal increments of PAR within each treatment.× 2 * Letters identify the panel in Fig. 2 with example results for the listed spectral treatment group.a All Schott filters were 3 mm thick; "× 2" denotes that two pieces of filter glass or film were combined, doubling the effective pathlength (thus red-shifting the cutoff wavelength, especially for 1 % T ).b All filter combinations (either two pieces of filter glass or filter glass + film as indicated) were assembled with a high-purity silicone optical grease.This decreased variation in the index of refraction along the optical path and minimized scattering.Filter-filter or filter-film combinations had the same transmittance in the visible as single layer elements.c LG filters are manufactured by Corion and contain a polymer film sandwiched between two glass layers.The WG295 "prefilter" protected the polymer film from solarization by the short-wavelength irradiance from the xenon arc lamp.d Courtgard, a film manufactured by Solutia, Inc, blocks UV below 400 nm.
The filter combinations are listed in Table 1.Spectral irradiance (mW m −2 nm −1 ) for each well in the photoinhibitron was measured with a custom-built fiber-optic spectroradiometer as described by Neale and Fritz (2001).Some combinations resulted in exposure of suspensions to E(λ) at wavelengths < 290 nm, which do not normally reach the ocean surface.These treatments were added to extend model predictions to extremely low ozone columns as would occur if the stratosphere receives particle showers from a gamma ray burst (Thomas, 2009).
Photosynthesis per unit chlorophyll (Chl) a (P B ) was measured as total 14 C assimilation (acid-stable) in 1 mL aliquots during a 1 h incubation as described in Sobrino et al. (2007).Chl a was determined fluorometrically on culture aliquots filtered on to GF/F (Whatman) filters.The cells were disrupted by homogenizing the filters in 90 % acetone with a teflon pestle, followed by overnight extraction at −10 • C. The extracts were clarified by centrifugation and fluorescence was measured before and after acidification on a Turner Designs model AU-10 fluorometer calibrated with Chl a from spinach.Pigment absorbance (a p (λ), m 2 mg Chl −1 ) was measured using the quantitative filter technique (QFT) following the procedures described by Tzortziou et al. (2006).
Photosynthesis model
Data were fit to BWF/P-E functions of the form P^B = P^B_s [1 − exp(−E_PAR/E_s)] × ERC(E*_inh) (Eq. 1), where ERC is the exposure response curve for inhibition of photosynthesis. The set of ε(λ) is the BWF (see Table 2 for symbols and units). Given the spectral irradiance in each cell, ε(λ) and the other parameters are estimated using a non-linear regression approach. Details of the principal-component-based estimation procedure and error assessment are given in Cullen and Neale (1997). Standard errors for the parameter means over replicate experiments (n ≥ 3) are the root mean square (rms, quadrature) of the estimation standard errors (propagated from regression standard errors) and the standard error due to between-replicate variability. Three different response models (ERCs) were considered for inhibition, differing in how the rate at which activity is restored (referred to as "repair") depends on relative inhibition (referred to as "damage") (Fig. 1). The first ERC considered was the "E" model. This is the original ERC used to model responses to UV as measured in the photoinhibitron (Cullen et al., 1992). It assumes that repair is proportional to damage at all exposures (fixed rate constant for repair), leading to the steady-state response in Eq. (2). In later studies it was observed that the E model was not consistent with UV + PAR inhibition of photosynthesis in a eukaryotic picophytoplankter, Nannochloropsis (Sobrino et al., 2005). For this species, photosynthesis had a steeper decline (greater increase in inhibition) with increased exposure than would be predicted if repair rate scaled with damage at all exposures. This alternative response was better predicted using a fixed repair rate, which is the basis for the "T" model (Eq. 3). The "T" designation relates to the presence of a threshold (E*_inh = 1) above which, by definition, photosynthesis is inhibited. Compared to the E model, this model was significantly better at fitting observations at high E*_inh in several studies of cultures and natural assemblages (Sobrino and Neale, 2007; Sobrino et al., 2005; Sobrino et al., 2009).
(Fig. 1. Graphical illustration of the implied dependence of repair and damage rates for the E, T and E_max models.)
Table 2. Symbols and units (excerpt): a_IS(z), irradiance-weighted chlorophyll-specific absorption of PAR at depth z (m² mg Chl⁻¹); a_PI, irradiance-weighted chlorophyll-specific absorption of PAR in the photoinhibitron (m² mg Chl⁻¹); BWF, biological weighting function; c, scaling factor for exposures; underwater PAR irradiance at depth z adjusted for the difference in pigment absorption of PAR in situ vs. in the photoinhibitron (W m⁻²); E_s, characteristic irradiance for onset of saturation of photosynthesis (W m⁻²); E_Q(0⁻, λ), spectral photon flux density of PAR (400–700 nm) at the sea surface (µmol photons m⁻² s⁻¹); spectral photon flux density of PAR (400–700 nm) in the photoinhibitron (µmol photons m⁻² s⁻¹).
The third ERC, the "E_max" model, uses a combination of the E model at low exposures and the T model at high exposures (Eq. 4). The new E_max parameter defines the transition between the exposure range over which repair rate increases with damage and higher exposures for which repair rate is constant (i.e., operating at some maximum rate). A related, time-dependent version of the model, the R_max model, has been used previously to fit the UV + PAR inhibition of photosynthesis in phytoplankton from the Ross Sea, Antarctica (Smyth et al., 2012).
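Because the published expressions for Eqs. (2)–(4) are not reproduced in this excerpt, the sketch below encodes the exposure response curves as implied by the damage/repair descriptions above (repair proportional to damage giving a 1/(1 + E*) response, and a fixed maximum repair rate giving a 1/E* decline above the threshold), with the E_max branches matched at the transition. These piecewise forms are an assumption and should be checked against the original equations.

```python
import numpy as np

def erc_e(e_inh):
    """E model: repair proportional to damage at all exposures (assumed 1 / (1 + E*))."""
    return 1.0 / (1.0 + e_inh)

def erc_t(e_inh):
    """T model: no net inhibition below the threshold E* = 1; fixed repair above it."""
    return np.where(e_inh <= 1.0, 1.0, 1.0 / np.maximum(e_inh, 1.0))

def erc_emax(e_inh, e_max):
    """E_max model: E-model behaviour below E_max, constant (maximum) repair above,
    with the two branches matched at E* = E_max."""
    low = 1.0 / (1.0 + e_inh)
    high = (e_max / (1.0 + e_max)) / np.maximum(e_inh, e_max)
    return np.where(e_inh <= e_max, low, high)

def p_b(e_par, e_inh, p_s=5.0, e_s=20.0, erc=erc_emax, **kw):
    """BWF/P-E model (Eq. 1): saturating P-E response scaled by the chosen ERC."""
    return p_s * (1.0 - np.exp(-e_par / e_s)) * erc(e_inh, **kw)

e_star = np.linspace(0.0, 5.0, 6)
print(p_b(e_par=100.0, e_inh=e_star, e_max=0.5))
```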
Each of these models has an implied relationship between damage and repair that is illustrated in Fig. 1. The E_max model can represent a greater range of response shapes, gained through an additional parameter compared with the E or T model. Whether a sufficient increase in explained variance is gained to justify the additional parameter was assessed by evaluating the Akaike information criterion (AIC; Andersen et al., 2005) for each of the fits using the Matlab NonLinearModel function (Statistics toolbox).
Predicting depth profiles of photosynthesis
For an initial evaluation of model performance under quasi-real-world conditions, example profiles of photosynthesis were calculated using a set of nominal estimates of downwelling spectral irradiance in the oligotrophic ocean. Spectral irradiance just under the surface, E_Q(0⁻, λ), and diffuse attenuation coefficients, K_d(λ), were estimated using the methods of Cullen et al. (2012), using their provided worksheet for estimation of bio-optical data. Input parameters were 23° N latitude, 21 June noon, 0.1 mg Chl m⁻³ and 0.01 m⁻¹ CDOM (colored dissolved organic matter) absorption at 400 nm. Photosynthesis rates at each depth point, z, were then computed following a procedure similar to that described in Lehmann et al. (2004), with some modifications. The model photosynthetic response is a function of total PAR (E_PAR, W m⁻²). A factor is applied to underwater PAR to correct for spectral differences between model PAR and the filtered xenon irradiance used to measure photosynthesis. Irradiance-weighted chlorophyll-specific absorption for the photoinhibitron, a_PI (m² mg Chl⁻¹), was calculated by weighting the phytoplankton chlorophyll-specific spectral absorption coefficient (a_p(λ), m² mg Chl⁻¹) with the average photoinhibitron spectrum, as photon flux (Eq. 5). The calculation was based on photon (quantum) flux (E_Q) since photosynthesis is a quantum process. The wavelength resolution (Δλ) was 1 nm. A similar calculation was performed for the underwater profile to obtain the irradiance-weighted absorption of in situ irradiance, a_IS(z) (m² mg Chl⁻¹) (Eq. 6). Finally, a corrected PAR irradiance for the photosynthesis model was calculated (Eq. 7); for consistency with previous usage of the BWF/P-E model, the corrected PAR is in energy units. This correction adjusted E_PAR for the greater (or lesser) absorption of underwater irradiance compared to photoinhibitron irradiance; this applied to both light-limited photosynthesis at low irradiance and PAR inhibition at high irradiance (pigments mediate both processes). Next, profiles of total weighted irradiance for inhibition (E*_inh) were calculated (Eq. 8), and productivity at depth was obtained by evaluating Eq. (1) with the E_max ERC (Eq. 4).
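The depth-profile calculation described above (Eqs. 5–8 in words) can be sketched as follows. The spectra, attenuation coefficients and parameter values here are synthetic placeholders rather than the Cullen et al. (2012) bio-optical outputs, and the inhibition term uses the assumed E_max form from the earlier sketch, so this only illustrates the order of operations.

```python
import numpy as np

wl = np.arange(280, 701)                                   # 1 nm grid, 280-700 nm
par = wl >= 400                                            # PAR wavelengths
eps = np.where(wl < 400, 1e-3 * np.exp(-(wl - 280) / 40.0), 0.0)   # toy BWF epsilon(lambda)
a_p = 0.02 * np.exp(-((wl - 440.0) / 60.0) ** 2)           # toy pigment absorption a_p(lambda)
E_pi = np.where(par, 1.0, 0.5)                             # toy photoinhibitron spectrum

# Irradiance-weighted absorption for the photoinhibitron (Eq. 5; sums ~ integrals on 1 nm grid)
a_PI = (a_p[par] * E_pi[par]).sum() / E_pi[par].sum()

def photosynthesis_at_depth(E0, Kd, z, Ps=5.0, Es=20.0, Emax=0.5):
    Ez = E0 * np.exp(-Kd * z)                              # spectral irradiance at depth z
    a_IS = (a_p[par] * Ez[par]).sum() / Ez[par].sum()      # in situ weighted absorption (Eq. 6)
    E_par = Ez[par].sum() * (a_IS / a_PI)                  # pigment-corrected PAR (Eq. 7)
    E_inh = (eps * Ez).sum()                               # weighted inhibition exposure (Eq. 8)
    erc = 1/(1+E_inh) if E_inh <= Emax else (Emax/(1+Emax))/E_inh
    return Ps * (1.0 - np.exp(-E_par / Es)) * erc          # Eq. (1) with the E_max ERC

E0 = np.exp(-((wl - 500.0) / 150.0) ** 2)                  # toy surface spectrum
Kd = 0.02 + 0.3 * np.exp(-(wl - 300.0) / 60.0)             # toy diffuse attenuation
print([round(photosynthesis_at_depth(E0, Kd, z), 3) for z in (1, 10, 50, 150)])
```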
Photosynthesis
A representative set of photosynthesis measurements obtained using the photoinhibitron as configured for this study is shown in Fig. 2. The rate of photosynthesis over a 1 h incubation is plotted versus the PAR irradiance, each panel showing results obtained with one of the 12 filter combinations used in the experiment.Each filter combination controls the minimum wavelength of exposure ("cutoff") and the shape of the spectrum near the cutoff.The position of each cell within the projected beam of the xenon arc lamp also affects the spectral shape as well as the total irradiance.Further variation in total irradiance was configured using neutral density screens.Since both spectral composition and irradiance can vary even within one spectral group, the observations do not necessarily follow a smooth response vs. exposure trend (e.g., circled points in Fig. 2).However, the variation in spectral shape is taken into account in the fitting of the BWF given the treatment irradiance spectrum separately measured for each cell.To illustrate this, Fig. 2 also shows the corresponding predicted values from the fitted BWF E max /P-E model (x's, overall R 2 = 0.96 and root mean square error (RMSE) = 0.53 mg C mg Chl −1 h −1 ), which closely follow the observed variation both between treatments and within each treatment group.
Selection of exposure response curve
We tested how well each of three possible ERCs fit the response of Synechococcus to UV + PAR exposure. All three models were fit for each photoinhibitron experiment. An example set of photoinhibitron results using the responses of a WH8102 culture (grown at HL and 26 °C) is shown in Fig. 3, where photosynthesis (P^B) is plotted versus weighted irradiance (E*_inh, dimensionless). Fits using all of the ERCs reproduced the general variation within the data set, with an R² > 0.90 (RMSE < 0.7 mg C mg Chl⁻¹ h⁻¹). However, there were small but significant differences in the performance of the ERCs. When a BWF was estimated with the E model, there was good agreement between observed (points) and predicted (line) biomass-specific photosynthesis at low exposure, but the best-fit model systematically overestimated photosynthesis at high exposure (Fig. 3 - E model). In comparison, the predicted photosynthesis from the BWF/P-E using the T model (Fig. 3 - T model) was closer than the E model predictions at high exposure but systematically underestimated photosynthesis at low exposure. This motivated the application of the hybrid model (E_max) that uses an E model response at low exposure and a T model response at high exposure (see model description in Materials and methods). The E_max model gave a better overall fit to the data than either the E or T model, although response in all cases was still consistently overestimated when inhibition was > 80 %
(Fig. 3 -E max model).The average R 2 was 0.96 and 0.95 and average RMSE was 0.37 and 0.40 (mg C mg Chl −1 h −1 ) for WH7803 and WH8102 cultures, respectively.To assess whether the improved fit from the E max model was sufficient to justify an additional parameter, we computed the AIC for each fit.The results confirmed that the additional parameter was justified by the improved predictive accuracy of the E max model.The AIC takes into account both the prediction performance and number of model parameters; the best model is the one providing the lowest AIC (Andersen et al., 2005).Consistent with the examples in Fig. 3, the AIC was lower for the T vs. E model fit, and for the E max vs. the T model.Although the improvement varied from experiment to experiment (averages presented in Table 3), the AIC consistently decreased for all experiments with WH8102 (n = 15) and WH7803 (n = 16), except for one WH8102 experiment where the T model and E max model AIC were essentially the same (AIC was 0.6 higher for E max ).
Spectral range of the biological weighting function
In most experiments previously conducted with the photoinhibitron, the spectral treatments were defined with 8 different long-pass cutoff filters, and the shortest wavelength included in the exposure was in the 281-290 nm range (e.g., Neale and Fritz, 2001).Generally the irradiance at wavelengths < 300 nm was very small, though this is in line with solar irradiance.Moreover, because treatments with shortest wavelength cutoffs also have very high levels of UVA that usually cause considerable inhibition, the response to the additional UVB is relatively small.Thus, these treatments do not have much leverage in the overall fit.In contrast, for this study we were particularly interested in having more statistical power to estimate coefficients at the short wavelengths to maximize our ability to assess the impact of a drastic decrease in ozone as would occur with a gamma ray burst.To increase statistical power in the short UV-B, the number of spectral treatments was increased to 12 and the minimum treatment wavelength extended to 265 nm (cf.Table 1).To assess how this change in experimental design affected the estimation of the BWF, we performed fits both with the full data set and a reduced-size data set that had a similar spectral range as previous fits with eight spectral treatments.The reduced data set omitted the two treatments with spectral irradiance below 282 nm (filter combinations A and B in Table 1), with the total number of photosynthesis observations reduced from 120 to 100.The BWFs estimated using the principal components method were very similar (identical within the standard error of the estimates) for the wavelength range 300 to 400 nm (Fig. 4).However, below 300 nm the BWF estimates diverged.When the BWF was fit with the full data set, the general log-linear slope of the BWF in the UV-B above 300 nm continues approximately the same below 300 nm.In contrast, in the reduced data set, the loglinear slope steepened below 300 nm.
This suggests that the steeper slope in the reduced data set is an "edge" artifact related to the application of the PCA estimation method. Near the lower wavelength limit, the mean and standard deviation (SD) of treatment (photoinhibitron) irradiance approach zero. Since the reciprocal SD is a scaling factor in the BWF calculation (Cullen and Neale, 1997), the decline in SD at the short-wavelength edge forces high weights irrespective of their actual effect. The overestimation is a structural bias (as opposed to being due to experimental error), since the estimated standard error (which is conditioned on having identified the correct model) is small. The statistical leverage of the data for which this bias results in a loss of fit must be small, as essentially the same R² was obtained for the model fit to the n = 100 data set with either BWF (for the BWFs in Fig. 4, the difference in R² is ca. 0.002; results not shown). Similarly, the artifact has a nearly negligible effect on model predictions under solar exposures (in the absence of a gamma ray burst), since wavelengths < 300 nm make a very small contribution to E*_inh (cf. Sobrino et al., 2009). In the BWF fit to the full data set, an analogous steepening of the BWF can be observed for wavelengths < 280 nm. This is likely caused by the same edge artifact in the PCA method, shifted 20 nm to shorter wavelengths by the inclusion of treatments with the shorter cutoffs. Since wavelengths shorter than 280 nm are extremely unlikely to reach the surface of the ocean in any significant amount, even in the presence of a gamma ray burst, only weighting coefficients for 280 nm and above are presented herein.
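The edge effect can be illustrated numerically. In the principal-components approach, the spectral weights are rescaled by the reciprocal of the across-treatment standard deviation of irradiance at each wavelength (Cullen and Neale, 1997); where the long-pass filters leave almost no energy, that standard deviation collapses toward zero and the rescaling inflates the weights. The cutoff wavelengths and idealized spectra below are invented purely for illustration and do not reproduce the actual photoinhibitron treatments of Table 1.

```python
import numpy as np

wavelengths = np.arange(265, 401, 5)                                     # nm
cutoffs = [265, 275, 282, 290, 300, 310, 320, 335, 350, 365, 380, 395]   # hypothetical long-pass cutoffs

# Idealized treatment spectra: zero below each cutoff, rising smoothly above it.
treatments = np.array([np.clip((wavelengths - c) / 30.0, 0.0, None) for c in cutoffs])

sd = treatments.std(axis=0)                                              # across-treatment SD at each wavelength
scale = np.divide(1.0, sd, out=np.full_like(sd, np.inf), where=sd > 0)   # reciprocal-SD scaling factor

for wl, s, f in zip(wavelengths, sd, scale):
    if wl <= 300:
        print(f"{wl} nm: SD = {s:6.3f}   1/SD = {f:8.2f}")
# Toward the shortest treatment wavelengths the SD shrinks and 1/SD blows up,
# which is the structural bias that inflates the fitted weights at the spectral edge.
```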
Effect of pre-exposure
We tested the effect of recent light history on the photosynthetic response to UV by conducting incubations with cultures that had been pretreated by 1 h of exposure to moderately high PAR (400 µmol m−2 s−1) and UV from a xenon source filtered to exclude wavelengths < 350 nm. These results were compared to those obtained for cultures that were transferred directly from the growth chamber to the photoinhibitron. The latter experiments were performed first and were used to choose pretreatment conditions that caused minimal or low inhibition (E*_inh < E_max). Samples without pre-exposure (Fig. 5, left panel) exhibited relatively low photosynthetic rates at high UV, below even the best-fit E_max predictions, whereas the pretreated samples (Fig. 5, right panel) had increased photosynthesis and showed a better fit to the E_max model. In addition, the fitted value of E_max increased from 0.32 ± 0.16 without pre-exposure to 0.73 ± 0.24 with pre-exposure, consistent with an increase in repair capacity. This implies that pretreatment increased resistance to UV exposure, an interpretation that is also consistent with the kinetic data for WH8102 during moderate UV + PAR exposure reported by Fragoso et al. (2013). They observed that the effective quantum yield of photosystem II (PSII) during UV + PAR exposure dropped to an initial low steady state followed by a "rebound" in yield over the next 20 min. Kinetic analysis showed that the rebound was consistent with an increase in repair rate.
Model fits for different growth conditions
Maximum rates of uninhibited photosynthesis (P^B_s) and the saturation irradiance parameter (E_s) were higher for cultures grown at the higher temperature (Table 4). There were no consistent differences in these parameters between the two irradiance conditions, except for WH8102 at 26 °C: E_s was higher for cultures grown under HL than under ML, and P^B_s was also higher, but the difference was not significant within the variability of the measurements. The parameter for inhibition by E_PAR, ε_PAR, also tended to be lower for cultures grown at higher irradiance, but not significantly so.
Average BWFs ± SE (n ≥ 3) for WH8102 and WH7803 cultures grown at two irradiance levels and two temperatures are shown in Fig. 6. The two strains showed different patterns of response to changes in growth irradiance and temperature. For WH8102, the variation in the average BWF with growth irradiance or temperature was small relative to the standard error of the mean. Differences bordered on significance in the UV-A, with lower sensitivity at the higher irradiance and temperature (Fig. 6, upper panels). For WH7803, on the other hand, growth irradiance had a strong effect on the BWF at 20 °C and a weak effect at 26 °C, and, on average, weights were much lower for cultures grown at the higher temperature (Fig. 6, lower panels). The variation in the E_max parameter complemented the variation in the weights. For WH8102, E_max was greater at the higher temperature for ML and was slightly increased for HL. For WH7803, in contrast, E_max was lower at the higher temperature at ML, with the same, but not significant, trend for HL-grown cultures (Table 4).
Prediction of in situ profiles of photosynthesis
Both the spectral weights (ε(λ)) and E_max affect the overall sensitivity to inhibition, so comparisons of responses between strains or between growth conditions need to take both parameters into account. To provide a context for assessing the combined impact of variation in the UV response and P-E parameters, we performed trial calculations of the depth profile of photosynthesis given the fitted BWF_Emax/P-E obtained for each strain and growth condition, and blue-water bio-optics as described in Sect. 2.3 (Fig. 7). The calculations used average phytoplankton spectral absorption coefficients for each culture condition; these results are not shown as they are similar to those in the literature (e.g., Six et al., 2004). Since the in situ spectrum is efficiently absorbed by Synechococcus pigments, spectrally corrected PAR penetrates deeper than 100 m, and the depth at which E_PAR(z) decreases to 1 % of E_PAR(0) is > 120 m. Thus, the model predicts significant rates of production at depths of 100-150 m. It should be kept in mind that these depths are normally below the thermocline and that no attempt has been made to correct for changes in photosynthetic response related to the depth-dependent decrease in temperature and photoacclimation to lower growth irradiance. Nevertheless, the profile calculations are useful for comparing the overall responses implicit in the fitted models.
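The profile calculation itself is a straightforward numerical exercise once fitted parameters are available. The sketch below outlines the sequence under simplifying assumptions: exponential attenuation of each waveband with depth, a saturating P-E term driven by PAR, and an inhibition term driven by the weighted exposure E*_inh(z). The attenuation coefficients, BWF weights, P-E parameters and the specific inhibition formula are placeholders for illustration, not the fitted values of Table 4 or the exact BWF_Emax/P-E equations.

```python
import numpy as np

# Wavebands (nm), hypothetical diffuse attenuation coefficients (m^-1) and surface irradiance.
wl = np.array([300, 320, 340, 360, 380, 400, 450, 500, 550, 600, 650, 700])
kd = np.array([0.50, 0.30, 0.20, 0.15, 0.10, 0.06, 0.02, 0.03, 0.06, 0.20, 0.35, 0.50])
e0 = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 1.00, 1.20, 1.20, 1.10, 1.00, 0.90, 0.80])

eps = np.where(wl <= 400, 2.0 * np.exp(-0.05 * (wl - 300)), 0.0)   # placeholder UV weights
p_s, e_s, e_max = 5.0, 50.0, 1.0                                    # placeholder P-E and inhibition parameters

depths = np.arange(0.0, 151.0, 1.0)                                 # m
prof_pot, prof_full = [], []
for z in depths:
    ez = e0 * np.exp(-kd * z)                     # spectral irradiance at depth z
    par = ez[wl > 400].sum()                      # crude PAR proxy
    estar = np.sum(eps * ez)                      # weighted inhibiting exposure E*_inh(z)
    p_pot = p_s * (1.0 - np.exp(-par / e_s))      # uninhibited, light-saturating production
    if estar <= e_max:                            # placeholder hybrid-style inhibition term
        inhib = 1.0 / (1.0 + estar)
    else:
        inhib = (1.0 / (1.0 + e_max)) * (e_max / estar)
    prof_pot.append(p_pot)
    prof_full.append(p_pot * inhib)

relative_performance = sum(prof_full) / sum(prof_pot)   # integrated / potential (uniform 1 m steps)
print(f"water-column performance relative to potential: {relative_performance:.2f}")
```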
Average biomass-specific productivity varied by a factor of three among the different parameter sets. The magnitude of the variation is mainly driven by the similar range of variation of P^B_s with growth temperature. In addition, the overall photosynthetic performance (productivity relative to the uninhibited potential rate) was better at the higher temperature (Table 4, last column). The enhancement was most pronounced for ML-grown cultures. Another manifestation of the greater effect of inhibition on cultures grown at the lower temperature is that the depth of peak productivity was consistently deeper; i.e., the effect of near-surface inhibition extended further down into the water column. Generally, predicted responses were more variable for WH7803 than for WH8102.
Although the effect of near-surface exposure differed among the parameter sets, in every case UV + PAR inhibition was predicted to depress a major fraction of water-column productivity. Integrated production (to 150 m) was 71 % of the potential for the most sensitive case (WH7803, 20 °C, ML) and 83 % of the potential for the least sensitive case (WH7803, 26 °C, HL), corresponding to an inhibition (1 − relative performance) of 17-29 %. The range was smaller for WH8102. Predicted inhibition under surface exposure (1 m) varied between 91 and 78 % (average response for the most and least sensitive case, respectively; cf. Fig. 8). These calculations were performed using spectral irradiance modeled with an ozone column of 300 Dobson units (DU), climatological for the Northern Hemisphere midlatitude summer (Lamsal et al., 2004). Decreasing the ozone column to 200 DU, in the range of what would occur after a gamma ray burst (Thomas, 2009), resulted in a small decrease in surface rates, between 0.7 and 1.5 % relative to the uninhibited rate. Because the rates at the surface are already low, this is a 6.0 to 7.6 % decrease below the surface productivity predicted under the normal ozone column (300 DU). Nevertheless, these effects are confined to the near-surface zone, so that integrated productivity is predicted to be at most 0.3 % lower under the lower ozone column.
To put these responses in the context of previous studies of UV + PAR inhibition effects on phytoplankton photosynthesis, we predicted productivity under the same set of trial conditions using a BWF_Emax/P-E model fitted to data for the common coastal/estuarine species Thalassiosira pseudonana. Calculations were based on experiments using cultures (strain 3H) grown under conditions similar to the 20 °C, ML conditions used here (data from Sobrino et al., 2009). These observations were refit to the BWF_Emax/P-E model for the purpose of comparison (results not shown). Using this fitted model, we predicted for both T. pseudonana and Synechococcus the profile of (1) potential productivity (no inhibition), (2) productivity including only PAR inhibition and (3) productivity predicted by the full model with both UV and PAR inhibition (Fig. 8). For Synechococcus we show the results for 20 °C, ML cultures averaged over all experiments with WH8102 and WH7803. Under these conditions, the diatom and Synechococcus have similar Chl-specific productivity in the absence of inhibition. However, both PAR and UV + PAR inhibition are appreciably more severe for Synechococcus. This is evident from both the lower rates at the surface and the slower increase of productivity with depth for Synechococcus. The difference in response to UV contributes the most to the contrast. Performance over the water column for full spectral exposure was 71 % of potential production for Synechococcus compared to 83 % for T. pseudonana.
Discussion
Photosynthesis by both of the studied Synechococcus strains was strongly inhibited by UV exposure. These strains were both originally isolated from surface waters but are considered representative of assemblages from different oceanic regions (Six et al., 2007a). Different clades of Synechococcus strains have been defined based on genomic sequence (Scanlan et al., 2009). WH8102 has been classified as a member of clade III, which is most common in oligotrophic regions. It has a characteristically high ratio of phycourobilin to phycoerythrobilin (PUB : PEB), which maximizes light absorption in the blue (Scanlan, 2003). WH7803 is a member of clade V, a more generalist clade, which has a low PUB : PEB ratio enabling light absorption over wavelengths characteristic of both oligotrophic and coastal waters.
The response of both strains to high PAR and UV depended on the growth conditions. Sensitivity to UV, e.g., as measured by the UV-associated decrease in model-predicted in situ production, was reduced when cultures were grown at the higher temperature. Higher growth irradiance was also associated with lower sensitivity. A similar dependence of sensitivity to UV inhibition of photosynthesis on growth conditions has been reported in other laboratory studies of phytoplankton (Litchman and Neale, 2005; Sobrino and Neale, 2007). The sensitivity of PSII quantum yield to UV exposure in WH7803 was also lower in high-light-grown than in medium- or low-light-grown cultures (Garczarek et al., 2008). Changes in sensitivity were reflected in differences in the fitted BWF, E_max and inhibition by PAR (ε_PAR). Interestingly, while overall sensitivity showed similar trends for both strains, the pattern of changes in the model parameters differed between the strains. For WH8102, the lower sensitivity was mainly caused by an increase in E_max. There was some shift in the BWF towards smaller weights, but this shift was small relative to the inherent variability between replicate cultures (Fig. 6). In contrast, for WH7803, E_max was actually somewhat lower for high-temperature/high-light cultures. Unlike in WH8102, BWF coefficients decreased significantly with higher growth irradiance in 20 °C cultures, and the average BWF for higher-temperature cultures was several-fold lower than that of lower-temperature cultures (Fig. 6). These results suggest that WH8102 and WH7803, which have different pigment configurations, also differ in how photoacclimation influences the response to inhibitory exposure. Reducing sensitivity to UV can be accomplished by increasing photoprotection (thus decreasing damage) and/or increasing repair (Banaszak, 2003; Neale, 2000). By definition, the weighting coefficients in the BWF_Emax/P-E model represent a ratio of the rate constants of damage and repair. A rate constant is a measure of the likelihood that a given UV-susceptible target is damaged and/or reactivated per unit time. E_max, on the other hand, is a measure of repair capacity, related to the proportion of target sites that are damaged when the repair rate reaches its upper limit. WH8102 showed large changes in E_max, suggesting that part of its photoadaptation strategy is to increase repair capacity. This apparent increase in repair capacity over growth timescales is interesting given that WH8102 also appears to increase repair over shorter timescales (minutes to an hour) in response to acute UV exposure (Fragoso et al., 2013). Fragoso et al. (2013) suggest that this could be related to increased expression of a more UV-resistant isoform of the PSII reaction center core protein, D1. Other than the change in E_max between growth conditions for WH8102, there was little change in the BWF, and overall a narrower acclimation range than estimated for WH7803 (e.g., based on the extent of differences in relative profile performance between growth conditions). The more constrained acclimation behavior of WH8102 would be consistent with its overall genomic characterization as having fewer regulatory genes (e.g., kinases) than other cyanobacteria, thought to be related to its association with more constant oligotrophic environments (Palenik et al., 2003).
For WH7803, BWF coefficients were variable between growth conditions, which could occur through changes in the rate constants of damage (e.g., increased photoprotection), in the rate constants of repair, or in both. Unlike for WH8102, E_max decreased with HL or higher-temperature growth, which may seem inconsistent with the overall decline in sensitivity. However, E_max more precisely represents the point of transition between a scaled (= rate constant × number of damaged sites) and a constant rate of repair (Fig. 1, E_max model). The decrease in E_max (e.g., between 20 and 26 °C cultures) could thus be due to a greater increase in the rate constant of repair than in repair capacity. This is equivalent to a steeper initial slope of the function in Fig. 1, with a corresponding lowering of the damaged fraction needed to reach a fixed repair capacity (the transition point moves closer to the origin). This suggests that WH7803 has (relative to WH8102) a more diverse repertoire of acclimation to irradiance and temperature, manifested in its variable sensitivity to inhibition by UV irradiance. This is consistent with the distribution of the clade V group, to which WH7803 belongs, over a broad range of oligotrophic and coastal environments (Scanlan et al., 2009).
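One way to make this interpretation concrete is a schematic damage-repair balance. The expression below is only an illustrative formalization consistent with the verbal description above (rate constants for damage and repair, and an upper limit on the absolute repair rate); it is not the derivation of the BWF_Emax/P-E model itself, and the symbols D, k_dam, k_rep and R_max are introduced here solely for this sketch.

```latex
% D(t): fraction of damaged target sites; k_dam: damage rate constant (scaled by exposure);
% k_rep: repair rate constant; R_max: repair capacity (upper limit on the absolute repair rate).
\frac{dD}{dt} \;=\; k_{\mathrm{dam}}\, E^{*}_{\mathrm{inh}}\,(1 - D)\;-\;\min\!\bigl(k_{\mathrm{rep}} D,\; R_{\max}\bigr)
% At low exposure the min() term equals k_rep * D, so repair scales with damage (E-model behaviour);
% once k_rep * D reaches R_max, repair is constant (T-model behaviour). The transition occurs at
% D* = R_max / k_rep, so a larger repair rate constant moves the transition closer to the origin,
% which is how a lower fitted E_max can accompany an overall decrease in sensitivity.
```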
Overall, the E_max model clearly provided the best prediction of inhibition of photosynthesis in both studied strains of Synechococcus. Together with previous studies using the "T" model (Sobrino et al., 2005; Sobrino and Neale, 2007; Sobrino et al., 2009), these results suggest that there is an upper limit to the repair rate in many phytoplankton during exposure to high UV + PAR. The physiological mechanism(s) limiting absolute repair rates is (are) presently not known, and more work is needed. The dynamics of the photosynthetic complexes involved in photoinhibition and recovery, particularly PSII, has received considerable study (recent reviews: Vass, 2012, and Tyystjärvi, 2013) and has led to the development of models of PSII damage and repair (Campbell and Tyystjärvi, 2012). However, these models presently assume a fixed rate constant for repair (equivalent to the E model). Nevertheless, it is also recognized that repair, e.g., of PSII, is a multistep process involving degradation of damaged components, their resynthesis and reintegration into a reactivated complex (recent review: Takahashi and Murata, 2008). The rate of any of these steps, or a step in the repair of another complex such as RUBISCO, could reach a maximum under high exposure and set the upper limit of the repair rate. Furthermore, there is increasing evidence that repair itself can be inhibited under irradiance stress (Takahashi and Murata, 2008). Such inhibition could account for the tendency of even the E_max model to overestimate photosynthesis at very high exposure (cf. Fig. 3). Our results suggest that the primary productivity of open-ocean Synechococcus will be significantly depressed in the near-surface, "photoactive" zone, with most of the effect induced by the UV portion of the spectrum. Presently, there are few field data to compare with the predictions of the BWF_Emax/P-E model. Early studies of UV inhibition of photosynthesis in open-ocean assemblages (e.g., Behrenfeld et al., 1993; Smith et al., 1980) predated the routine use of flow-cytometric techniques to quantify the contribution of Synechococcus. Nevertheless, the range of relative inhibition reported in those studies encompasses the inhibition obtained in the example calculations presented here. More recently, rates of photosynthesis have been reported for in situ incubations of a Synechococcus-dominated assemblage in the Coral Sea (22° S), comparing containers transmitting the full spectrum, UV-A + PAR and PAR only (Conan et al., 2008). Again, the results of these near-surface, 6 h incubations are broadly consistent with our example calculations, with the full-spectrum rate at 1 m being 35 % of the PAR-only rate and the UV-A + PAR rate being 53 % of the PAR-only rate. Currently, a more comprehensive effort is underway to estimate productivity based on the BWF_Emax/P-E model over a range of latitudes and times, using representative oceanic observations from the Pacific Ocean, to make more specific comparisons, including a more thorough evaluation of possible gamma ray burst effects; these results will be reported in a future contribution. The UV responses of Prochlorococcus, the typical co-dominant of Synechococcus in the picoplankton, have also been studied using laboratory cultures and will be presented in a separate report.
Our study is the first report of the spectral dependence of inhibition of photosynthesis for marine prokaryotic (pico)phytoplankton. Overall, the strains used in this study were more sensitive to inhibition than a eukaryotic nanoplankter, Thalassiosira pseudonana, grown under similar conditions. The sensitivity is also high in the context of the large-scale analysis of Cullen et al. (2012), who examined the relative decrease in integrated productivity based on selected BWFs (all for eukaryotic nano/microplankton) under a wide range of conditions. Their maximum inhibition was 24 %, which was exceeded by the responses of the ML 20 °C cultures (28 % average inhibition of integrated productivity under the test conditions). The higher sensitivity of Synechococcus occurred despite the demonstrated existence of mechanisms protecting against, and recovering from, high light exposure (Bailey and Grossman, 2008). Nevertheless, the relatively higher sensitivity of Synechococcus vs. T. pseudonana is consistent with a general trend of greater resistance to light stress and faster photoacclimation for eukaryotic vs. prokaryotic phytoplankton grown under the same conditions (Kulk et al., 2011). While UV sensitivity has been extensively studied for eukaryotic phytoplankton, further studies are needed to confirm the generality of high sensitivity to UV for picocyanobacteria, especially its spectral and temporal dependence.
Figure 2. Representative set of photosynthesis measurements from a Synechococcus photoinhibitron experiment; shown are results from the exposure of a HL 26 °C WH8102 culture plotted vs. the PAR exposure (W m−2) of each treatment. Observed rates of photosynthesis (open circles) and rates predicted by the BWF_Emax/P-E model (x's). Panel titles give the 1 and 50 % wavelength cutoffs of the spectral treatment shown in each panel, color coded on a gradient from short-wavelength UV-B (magenta) to long-wavelength UV-A (blue); further details are listed by panel letter in Table 1. Due to heterogeneity in the Xe lamp emission, spectral composition varies somewhat even within a spectral treatment. This causes some scatter in the P-E relationship (e.g., circled points) but does not affect the generally good agreement between observed and fitted values. The root mean square error (RMSE) for the fit is 0.53 mg C (mg Chl)−1 h−1.
Figure 3. Observed (points) and fitted (lines) biomass-specific photosynthesis (mg C (mg Chl)−1 h−1) as a function of UV + PAR exposure weighted by a spectral biological weighting function for inhibition, E*_inh (dimensionless). The panels illustrate the observed vs. fitted results for three BWF/P-E models: the E (left), T (center) and E_max (right) models. RMSE is in mg C (mg Chl)−1 h−1. The fitted value of E_max is shown by the vertical line in the right panel. Observations are from an exposure in the photoinhibitron of a HL 26 °C culture of WH8102 (same as Fig. 2).
Figure 4. Fitted biological weighting functions (± standard error) for UV inhibition of photosynthesis in Synechococcus WH8102 (HL 26 °C) and WH7803 (ML 20 °C), comparing results obtained using all the data from each experiment ("full", n = 120, shortest treatment wavelength 265 nm) and a reduced data set omitting the two treatments with spectral irradiance shorter than 282 nm ("part", n = 100).
Figure 6. Average (± standard error) biological weighting functions for WH8102 (upper three panels) and WH7803 (lower three panels) grown at either "medium" irradiance (ML, 77 µmol m−2 s−1 PAR) or "high" irradiance (HL, 174 µmol m−2 s−1 PAR) and growth temperatures of 20 or 26 °C. Weights were estimated for 265-400 nm, but the weights for wavelengths below 280 nm are not shown since coefficients in the 265-280 nm range may be influenced by the edge artifact illustrated in Fig. 4.
Figure 7.
Figure 8. Depth profiles of productivity as predicted using the BWF_Emax/P-E model evaluated with the P-E equation and no inhibition (solid line), with only inhibition by PAR (ε_PAR) included (long dashes), or with full UV + PAR inhibition (short dashes). The curves in (A) show the average response of Synechococcus (ML 20 °C cultures, WH8102 and WH7803 combined); (B) shows the predicted response based on a fit of the BWF_Emax/P-E model to the photoinhibitron data for the diatom Thalassiosira pseudonana (strain 3H) grown at comparable growth irradiance and temperature (from Sobrino et al., 2009).
Table 1. Filter configuration for the 12 spectral treatment groups of the photoinhibitron as used in this study.
Table 3. Difference in AICs calculated for the E, T and E_max model fits to experimental data on the response of Synechococcus photosynthesis to UV + PAR exposure. The comparison is limited to experiments conducted after a 1 h pre-exposure of the culture sample to moderate UV + PAR (see Sect. 2).
Figure 5. […] °C), for which, prior to a standard 1 h incubation in the photoinhibitron, one was maintained under growth conditions and the other was exposed to moderate UV + PAR (no pre-exposure or pre-exposed, respectively; see details in Materials and methods). The solid line shows the fitted values from the BWF_Emax/P-E model.
Table 4. Fitted parameters for the E_max model, mean ± standard errors for n ≥ 3 experiments under each condition. Integral productivity predicted for the test profile conditions (Fig. 7); ratio of the prediction for full UV + PAR exposure to potential productivity (inhibition term excluded). * Sobrino et al. (2009).
The Use of Allelochemicals of Aquatic Macrophytes to Suppress the Development of Cyanobacterial "Blooms"
Harmful algal "blooms", or HABs, are a hazardous natural phenomenon that often occurs under the influence of anthropogenic factors, for example, during the anthropogenic eutrophication of water bodies. An increase in the frequency and duration of cyanobacterial "blooms" carries a number of serious threats, including local and global degradation of water resources and exposure to cyanotoxins. Various methods of fighting cyanobacterial "blooms" exist: physical, chemical, the use of bacterial preparations, etc. However, these methods are not effective enough and, most importantly, do not solve the problem of suppressing HABs in water bodies without damage to other components of the aquatic ecosystem. Allelopathy, the stimulatory or inhibitory biochemical effect of one plant on another, including microorganisms, is a natural phenomenon that can resolve this problem. Allelochemicals of macrophytes can be considered natural algicides and can become the basis of a nature-like convergent technology to suppress the development of planktonic cyanobacteria and prevent HABs in water bodies. In our work, we used allelochemicals of aquatic macrophytes to create a combined algicide of a new generation for suppressing the development of cyanobacteria. The effectiveness of suppressing cyanobacterial "blooms" is demonstrated with field experiments using mesocosms and natural phytoplankton.
Introduction
Harmful algal "blooms", or HABs, are a hazardous natural phenomenon that often occurs under the influence of anthropogenic factors, for example, during the anthropogenic eutrophication of water bodies. An increase in the frequency and duration of cyanobacterial "blooms" carries many serious threats, including local and global degradation of water resources and the impact of cyanotoxins [1][2][3]. This problem is especially relevant and acute for the millions of small reservoirs widely used for various types of water consumption: fisheries and aquaculture, water supply for various industries (including agriculture), drinking and domestic water supply, and recreation, including sporting events. HABs occur when algae or cyanobacteria (most often the latter) develop beyond measure and produce harmful effects on other hydrobionts, fish, aquatic and terrestrial animals, and birds, as well as people [4,5]. HABs disrupt the esthetics of water bodies and render the water unsuitable for various kinds of water use. Economic damage due to HABs can run to millions of dollars [6,7].
Widespread HABs are a phenomenon to which special attention should be drawn, since such "blooms" pose a number of serious threats, including local and global degradation of water resources and exposure to cyanotoxins [8][9][10][11][12][13][14].
Cyanobacterial "blooms" of water bodies are officially recognized as a global problem of modern ecology. Seasonal intense cyanobacterial "blooms" of reservoirs impart additional undesirable properties to natural and drinking water, such as a specific smell and taste and the presence of toxins (microcystins). In some regions, the importance of this problem has been increasing recently [15]. The Working Group on the Evaluation of Carcinogenic Risks to Humans has listed cyanotoxins as carcinogenic substances harmful to humans [16].
The introduction into the practice of water body management of biotechnological methods with maximum efficiency is one of the tasks of modern science. These include, first of all, the so-called convergent nature-like technologies, i.e., technologies based on natural mechanisms that produce a particular effect. It is precisely such technologies that may ensure the sustainable development of modern countries [17][18][19].
Such technologies, aimed at managing the development of plankton communities in general and phytoplankton communities in particular, may be based on the phenomenon of allelopathy. This natural phenomenon can be very useful for effectively preventing and stopping the development of cyanobacterial "blooms" in water bodies [20][21][22]. Many existing methods of combating cyanobacteria [23] do not effectively solve the problem of "blooms" of water bodies without damage to other components of the ecosystem [3]. Usually, they are associated with serious adverse side effects on aquatic organisms and ecological systems [24].
At the same time, metabolic allelopathic control of HABs in water bodies undergoing eutrophication is an effective and innovative solution to this problem. This approach preserves and restores water quality in water bodies and makes them suitable for multifunctional use, and natural allelochemicals (metabolites of macrophytes and their synthetic analogs) can be an effective alternative to existing algicides [20,22,25].
In reservoirs where macrophytes are well developed (as a rule, at least 30% projective cover of the water area), water "bloom" is almost never observed. These circumstances are the causal basis for the development of nature-like technologies for the prevention and suppression of HABs with the help of new-generation algicides based on allelochemical substances characteristic of aquatic macrophytes.
It has become apparent that metabolites-allelochemicals may function in the chemical suppression of planktonic cyanobacteria in aquatic ecosystems. However, data from field experiments on the effect of aquatic macrophyte allelochemicals on cyanobacteria, which are necessary for the development of nature-like technologies for preventing and suppressing cyanobacterial "blooms", are few, and this therefore remains one of the "hottest" areas of research. Utilization of allelochemicals from aquatic macrophytes, or of their synthetic analogs, to inhibit cyanobacterial overgrowth is an environment-friendly technology for suppressing HABs. Some reviews focus on the practical application of allelochemicals in agriculture [26,27], but the use of nature-like allelopathic technology to manage aquatic ecosystems is still poorly developed.
In the present study, we aimed to summarize the information on the suppression of cyanobacteria by macrophyte allelochemicals and the possibility of developing an algicide of a new generation, as a convergent nature-like technology based on allelopathy for preventing and stopping the development of HABs in water bodies.
Suppression of the development of cyanobacteria by aquatic macrophytes
Allelopathy as a natural phenomenon was recorded long ago, as early as the 3rd century BC in ancient Chinese literature [28]. The term "allelopathy" was coined comparatively recently, in 1937, by the Austrian plant physiologist Hans Molisch [29], who can be regarded as the father of allelopathy [30]. In general, allelopathy can be considered an area of science that investigates inhibitory or stimulatory biochemical interactions between two plant species or between plants and microorganisms.
The recent history of the study of low-molecular-weight organic compounds, i.e., small molecules (less than 900 amu) that constitute the low-molecular-weight metabolic profiles of organisms, should apparently begin with the discovery of the inhibitory effect of volatile plant excreta on microorganisms by Boris Petrovich Tokin during experimental work in 1928-1930 [31]. The research resulted in a number of publications, in one of which ("Bactericides of plant origin (phytoncides)") [32] the term "phytoncides" appeared. Subsequently, the doctrine of phytoncides was developed further, which was reflected in the publication of several monographs. The history of research on the phytoncides of aquatic and coastal plants began in the 1940s with the work of Faiva Abramovich Gurevich [33], a student of B.P. Tokin. These studies culminated in 1973 with the defense of a doctoral dissertation, "Phytoncides of aquatic and coastal plants, their role in biocenoses" [34]. In particular, it was F.A. Gurevich who showed that the phytoncidal activity of aquatic plants is closely related to the macrophyte species and the peculiarities of its development. He also showed that phytoncides are a very significant factor in the distribution of hydrobionts, including invertebrates, in a water body.
At present, macrophyte and algal allelopathy receives much less attention than allelopathy in terrestrial ecosystems. Macrophytes and cyanobacteria are known to have an antagonistic relationship in various natural and experimental aquatic ecosystems [25,35,36].
It is a recognized fact that phytoplankton is poorly developed in macrophyte-dominated lakes. Even taking into account the view that this is due to factors such as competition for nutrients and shading, in the overwhelming majority of cases the main factor suppressing phytoplankton development is undoubtedly allelopathic suppression [37]. Indeed, competition for nutrients can hardly be recognized as the decisive factor in the outcome of the struggle between macrophytes and cyanobacteria, particularly considering that most aquatic macrophytes are rooted and usually obtain the main part of the nutrients they need from the bottom sediments, which are characterized by high nutrient concentrations [38].
A well-known phenomenon is that shallow lakes can change their trophic status and the type of lake ecosystem, being either a clear water body with well-developed aquatic vegetation or a water body with low transparency, high turbidity, and intensive phytoplankton (mainly cyanobacterial) development. In other words, they can shift from one state to the other [36,[39][40][41][42][43]. As this takes place, the mutual inhibitory allelopathic activities of macrophytes and phytoplankton may lead to the dominance of either macrophytes or phytoplankton [44].
We observed a similar effect in a floodplain lake with a changing trophic state in the Volga-Akhtuba interfluve, where cyanobacteria and macrophytes dominated the same water body in different years [36]. Some evidence exists [45][46][47][48] that allelopathy is a factor affecting the development of phytoplankton (including cyanobacteria) in shallow lakes at a projective cover of macrophytes of 20 to 100%.
The importance of allelopathy as a powerful regulatory mechanism has motivated many studies of the inhibitory (and sometimes stimulating) allelopathic effect of macrophytes on cyanobacteria and algae in aquatic ecosystems [49][50][51][52][53][54][55][56][57][58]. More than 60 macrophyte species (67) are known to exhibit allelopathic activity against cyanobacteria. They are presented in Table 1.
Based on the principle of allelopathic action, it is possible to prevent or mitigate the massive development of Cyanobacteria (blue-green algae) that leads to HABs in water bodies. The implementation of this research direction promises huge benefits, since it would solve the problem of "blooms" of water bodies without negative consequences for other components of the ecosystem [20,22,25].
As follows from Table 1, data from laboratory studies generally prevail in the observation and proof of the effect of macrophyte allelopathy on cyanobacteria. These studies are based on laboratory-scale experiments using co-culture systems, the addition of plant extracts, or leachate collection. This state of affairs is associated with the more complex organization and interpretation of field studies. In this regard, data from field experiments and observations, for example with mesocosms, are of particular value. Numerous studies (including those included in Table 1) strongly suggest that allelopathy might thus be relevant in natural waters and suppress cyanobacteria and algae.
There are also observations on the differentiation of the inhibitory effect of macrophytes among various species of cyanobacteria and algae. For example, it was concluded that extracts, exudates, and live material of the macroalga Chara australis (Charophyta) exhibited strong inhibitory effects on the cyanobacterium Trichormus variabilis (formerly Anabaena variabilis), whereas no effect was observed on the growth of the green alga Scenedesmus quadricauda [82].
The available data thus point to selective inhibition of various species of cyanobacteria by the allelochemicals of various species of macrophytes. As a result, the allelopathic effect of a macrophyte association on cyanobacteria (and on phytoplankton as a whole) appears to be stronger than the effect of a single macrophyte species. This is evidenced by the finding that the allelopathic effect of the excretions of an association of macroalgae (Chara hispida, C. baltica, C. vulgaris, Nitella hyalina) and Myriophyllum spicatum is significantly stronger than that of macrophyte monocultures [83]. Such a combination of selective inhibition by macrophyte allelochemicals and the stronger impact of macrophyte assemblages on undesired cyanobacteria may be useful for the biocontrol of HABs in water bodies, as well as in aquaculture, to remove harmful cyanobacteria while leaving other algae to be used as food for hydrobionts and fish. The author of [83] suggested that different allelochemicals produced by different macrophytes may exhibit a synergistic effect on cyanobacteria. It was also noted in [128] that different plants produce different types of allelochemicals and in different quantities. Taken together, these findings most probably provide the basis for an effective strategy for reducing cyanobacterial biomass by introducing mixtures of submerged or floating native macrophytes into water bodies, both for the restoration of aquatic ecosystems and for the mitigation of HABs. Lombardo et al. [129] suggested that lake trophic state and the extent of submerged vegetation coverage may be the most important factors shaping in situ macrophyte-phytoplankton patterns at the large scale of natural water bodies. In this case, the larger the projective cover, the greater the allelopathic effect that will be achieved [45][46][47][48].
Not all macrophytes have the same allelopathic effect on cyanobacteria, and the species with the greatest suppressive effect can be identified taking into account, among other things, the information summarized above. In the study [131], it was concluded that, of all 15 tested aquatic macrophytes, Nymphaea odorata and Brasenia schreberi have the highest allelopathic potential. However, this conclusion was obtained in experiments with lettuce sprouts, not with cyanobacteria. These macrophytes inhibited 78% and 82% of lettuce seedling radicle growth and 98% and 68% of L. minor frond production, respectively. Elakovich and Wooten [132] also reported that Nuphar lutea has high allelopathic activity.
Similar results were obtained with the macrophytes Potamogeton maackianus, Potamogeton wrightii, and Potamogeton crispus, which exhibited different inhibitory effects on the two species of algae tested [128]. There is a view that most allelochemicals are released during the early developmental stage of plants. It is assumed that during this period plants are most exposed to stress conditions and to competition with other surrounding plants for resources such as light, nutrients, and water [133]. However, in our studies, we found that the active synthesis of allelochemicals in aquatic macrophytes can continue even at later stages of plant development [22].
For the sake of completeness, it should be noted that some terrestrial plant materials (for example, barley straw) exhibit a strong allelopathic effect on cyanobacteria under certain conditions [134][135][136], which is no coincidence, since terrestrial plants also contain numerous allelochemicals [28]. It was shown in [137] that salcolin (which occurs as two enantiomers differing in their anti-cyanobacterial abilities) is the key allelochemical in barley straw that exhibits an inhibitory effect on cyanobacteria and could be used as an agent in the control of cyanobacterial HABs. A review of typical terrestrial allelopathic plants with algistatic or algicidal effects is presented in [24].
Anti-cyanobacterial allelochemicals produced by aquatic macrophytes
Low-molecular-weight anti-cyanobacterial allelochemicals produced by aquatic macrophytes are very diverse; they belong to different classes of chemical compounds and are functionally diverse. Allelochemicals from the following groups of chemical compounds are the most important [22,30,55]: aldehydes, ketones, ethers, terpenes and terpenoids, phytoecdysteroids, fatty acids, sulfur-containing compounds, nitrogen-containing compounds, alcohols, lactones, polyacetylenes, quinones, phenolics, cinnamic acid and its derivatives, coumarins, flavonoids, and tannins. These groups include hundreds of allelochemicals inhibiting cyanobacteria and algae [24], which deserve to be discussed in detail in a dedicated review.
These allelochemicals can be extracted from plant biomass, but their synthetic counterparts can also be produced and used, which would reduce the consumption of natural plant resources. The effectiveness of synthetic allelochemicals can be similar to that of their natural counterparts. Thus, synthetic allelochemicals are a promising alternative to the use of natural metabolites-allelochemicals against HAB-forming cyanobacteria [20,21].
Since it is impossible to consider all groups of allelochemicals here, we will focus only on fatty acids and phenolic compounds as the most promising (in our opinion) for biotechnological use in the fight against HABs.
Studies of the potential biological activities of the major low-molecular-weight organic compounds of aquatic macrophytes using the QSAR method [138,139] have shown that fatty acids and gallic acid are characterized by various types of bioactivity, with the highest probability of manifestation (Pa > 0.9), that can induce suppression of cyanobacterial growth. Building on these results, further work calls for targeted experimental studies of the response of various species of cyanobacteria to the selected allelochemicals.
As obtained in laboratory experiments on the effects of fatty acids on the cyanobacteria Synechocystis aquatilis and Aphanizomenon flos-aquae, described in detail in [140], the selected allelochemicals (linoleic, heptanoic, octanoic, tetradecanoic, hexadecanoic, and gallic acids) possess inhibitory allelopathic activity against cyanobacteria. However, their inhibitory effects differed. The highest values of the suppression index (SI, defined as the cyanobacterial density in the control divided by the cyanobacterial density in the experiment with allelochemicals; SI > 10) were recorded (in ascending order) for hexadecanoic, linoleic, tetradecanoic, and gallic acids, and for a mixture of four allelochemicals (heptanoic, octanoic, tetradecanoic, and gallic acids).
The highest SI values for Synechocystis aquatilis were obtained when the cyanobacterial culture was exposed to gallic acid (SI = 30) and to the mixture of heptanoic, octanoic, tetradecanoic, and gallic acids (SI = 35.3). Aphanizomenon flos-aquae was found to be even more sensitive to this mixture of allelochemicals: its SI on the 23rd day of the experiment was 17,495 [140].
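Because the suppression index is just a ratio of densities, it can be computed directly from cell counts; the snippet below shows the calculation, with hypothetical example densities rather than the actual counts underlying the values reported above.

```python
def suppression_index(control_density, treatment_density):
    """SI = cyanobacterial density in the control / density in the allelochemical treatment.

    SI > 1 indicates suppression; the index is dimensionless and grows without bound
    as the treated culture approaches zero density.
    """
    if treatment_density <= 0:
        raise ValueError("treatment density must be positive")
    return control_density / treatment_density

# Hypothetical densities (cells per mL) on a given day of an experiment:
control = 3.0e6
with_gallic_acid = 1.0e5
print(suppression_index(control, with_gallic_acid))   # -> 30.0, i.e. strong suppression
```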
In [141,142], the problems of finding effective algal inhibitors and of controlling HABs were raised. To address these issues, the authors suggested using an unsaturated fatty acid (linoleic acid) in conjunction with alginate-chitosan microcapsule technology. They demonstrated that the linoleic acid microspheres had good encapsulation efficiency and release properties. Moreover, linoleic acid sustained-release microspheres could inhibit the growth of Microcystis aeruginosa (Cyanobacteria) to the point of no growth, and thus the linoleic acid microsphere may be a potential candidate for HAB control.
Studies on the use of microgranules saturated with an allelochemical or a combination of allelochemicals (for example, a combination of fatty acids and phenolic compounds) to suppress cyanobacteria look very promising. The inhibitory agent, gradually released from the microgranules, prolongs the allelopathic effect on cyanobacteria. The sustained-release time of allelochemicals can range from 40 to 120 days [142][143][144]. A review of the studies carried out in this direction is presented in [128]. The results obtained in different investigations open up promising new areas for scientific research and for the practical use of allelochemicals of aquatic macrophytes.
According to the results reported in [112], nonanoic acid can inhibit the growth of the cyanobacteria Leptolyngbya tenuis (formerly Phormidium tenue) and M. aeruginosa, whereas no inhibitory effects of stearic and palmitic acids were found.
The essential oils of some allelopathic plants (Potamogeton cristatus, Potamogeton maackianus, Potamogeton lucens, Vallisneria spinulosa, Ceratophyllum demersum, and Hydrilla verticillata) were demonstrated to inhibit Microcystis aeruginosa, and fatty acids constituted an important part of the essential oils isolated.
Recently, Wang et al. [95] reported the inhibitory effects of some fatty acids on Microcystis aeruginosa. The authors stated that pentadecanoic acid, linoleic acid, alpha-linolenic acid, and stearic acid were the most potent allelochemicals from Elodea nuttallii along with dihydroactinidiolide and beta-ionone.
We showed [140] that plants such as Potamogeton natans, Nuphar lutea, Nymphaea alba, Myriophyllum spicatum, and Persicaria amphibia are the most active producers of allelochemical fatty acids, and therefore they can have a significant allelopathic effect on cyanobacteria and on phytoplankton as a whole. In these plants, the proportion of fatty acids in the volatile organic compounds can exceed 60-70%.
Our studies of the metabolome of Potamogeton perfoliatus from different habitats in Lake Ladoga show that the abundance of cyanobacteria in the associations of this macrophyte depends on the content of carboxylic acids in a given plant (Figure 1).
The study by Gao et al. [145] shows that nonanoic acid may be involved in synergistic interactions with other allelochemicals, producing a stronger allelopathic effect against Microcystis aeruginosa.
Similar results were obtained for octadecanoic acid [146], which may participate in synergistic, antagonistic, and additive allelopathic interactions. These findings led to the conclusion that joint effects of different allelochemicals depend on various factors such as the chemicals used, their respective proportions, the total concentration of the mixture, and the receptor species [146].
Among allelochemicals, in addition to fatty acids, special attention should be paid to phenolic compounds.
As early as 1981 [100], results were published demonstrating that phenolic compounds extracted from Myriophyllum spicatum exhibit algicidal activity against cultured algae and natural phytoplankton assemblages. Later, it was found that aquatic macrophytes such as representatives of the genus Myriophyllum are able to excrete polyphenol-like allelochemicals that inhibit the growth of green algae and cyanobacteria [98]. A number of identified polyphenols (ellagic, gallic, and pyrogallic acids and catechin) and fatty acids (hexadecanoic acid, stearic acid, α-linolenic acid) were shown to significantly suppress the development of HAB-forming cyanobacterial species [147,148].
Additionally, a study [78] revealed that the major allelochemicals identified in the tested ethyl acetate extract of the macrophyte Nasturtium officinale included quercetin, tannic acid, and gallic acid. There are also findings that combinations of different types of polyphenols, such as pyrogallic acid, gallic acid, and ellagic acid, may have an additive or synergistic effect on the cyanobacterium Microcystis aeruginosa, and that the joint action of phenolic allelochemicals may be an important allelopathic mechanism by which submerged macrophytes inhibit the growth of HAB-forming cyanobacteria in natural aquatic ecosystems [53,146,[148][149][150].
In a study [54] of the contributions of five allelochemicals, (+)-catechin, eugeniin, and ellagic, gallic, and pyrogallic acids, to the allelopathic effects of Myriophyllum spicatum on the cyanobacterium M. aeruginosa, it was observed that these compounds, on average, may account for up to 50% of the allelopathic effects of M. spicatum. According to the results reported in [112], four phenols (sinapic, syringic, caffeic, and gallic acids) inhibited the growth of the cyanobacteria Leptolyngbya tenuis (formerly Phormidium tenue) and M. aeruginosa. The inhibitory effect of pyrogallic acid and gallic acid produced by M. spicatum on cyanobacteria was also demonstrated in [53,114].
It is beyond question that there is a large body of scientific material on the allelopathic properties of fatty acids and gallic acid ([52, 54, 56, 67, 88, 103, 112, 113, 118, 119, 124-126, 146, 148, 151-166], etc.). This gives every reason to use them to create a new generation of algicides based on the allelochemical substances of aquatic macrophytes. This information, together with the results of our own research [36,138,140], formed the prerequisite for the development of a new-generation algicide against cyanobacteria based on allelochemicals of aquatic macrophytes. It is precisely fatty acids (heptanoic, octanoic, and tetradecanoic acids) and gallic acid that were included in its composition [167].
Mesocosm study of the effects of allelochemicals on cyanobacteria
Evidence of the suppression of phytoplankton development, including that of planktonic cyanobacteria, under real natural conditions obtained by traditional observations, even in the most obvious cases [36], is nevertheless indirect and often contradictory [48,168]. Taking this into account, assessing the effect of allelochemicals on cyanobacteria in mesocosm experiments under natural conditions is more promising and makes it possible to obtain results relevant to natural aquatic ecosystems.
A good example is the field study by Hilt et al. [169], in which the authors found an allelopathic effect of the macrophyte Myriophyllum verticillatum on natural phytoplankton (including cyanobacteria) in Krumme Lake (Berlin, Germany). In a mesocosm study [170] in Laguna Blanca lake in Manantiales (Maldonado, Uruguay), it was observed that macrophyte species (Egeria densa and Potamogeton illinoensis) seem to exert strong biological effects on phytoplankton biomass and are able to keep phytoplankton biomass low through allelopathic influence, even in the absence of zooplankton grazing.
In another mesocosm study [171], similar results were obtained, demonstrating that another species of the genus Myriophyllum (Myriophyllum spicatum), in 85 l mesocosms over 13 days of exposure, had only a short-term inhibitory effect on total phytoplankton and green algae, whereas consistent negative (allelopathic) effects were detected with respect to M. aeruginosa.
After the development of an algicide containing fatty acids (heptanoic, octanoic, and tetradecanoic acids) and gallic acid, the rationale for which is presented in detail in [140], we conducted the first experiments with this algicide on natural phytoplankton communities under mesocosm conditions.
In the field experiments, mesocosms with a volume of 700 liters were used. The experiments were carried out in two ponds on the territory of St. Petersburg (Russia): Pulkovo Pond (pond 1; coordinates 59.835899, 30.328642) and Aviator's Pond (pond 2; coordinates 59.868343, 30.300443). The depth of the ponds at the location of the experiments was about 3 m. The mesocosms were filled with water from the pond, and algicide was then added in an amount such that its concentration in the mesocosm water was 1 mg/l.
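The dose added to each mesocosm follows from simple mass balance; the snippet below reproduces the arithmetic for the mesocosm volume and target concentration used here (the stock-solution concentration shown is hypothetical).

```python
volume_l = 700.0            # mesocosm volume, litres
target_mg_per_l = 1.0       # target algicide concentration in the mesocosm

required_mg = volume_l * target_mg_per_l
print(f"algicide required: {required_mg:.0f} mg ({required_mg / 1000:.2f} g) per mesocosm")

# If dosing from a hypothetical 10 g/L stock solution:
stock_g_per_l = 10.0
stock_ml = (required_mg / 1000.0) / stock_g_per_l * 1000.0
print(f"equivalent stock volume: {stock_ml:.0f} mL")
```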
In Pulkovo Pond, the experiment was carried out from June 25 to July 5, 2019; in Aviator's Pond, from July 2 to July 16, 2019. The temperature and light conditions in the mesocosms corresponded to those in the pond water outside the mesocosms. The change in water temperature in the surface layer of the studied ponds is shown in Figure 2.
The results of the algicide impact on the phytoplankton of pond 1 are shown in Figures 3-6.
As can be seen from Figure 3, both the abundance and the biomass of total phytoplankton in the water of pond 1 increased during the experiment, whereas this was not observed in the mesocosm. In the first three days, phytoplankton biomass in the mesocosm decreased without a change in abundance. Subsequently, the abundance and biomass of phytoplankton in the mesocosm remained approximately at the same level, while they continued to grow in the pond. By the end of the experiment (on the 11th day), the phytoplankton biomass in the pond exceeded that in the mesocosm by about 5 times, and the abundance by almost 12 times. The greatest differences were observed on the 8th day of the experiment, when the differences in biomass and abundance were 7- and 20-fold, respectively. Thus, the action of the algicide based on fatty acids and gallic acid inhibited the growth of phytoplankton.
The phytoplankton data are confirmed by the measurements of optical density in the pond and the mesocosm (Figure 4). By the end of the experiment, an increase in optical density in the pond and a significant decrease in the mesocosm were observed, with a difference of about 2.3 times (Figure 4). This was also noticeable visually: the water in the mesocosm was more transparent than the water in the pond surrounding the mesocosm (Figure 5).
It is interesting to trace how the quantitative indicators of cyanobacteria in the pond and the mesocosm changed. Dolichospermum solitarium (formerly Anabaena solitaria) was the dominant cyanobacterial species in the pond (and at the beginning of the experiment in the mesocosm). This species belongs to cyanobacteria capable of causing the phenomenon of HABs [172]. A decrease in both the number and biomass of cyanobacteria both in the pond and in the mesocosm was observed on the third day of the experiment. Moreover, in the mesocosm, this decrease was more pronounced. Subsequently, an increase in the number and biomass of cyanobacteria both in the pond and in the mesocosm was observed. However, it was more intense in the pond. By the end of the experiment (on the 11th day), the biomass of cyanobacteria in the pond exceeded that in the mesocosm by about 2.5 times, and the number -by 1.5 times. The greatest differences were observed on the 8th day of the experiment, the difference in biomass and abundance was 4.4 and 39 times, respectively. At the end of the experiment, the same species Dolichospermum solitarium remained the dominant species in the composition of cyanobacteria. At the same time, Cuspidothrix ussaczevii ( formerly Aphanizomenon elenkinii) began to dominate in the mesocosm among cyanobacteria. This species is also included in Change in the optical density of the water mass in pond 1 and the mesocosm when exposed to algicide with a concentration of 1 mg/l. the bloom-forming Cyanobacteria from water bodies of the North-Western Russia list [173]. However, C. ussaczevii is less toxic than D. solitarium, for which toxigenic strains producing delayed-action toxins have been isolated [174].
Thus, the algicide based on fatty acids and gallic acid prevented the increase in cyanobacterial abundance and changed the species composition of the cyanobacteria.
In pond 2, the beginning of the experiment coincided with an intense cyanobacterial bloom (Figure 7), with a cyanobacterial biomass of more than 55 mg/l. At the same time, the maximum water temperature for the entire duration of the experiment (20.5°C) was recorded in the surface layer of the pond (Figure 2). The cyanobacteria Aphanizomenon flos-aquae, C. ussaczevii, and Dolichospermum affine (formerly Anabaena affinis) dominated the phytoplankton. Aphanizomenon flos-aquae is one of the most widespread species forming HABs in ponds and lakes of Northwest Russia [173]; it is capable of synthesizing toxins that are dangerous, including to humans [173]. Cuspidothrix ussaczevii also often causes water blooms in water bodies of St. Petersburg and the Leningrad Region, being a dominant or subdominant bloom-forming cyanobacterium [173].
By the fourth day of the experiment, the water temperature in the pond had dropped to about 18°C. This led to a decrease in the abundance and biomass of cyanobacteria, apparently mainly because they sank into the lower layers of the reservoir. However, an even greater decline in cyanobacterial development was observed in the mesocosm, where the cyanobacteria could not sink as deeply (Figure 8). This is also confirmed by the optical density data for the pond and the mesocosm, with the more pronounced decrease recorded in the mesocosm (Figure 9). Subsequently, the optical density declined slightly to approximately the same level in the pond and the mesocosm and then remained almost unchanged in both. At the same time, a laboratory control with cyanobacteria from pond 2, where there was no decrease in temperature, showed significant growth, whereas under the influence of the allelochemicals a significant suppression of plankton growth was recorded by optical density (Figure 10).
By the 8th day of the experiment, a further decrease in the optical density of plankton under the influence of the algicide was noted in the laboratory. A decrease in optical density was also observed in the control, evidently because natural plankton adapts poorly to laboratory conditions (the experiment was carried out in 0.5-liter jars).
By July 8, the cyanobacteria Aphanizomenon flos-aquae and Cuspidothrix ussaczevii had dropped out of the dominant assemblage in the mesocosm, although they continued to dominate in the pond water. As our laboratory experiments with this algicide have shown [140], Aphanizomenon flos-aquae was especially sensitive to the mixture of allelochemicals used: complete suppression of an A. flos-aquae culture was observed in the experiment with the combined effect of heptanoic, octanoic, tetradecanoic, and gallic acids at various concentrations (0.1, 1, and 10 mg/l).
Figure 9. Change in the optical density of the water mass in pond 2 and the mesocosm when exposed to algicide at a concentration of 1 mg/l.
Figure 10. Change in the optical density of the water mass in pond 2 and the mesocosm when exposed to algicide at a concentration of 1 mg/l during exposure in the laboratory.
In the last phase of the experiment (from July 12), representatives of Cryptophyta (Cryptomonas sp. and Komma caudata, formerly Chroomonas acuta) dominated the phytoplankton of the pond (Figure 11). Among the cyanobacteria, Aphanizomenon flos-aquae and Aphanocapsa conferta dominated. In the mesocosm at this time (especially toward the end of the experiment), cryptophyte algae reached a very high level of development, accounting for 98% of the total phytoplankton biomass (more than 42 mg/l), with Cryptomonas sp. dominant (Figure 11). Cyanobacteria were represented by Dolichospermum affine and Aphanocapsa conferta at very low abundance.
It is noteworthy that by the end of the experiment the total phytoplankton biomass in the mesocosm had returned to values almost as high as at the beginning of the experiment. However, whereas at the beginning of the experiment cyanobacteria prevailed (about 99% of the total phytoplankton biomass), by the end of the experiment cryptophyte algae, dominated by Cryptomonas sp., accounted for more than 98% of the phytoplankton biomass. In other words, dangerous toxigenic cyanobacteria were replaced by cryptophyte algae, which can be consumed by aquatic organisms and are safe for other organisms, including humans. If this result is projected onto entire aquatic ecosystems, a very beneficial ecosystem effect can be expected: suppression of HABs together with the development of algae whose production can be consumed, for example, by zooplankton and planktivorous fish.
Thus, the main outcome of the experiments on the effect of an algicide composed of four allelochemical components (heptanoic, octanoic, tetradecanoic, and gallic acids) on the phytoplankton of natural water bodies is that allelochemical substances of aquatic macrophytes: 1) are able to effectively reduce phytoplankton development and suppress even intense HABs; and 2) may lead to the replacement of dangerous cyanobacteria in the phytoplankton with safe algae, whose production can be used in the food chains of aquatic organisms.
Conclusions and perspectives
In this way, the available data show that the use of allelochemicals from aquatic macrophytes to inhibit cyanobacterial overgrowth is an environmentally friendly and promising technology for suppressing HABs. Allelochemicals can be considered natural algicides and can become the basis of a nature-like convergent technology to mitigate the development of planktonic cyanobacteria and prevent HABs in water bodies.
One can readily agree with the conclusion of [24] that allelopathy is a promising strategy for controlling HABs, since the effectiveness of allelochemicals in inhibiting microalgal cells has been discovered, investigated, and confirmed in many studies over many years [175]. However, several problems must still be investigated in order to understand what determines the strength of the allelopathic effect. One of these problems is undoubtedly the action of various environmental factors.
Another problem is the persistence of allelochemicals in the aquatic environment and their chemical or biochemical (bacterially mediated) transformation [26,74,168,176]. In this regard, work on systems that allow dosing and prolonged release of allelochemicals into the aquatic environment is very promising [141][142][143].
The development and study of allelopathy and its application to suppressing HABs are steps toward sustainable, rational, and effective use of water resources worldwide. New-generation algicides developed on the basis of allelopathy could substantially reduce the amount of synthetic algicides and herbicides used.
While allelochemicals have been shown to inhibit the growth of planktonic cyanobacteria, knowledge is still insufficient regarding their impact on various cyanobacterial species (especially their action in real aquatic ecosystems), the influence of various factors on the action of allelochemicals, and the molecular mechanisms of their action. These gaps may limit their use as a routine biotechnology for the mitigation and prevention of HABs in aquatic ecosystems.
Laboratory studies can only indicate the potential allelopathy of macrophyte metabolites toward cyanobacteria; their real use as a biotechnology for the management of planktonic communities and HABs will be possible only after convincing field studies using mesocosms and entire ecosystems. In addition, if we are to understand more about the mechanisms of allelochemical action to which cyanobacterial cells respond, more attention needs to be paid to the molecular details of the interactions between allelochemicals and cyanobacterial cells.
2021-05-10T00:04:19.888Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "fe62b2233f1bc1868c325e68fd70ccada8acd084",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/74844",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f11035ffea2260d52facc23425b1e692b11c18e9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
140421690 | pes2o/s2orc | v3-fos-license | Quantitative Communication Research: Review, Trends, and Critique
Trends in quantitative communication research are reviewed. A content analysis of 48 articles reporting original communication research published in 1988-1991 and 2008-2011 is reported. Survey research and self-report measurement remain common approaches to research. Null hypothesis significance testing remains the dominant approach to statistical analysis. Reporting the shapes of distributions, estimates of statistical power, and confidence intervals remain uncommon. Trends over time include the increased popularity of health communication and computer mediated communication as topics of research, and increased attention to mediator and moderator variables. The implications of these practices for scientific progress are critically discussed, and suggestions for the future are provided.
among other things, evaluated by objectivity-closeness and the safeguards against subjectivity. Most would agree that absolute objectivity and a perfect correspondence between understanding and reality is not feasible. Instead, these are viewed as ideals to strive for. This philosophical core is shared across disciplines in the quantitative social sciences, but different disciplines and research traditions differ in how reality-closeness and objectivity are sought.
In terms of data gathering, the most common approaches to quantitative communication involve survey research, lab behavioral experiments, and content analysis of various media. Much of the research involves college students as research subjects and most data are collected in the United States. The individual study published in a peer-refereed academic journal is the typical unit of research accomplishment.
The statistical approach adopted by most quantitative communication research typically rests on a conventional (and many believe logically problematic; see Levine, Weber, Hullett, Park, & Lindsey, 2008) approach to data analysis that Gigerenzer et al. (1990) call modern hybrid statistics. This approach involves testing substantive hypotheses against nil-null statistical distributions. In this view, science is about testing and confirming statistical hypotheses based on probabilistically discrediting a lack of findings using the p < .05 standard. Although this view has been the subject of intense criticism across the social sciences, it nevertheless is nearly universally practiced in published quantitative communication research as well as most other quantitative social sciences. It is expected in the peer review process and it is ubiquitous in the training of new quantitative communication researchers. All commonly used statistical software packages contribute to the dominance of the hybrid approach.
The hybrid approach is a merging of two distinct statistical frameworks, one by R. A. Fisher and the other by Neyman-Pearson (Gigerenzer & Murray, 1987). The approach specifies two statistical hypotheses: the null (H0), specifying the distribution of the sample statistic if there is no difference or association, and the alternative (H1), which is defined as not H0. If the probability of the data presuming H0 is less than or equal to the conventional .05 level, the null is rejected, and evidence for H1 is inferred (Levine et al., 2008).
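To make that decision rule concrete, the following sketch (hypothetical data; Python with NumPy and SciPy assumed available) runs a single hybrid-style test of a nil-null hypothesis of zero correlation at the conventional .05 level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical paired scores, e.g., two self-report scales from one survey.
x = rng.normal(size=120)
y = 0.25 * x + rng.normal(size=120)

r, p = stats.pearsonr(x, y)   # sample statistic and p-value computed under H0: rho = 0
alpha = 0.05                  # the conventional significance level
reject_h0 = p <= alpha        # the hybrid NHST decision rule
print(f"r = {r:.2f}, p = {p:.3f}, reject H0: {reject_h0}")
```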
Quantitative research resides in an interesting place in the field of human communication. Numerically, quantitative communication researchers represent a relatively small but highly visible minority of researchers in the field. Despite being a minority, quantitative research and researchers are disproportionately highly represented in virtually all analyses of scholarly output such as publication rates (e.g., Bolkan, Griffin, Holmgren, & Hickson, 2012), research funding (Levine, 2012), and citations (Levine, 2010).
Statistics and methodology aside for a moment, when I contemplate the current state of knowledge about human communication, I am struck by how little I think we know. I acknowledge, however, the opposite conclusion is also defensible. In many ways, communication research has evolved at a rapid pace.
In my opinion, there are three especially fundamental questions that are at the core of our discipline. These are: (1) how is it that people communicate with each other; (2) what constitutes communication competence and effectiveness; and (3) how can communication competence and effectiveness be enhanced? After more than 20 years as a communication teacher and researcher, I find it troubling that I do not have deeper answers to these questions, especially the first. Having just sampled four dozen published communication articles for this essay, as well as in my more general in-field reading, I have the impression that communication research is highly fragmented and fails to yield much insight into core communication processes. In fact, I find it scandalous that no one is actually communicating in the data that are used in much communication research and that most data are static rather than process oriented. I worry that our theory and methods are ill suited to the task of achieving a deep understanding of communication.
To explain my concern, I provide an example that I believe is representative. I very recently reviewed a paper for publication in a communication journal looking at athletic coach verbal aggression and student athlete motivation. I found this an interesting question. It might be interesting to code coach half-time speeches and look at subsequent performance. If one wanted to be experimental, one could get a coach to enact different types of communication and then measure motivation. But what the study I reviewed did was have a sample of student athletes read either a brief aggressive or a non-aggressive "message" from a hypothetical coach about their performance in a hypothetical athletic event, and then fill out some scales about their motivation and their opinions of the hypothetical coach. I wonder things like: How well can people project how motivated they might have been if the situation had been real? Research such as this may have its place, but wouldn't it be nice if more communication research studied, as data, actual communication rather than imagined communication? Methodologically, I can name two culprits that I think are especially responsible for slowing intellectual progress. My current targets are what I perceive to be an over-reliance on self-report survey methods and modern hybrid null hypothesis significance testing (NHST for short) as the preferred statistical tool.
Surely, these two practices are not without merit, and surely also many other factors could legitimately be blamed for slow progress. Certainly, too, much of my own research involves self-report methods, NHST, or both. Nevertheless, I hope my colleagues around the world will reflect on the implications of the field's reliance on self-report measurement and NHST as well as the advantages of studying communication as a behavioral process. This means observing communication over time.
Before I address these and other issues in quantitative communication research, however, the results of a small-scale informal content analysis of communication research are offered. Impressions can offer insight, but it seems appropriate to collect and use some quantitative data. The general question guiding the data collection involves the trends and practices in quantitative communication research, with attention to what has changed and what has not changed over the past 20 years. More specifically, it would be useful to offer data relevant to my claims regarding the ubiquity of self-report research and NHST. Therefore, it seemed reasonable to sample some recent communication research. It also seems reasonable to have a comparison or control group of older research. Finally, it was reasoned that the results would provide a useful way to frame and organize the current discussion.
Method

Sample and Sampling
The sample consisted of N = 48 randomly selected published articles reporting original quantitative research in leading communication journals. Four journals were selected: Human Communication Research (HCR), Communication Monographs (CM), Communication Research (CR), and Journal of Communication (JOC). In the author's opinion, these are the top 4 journals in the field. This opinion is based on current and historical citation patterns and centrality in network analysis (Feeley, 2008; Feeley & Moon, 2010; Levine, 2010; cf. Bolkan et al., 2012). Volumes corresponding to eight years for each of these journals were sampled. The years included 1988 to 1991 and 2008 to 2011. Twenty-four articles published between 2008 and 2011 were initially selected using random numbers generated on Random.org. Articles were randomly selected sequentially without replacement. To obtain each article, first one of the four journals was randomly selected. Then, one of the four volumes was randomly selected from within that journal. Then an issue was randomly selected from within the randomly selected volume. Finally, an article was randomly selected from within each issue. If the selected article did not report original quantitative research, the next article in the same issue that met the criteria was selected. This procedure was repeated until 24 articles were sampled. For each of the 24 articles, a matching article in the same journal from 20 years earlier was selected. So, for example, one article selected was in CM, 2008, the second article in issue four.
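A minimal sketch of this nested random-selection procedure (journal, then volume, then issue, then article) is given below; the volume, issue, and article counts are placeholders for illustration, not the journals' actual tables of contents.

```python
import random

random.seed(2011)
journals = ["HCR", "CM", "CR", "JOC"]
volumes = [2008, 2009, 2010, 2011]   # placeholder volume labels
issues_per_volume = 4                # placeholder
articles_per_issue = 6               # placeholder

selected = []
while len(selected) < 24:
    pick = (random.choice(journals),
            random.choice(volumes),
            random.randint(1, issues_per_volume),
            random.randint(1, articles_per_issue))
    if pick not in selected:         # sequential selection without replacement
        selected.append(pick)

# Matching 1988-1991 articles would then be located by journal, issue, and position.
```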
The matching article was in CM, 1988, the second article in issue 4. All 48 articles that were selected were downloaded in PDF format for subsequent coding.
Coding
After reading all the selected articles, the author devised a coding system that captured the apparent trends, hunches about trends, insights gained from the reading, and the directions that the author hoped this essay would go. The articles were then re-read. The general topic of the research was recorded, along with the general method employed, the approach to measurement, and the types of statistical analyses. It was also recorded whether the research was funded, whether multiple studies were reported, whether the research was limited to a college student sample, and whether the data were entirely collected in the USA. All coding was done by the author. The coding was straightforward, and multiple coders were not deemed necessary to make the desired points.
Twenty years ago, two broad topics of research (interpersonal and media) accounted for more than 80% of the articles sampled from the four leading communication journals. Twenty years later, that proportion dropped to just over 50% [Chi Square: χ2 (df = 1, n = 48) = 4.76, p = .03, ϕ = .38]. Communication technology and health communication research are now prevalent, combining to account for 46% of the 2008-2011 sample. The shift from old media (e.g., newspapers, radio, and television) to a mix of old and new media is not surprising given the rise of e-mail, texting, social networks, and the like. The increased trendiness of health communication research has been noted in other recent research. In a larger sample of published research, health communication emerged as the single most studied topic in communication journals (Levine, 2012). It is speculated that the increase in health as a topic stems from the increased pressure on faculty in the United States to seek external research funding (Levine, 2012).
In spite of increased pressure for funding, the proportion of sampled articles that were funded declined (although not significantly so) over time. Levine (2012) also reported a lack of statistically significant differences in the proportion of published research that was funded. Funded research remains atypical in published communication research, even in the best journals. The proportion of articles that reported multiple studies was similar over time, with between 20% and 25% of articles reporting multiple studies. Multiple studies are common in some top psychology journals such as Journal of Personality and Social Psychology, but this trend has apparently not gained momentum in communication. The frequency of use of college student data was identical in both samples, with 50% of studies reporting only student data. Data collected only in the United States was more common than the use of student data: more than 90% of the research in the 1988-1991 sample involved exclusively USA data. There has been a statistically significant internationalization of communication research over the past 20 years; still, nearly two-thirds of the 2008-2011 sample was limited to data collected within the United States. Communication remains a USA-centric academic discipline, but internationalization is likely to continue.
In terms of the method employed, the similarities over time were more striking than the differences.
Survey methodology remained the most prevalent method. Lab experimentation was evident in 25% to 30% of the research. Content analysis was less common (8% overall; declining from 13% to 4% over time). In the 2008-2011 sample, one article each employed non-meta-analytic secondary data analysis, meta-analysis, and naturalistic behavioral observation.
With regard to measurement, as expected, self-report scaling was ubiquitous (79% overall; increasing from 71% to 88% over time). Behavioral or media coding (e.g., coding actual communication, whether mediated or interpersonal) was reported in just over 40% of the articles sampled, declining from 54% to 29%.
These differences were marginally significant.
There were several noteworthy observations regarding statistical analyses and statistical reporting. First, in terms of descriptive statistics, the reporting of central tendency in the form of the arithmetic mean was very common (85% overall; increasing from 75% to 96% over time). Reporting dispersion was also common, although less so. Standard deviations or variances were reported in 56% of sampled articles. It is encouraging to note that the rate increased from 38% to 75% over time. Unfortunately, reporting distribution shapes remains atypical (6% overall; increasing from 0% to 13%). Readers can only know if the mean is a meaningful description of central tendency when the distribution is reported. Many uses of NHST also make assumptions about distributions. Finally, sometimes the shapes of distributions are substantively informative. Thus, it is unfortunate that most articles fail to report how the data are distributed.
In terms of inferential statistics, as expected, NHST was nearly universal, being reported in 96% of the sampled articles. No other methodological or statistical practice was as ubiquitous as NHST. The good news is that NHSTs were accompanied by effect size estimates in nearly three-quarters (71%) of the selected articles and that the reporting of effect sizes increased from 58% to 83% over time. In the reporting of effect sizes, communication may be well ahead of many of the other social sciences. Less encouraging were the findings that confidence intervals remain unusual and occurred in less than 10% of the articles examined. Further, although scales are commonly used, factor analysis remains uncommon. In terms of the types of NHST used, zero-order correlations (46%), multiple regression (31%), and ANOVA (29%) were common. The prevalence of chi squares, t-tests, and log linear analysis is apparently in decline, while structural equation modeling has become common (25% in the more recent set of articles).
The use of MANOVA, multiple discriminant analysis, network analysis, and multi-level modeling was also observed in the sampled articles.
Discussion
This review focuses on trends in quantitative communication research. Forty-eight published articles reporting original quantitative research were sampled and content analyzed in an effort to provide an empirical foundation for the current discussion. Half of the articles analyzed were recent while the other half was twenty to twenty-four years old. All articles were randomly sampled from leading communication journals.
Among the noteworthy findings was a shift in the topics of research. Not surprisingly, as the nature of communication media and technology has evolved over time, communication research has followed. Research on newspapers, radio, and television has not been abandoned, but there has been some shift in focus to newer media such as computer-mediated communication, video games, and cell phone communication.
A major tension in this research is between communication research that involves new technology, and research on new technology that involves social considerations. The challenge for communication researchers interested in technology is to maintain the primacy of communication processes in theory and research.
Nevertheless, the shift from old to new media seems a natural response to the changing ecology of human communication.
The second major topical shift is the increased research on health communication. Health, of course, is not a new concern, and the interplay between health and communication is multifaceted and important.
Still, the sheer amount of research on health communication seems disproportional to its centrality to the field. Increased pressure for external funding in universities in the United States seems to be behind this trend. Public funding of higher education in the United States has been constant or declining while administrative costs have skyrocketed. University administrators have responded by pressuring faculty to seek external research funding so that the universities can reap the overhead on grants. Since health is presumably the most fundable topic of communication research in the United States, universities have increasingly created and expanded health communication research, and the results of this trend are reflected in the increased proportion of communication articles that address health issues. Interestingly, however, the increase in health related research has not produced a corresponding increase in published funded research.
In the future, it will be interesting to see if the increase in health communication is associated with a healthier population.
Another noteworthy finding was that the frequency of college student data was identical in both samples and that only half of the studies examined reported exclusively student data. The author found it surprising that student data were not used exclusively in a majority of studies, nor was the use of student data different over time. The use of expedient student data is somewhat controversial and is conventionally considered a limitation. The actual issue is much more nuanced. For many core communication processes, students might not be meaningfully different from non-students. In my area of research (deception detection), there is ample evidence that students do not produce different results than non-students in traditional research designs. But age, education, socio-economic status, and living in a college student environment certainly affect some communication processes and outcomes. Further, the extent to which findings might be different if a different sample were used is not well understood in many areas of communication research. Collecting data with different types of populations requires conceptual and measurement equivalence to make meaningful comparisons. Absent such equivalence, simply using harder-to-collect samples is unlikely to provide added value (Levine, Park, & Kim, 2007). Theory provides a better path to generality than methodological strategies (Levine, 2011).
Reporting and interpreting descriptive statistics is essential (Levine, Weber, Park, & Hullett, 2008). In this regard, communication research has improved substantially over time, but further improvement is still needed. Substantial proportions of articles in leading communication journals report means, standard deviations, and estimates of effect size (typically in units of zero-order correlations, standardized regression coefficients, multiple correlations, and eta squared). Where improvement is most needed is in reporting the shape of distributions. This, I believe, is one area where improvement is both needed and easy to accomplish.
There are at least three reasons why reporting the shape of distributions is valuable. First, noting the shape of a distribution can have important substantive implications. For example, Serota, Levine and Boster (2010) recently observed that lying rates are not normally distributed and that most lies are told by a few prolific liars. Second, when distributions are not normal, the median and mode may be more informative than the mean, and the mean can be misleading. Therefore, readers need information about distributions in order to understand central tendency. Third, many significance tests rest on assumptions about the nature of statistical distributions. Even though statistics may be robust to violated assumptions or corrections may be reported, reporting distributions is informative.
Regarding the actual reporting of shapes of distributions, there are many ways this can be done, and the options depend both on the nature of the data and the goals of the research. If the data are approximately normally distributed, then researchers should say this and report at least means and standard deviations. If the data are substantially skewed, this should be noted, and it may make sense to report the mean, median, and mode(s) as central tendency. If there is more than one mode, the multi-modal nature of the data should certainly be reported. Graphing distributions with histograms or stem-and-leaf plots can be very informative.
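A small sketch of what such reporting could look like in practice (hypothetical skewed scores; Python with NumPy and SciPy assumed available): central tendency, skewness, and a quick histogram of the distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical positively skewed outcome, e.g., counts of a communication behavior.
scores = rng.poisson(lam=1.5, size=300).astype(float)

vals, counts = np.unique(scores, return_counts=True)
print("mean   :", round(float(np.mean(scores)), 2))
print("median :", float(np.median(scores)))
print("mode   :", float(vals[np.argmax(counts)]))
print("skew   :", round(float(stats.skew(scores)), 2))

# A quick text histogram as a stand-in for a plotted figure.
hist, edges = np.histogram(scores, bins=8)
for n, lo, hi in zip(hist, edges[:-1], edges[1:]):
    print(f"{lo:4.1f}-{hi:4.1f} | {'#' * int(n)}")
```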
If the data are substantially skewed, this should be noted, and it may make sense to report the mean, median, and mode(s) as central tendency.If there is more than one mode, the multi-modal nature of the data should certainly be reported.Graphing distributions with histograms or stem-and-leaf plots can be very Another desirable but infrequent aspect of statistical reporting is providing confidence intervals around estimates of effect sizes.Confidence intervals were reported in only 3 of the 48 reports examined.Information and examples for calculating confidence intervals are provided in Levine, Weber, Park, and Hullett (2008).Reporting effect sizes and the confidence intervals around the effect sizes would vastly improve reporting practices and overcome many of the limitations stemming from hybrid significance testing.
A surprising result was how infrequently power analyses are reported. Less than 10% of the articles sampled mentioned statistical power. This low rate of reporting is surprising because the reporting of statistical power is often specified in journals' instructions to authors. Simple rules of thumb like "always report estimates of statistical power", however, are problematic.
Statistical power is one of the more complex and confusing issues addressed in this review. For one thing, statistical power does not exist in modern hybrid NHST. Power is a Neyman-Pearson idea and requires the specification of a precise H1; in hybrid NHST, H1 is simply defined as not H0. To calculate power, the sample size, the alpha level, and the effect size must be known. The problem is that the effect size is usually not known, and, if it were known, then there might be no need for the research (because the effect was already well documented). As a consequence, power is most often calculated based on arbitrary effect size levels, making the results of power analyses also arbitrary. This makes power a confusing issue. But power is also a critical issue because the lower the power, the more likely statistical inference errors become. Statistical power can be improved by using larger sample sizes and by increased reliance on meta-analysis.
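The dependence of power on the assumed effect size, sample size, and alpha can be made explicit with a small calculation; the sketch below uses a normal approximation for a correlation and treats the effect size as an assumption rather than a known quantity.

```python
import numpy as np
from scipy import stats

def power_for_correlation(r_assumed, n, alpha=0.05):
    """Approximate two-sided power to detect a correlation of r_assumed with n cases,
    using the Fisher z normal approximation."""
    noncentrality = np.arctanh(r_assumed) * np.sqrt(n - 3)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - noncentrality) + stats.norm.cdf(-z_crit - noncentrality)

power = power_for_correlation(r_assumed=0.20, n=150)   # assumed, not known, effect size
print(f"power = {power:.2f}, implied Type II error rate = {1 - power:.2f}")
```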
From the author's point of view, one of the most unfortunate findings was the frequent use of survey methodology (in 60% of the articles sampled) and self-report measures (in 79% of articles, increasing from 71% to 88% over time). Survey methodology and self-report measurement are clearly useful approaches to research design and measurement, but given the subject of human communication, the prevalence of surveys and self-reports seems disproportionate to their utility. Communication has a prominent behavioral aspect, whereas surveys and self-reports tend to get at cognitions and affect.
Generally, surveys and self-reports are maximally useful under two jointly necessary conditions: people must be willing and able to answer the questions. That is, they must know the answers and they must be willing to accurately communicate responses to the researchers. When limitations of self-reports are discussed, it is usually in reference to the second of these criteria. Researchers worry about things like social desirability distorting answers. While I suspect that subjects are not always honest in response to self-reports, the ability issue is usually the greater concern for me. I often doubt that people have the self-awareness and meta-communicative wherewithal to accurately answer what is asked of them. A recent meta-analysis of verbal aggression and argumentativeness, for example, suggests that self-report and behavioral studies do not converge and that the correlation between self-reports and behavioral observation is low (Levine, Kotowski, Beatty, & Van Kelegom, 2012). The lack of association may stem from unreliability in behavior. Unfortunately, it is also possible that people lack the objective self-awareness to accurately uncouple their desired traits, projections of their own behavioral predispositions, and what they actually do.
As I write this review, I am working my way through a book titled The Folly of Fools: Deceit and Self-deception in Human Life (Trivers, 2011). The focus of Trivers' book is on explaining self-deception from an evolutionary biology perspective, and his main thesis is that the primary function of self-deception is in the service of other-deception. By deceiving ourselves, Trivers argues, we more effectively deceive other people, thereby gaining advantage for ourselves and our offspring. This is possible, according to Trivers, because much human brain functioning happens without conscious awareness.
Even if Trivers is wrong about self-deception functioning to advance other-deception, he is almost certainly correct that much human functioning, including many communication processes and outcomes, is not subject to conscious awareness and is, therefore, ill-suited to study with self-report methods. Research shows that when asked, people typically will give answers even when they do not know the answer (Schwarz, 1999). The excessive reliance on self-report measures in quantitative communication research limits our knowledge to aspects of communication that can be accurately and consciously known by our research subjects.
A second concern I have with self-reports is that much self-report research uses scales that are of questionable validity. As I have an interest in both individual differences and measurement validation, I have from time to time conducted validation research on previously published and supposedly already validated measures. More often than not, my own data (e.g., see Levine et al., 2003; Levine, Kotowski, Beatty, & Van Kelegom, 2012) suggest serious validity problems. Scores on the scales do not seem to measure the constructs they were designed to assess. Consequently, the conclusions drawn from research using these measures are dubious. It seems to me that valid measurement is a prerequisite for genuine knowledge advancement and that highly fallible measurement will likely lead to empirical dead ends and confusion. Readers and reviewers should demand better and stronger evidence of validity, including confirmatory factor analysis and evidence of behavioral prediction, if relevant.
To sum up my major concerns with self-report research in communication, I worry that much knowledge critical to understanding communication is of a sort that cannot be understood with self-report methods.
Communication researchers would be well served by devoting more effort to observing actual communication as it happens and less time having people recall or imagine communication. Second, even for topics well suited to self-report measurement, I worry that the scales used to measure communication variables are not properly validated and yield scores that measure something other than intended. The net result, I believe, is a slowing of progress. Much published research does not tell us very much, or worse yet, some research actively provides misinformation about communication.
Besides self-reports, another major concern is with the dominance of modern hybrid NHST. As the current content analysis documented, NHST is pervasive and was reported in all but a couple of the articles sampled.
My concerns have been expressed elsewhere in more technical detail (see Levine, Weber, Hullett et al., 2008), but I will raise a couple of basic issues here. These should be sufficient to explain why I think NHST retards scientific progress.
One concern is that what a small p-value for a NHST tells us is that the finding was unlikely given that the nil-null statistical hypothesis was true. The nil-null hypothesis specifies no differences between groups or no association among variables. In the social sciences, including communication, the nil null is almost never literally true, regardless of the plausibility or viability of the thinking that gives rise to the alternative hypothesis. Things are never exactly equal. The variables are almost never perfectly uncorrelated. So, a significant p-value lets us reject this implausible nil null. But so what? Knowing that a finding is not zero provides little new knowledge or understanding. This is why I advocate for effect sizes with confidence intervals.
Further, when findings are not significant, not much is learned either. In NHST, the null is never accepted. Non-significant does not mean that there is no difference or effect. Instead, it means that the difference or effect was simply not large enough given the sample size. The net result is that the outcome of NHSTs is typically not substantively satisfying. Increased reliance on descriptive statistics and on effect sizes with confidence intervals are more desirable alternatives.
The other point I want to make is that NHST as a decision procedure can have very high error rates. Type I errors (i.e., false positive results; obtaining p < .05 when the null is true) are improbable unless large numbers of tests are produced and culled so that only the significant outcomes are reported. This appears to be a growing scandal in psychology, where p-value farming (culling significant findings from larger collections of non-significant findings) and other questionable research practices are becoming recognized as threats to scientific inference.
Type II rates (i.e., false negatives; p > .05 when the null is false), however, can be common. Type II errors happen at a rate of 1.0 minus statistical power. In practice, I might offer a guess of a 30% Type II error rate in quantitative communication research using NHST, given typical sample and effect sizes. If statistical power is, on average, .7, and if the nil null is almost always literally false, then a 30% Type II error rate is expected. Solutions to lower this error rate include increasing sample sizes, greater reliance on meta-analysis, and focusing more on effect sizes and confidence intervals.
An implication of substantial Type II error rates and of Type I errors produced by p-value farming is that virtually all literatures in quantitative communication research can be summarized as providing a confusing set of "mixed" results. Valid hypotheses are sometimes supported and sometimes not, and the same is true of invalid theories and hypotheses. The use of NHST and the laws of probability guarantee this outcome.
Fortunately, meta-analysis can help sort things out, but absent that, it is often hard to make sense of what some set of studies tells us.
With regard to reducing Type I errors, there is one approach that is common but that I believe is actually counterproductive. MANOVA is often used as a "gatekeeper" test for the purpose of reducing the risk of Type I errors: univariate effects are only reported if the multivariate effect is statistically significant. The problem is that in communication research hypotheses are usually univariate in nature, leading researchers to report both the multivariate and the univariate effects. By necessity, this practice produces more, not fewer, significance tests, and therefore seems to make the problem worse. Although I think it is usually unwise to use MANOVA, if MANOVA is used, it makes most sense to do so only when (a) the dependent variables are highly inter-correlated, (b) the hypothesis is genuinely multivariate, and (c) there is some reason not to just factor analyze the dependent variables first. If one's dependent variables are highly inter-correlated, it makes more sense to me to use either confirmatory factor analysis or path analysis to model how the variables are related.

My point about NHST is that such tests get it wrong much of the time (due to statistical power and p-value farming) and, even when they get it right, the substantive yield is meager. Not-zero is not a very high bar to test a hypothesis or theory against, and knowing that an effect is not zero tells us little about what the effect is. NHST is all about trying to negate nullities, and at the end of the day knowing "not zero" means a paper might be publishable, but it does not make the findings particularly informative. Increasing the use of a full complement of descriptive statistics and reporting confidence intervals around effect sizes would go a long way toward minimizing these concerns and facilitating progress.
I want to close by saying that I do both self-report research and NHST. But I do not do only self-reports and only NHST. I try to use self-reports when I think they are the best method, and I try to take measurement validity and validation seriously. As for NHST, I typically report such tests in my research, but I have come to focus increasingly on descriptive statistics, with special attention to how my data are distributed. Quantitative communication research is highly conventional, and understanding it requires knowing the conventions. Communication research can be improved by considering which conventions serve us well and which ones impede progress. It is hoped that this review constructively helps toward this desired end.

Copyrights and Repositories

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. This license allows you to download this work and share it with others as long as you credit the author and the journal. You cannot change it in any way or use it commercially without the written permission of the Author (Timothy R. Levine) and the Journal (Review of Communication Research).

Attribution
You must attribute the work to the Author and mention the Journal with a full citation (it must at least include the data that appears in the suggested citation on the first page of the article) whenever a fragment or the full text of this paper is copied, distributed, or made publicly accessible by any means.

Commercial use
The licensor permits others to copy, distribute, display, and perform the work for non-commercial purposes only, unless you get the written permission of the Author and the Journal.

Modifications of the work
The licensor permits you to copy, distribute, display, and perform only unaltered copies of the work.
Table 1. Trends in Quantitative Communication Research over 20 years in Premiere Communication Journals.
The licensor does not allow you to create and distribute derivative works based on it. The only exception is that you can use parts of the article as a citation. The above rules are crucial and bound to the general license agreement that you can read at http://creativecommons.org/licenses/by-nc-nd/3.0/ and http://creativecommons.org/licenses/by-nc-nd/3.0/legalcode. Attached is a list of permanent repositories where you can find this article: Academia.edu @ http://independent.academia.edu/ReviewofCommunicationResearch; Internet Archive @ http://archive.org (collection "community texts"); Social Science Open Access Repository @ http://www.ssoar.info/en.html | 2014-10-01T00:00:00.000Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "4fcdf5ec9d7f255d1c521e5773ccd863b960d220",
"oa_license": "CCBYNC",
"oa_url": "https://rcommunicationr.org/index.php/rcr/article/download/7/71/451",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "4fcdf5ec9d7f255d1c521e5773ccd863b960d220",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
233306880 | pes2o/s2orc | v3-fos-license | Graph Partitioning into Hamiltonian Subgraphs on a Quantum Annealer
We demonstrate that a quantum annealer can be used to solve the NP-complete problem of graph partitioning into subgraphs containing Hamiltonian cycles of constrained length. We present a method to find a partition of a given directed graph into Hamiltonian subgraphs with three or more vertices, called vertex 3-cycle cover. We formulate the problem as a quadratic unconstrained binary optimisation and run it on a D-Wave Advantage quantum annealer. We test our method on synthetic graphs constructed by adding a number of random edges to a set of disjoint cycles. We show that the probability of solution is independent of the cycle length, and a solution is found for graphs up to 4000 vertices and 5200 edges, close to the number of physical working qubits available on the quantum annealer.
I. INTRODUCTION
Many combinatorial optimisation problems arising in practical applications are notoriously hard to solve with classical methods [1]. Recently, quantum annealers have been considered as potentially faster alternatives for finding solutions to this class of problems in different domains. For example, in logistics they have been employed for job shop scheduling [2], traffic flow optimisation [3][4][5], and airport gate assignment [6]. In telecommunications quantum annealers have been used for satellite coverage [7], and in chemistry for protein folding [8]. In finance, use cases range from portfolio optimisation [9,10] to prediction of financial crashes [11].
Finding Hamiltonian cycles, i.e. cycles that visit each vertex exactly once, is another type of combinatorial optimisation problem having several applications, for example in kidney and lung exchange [12][13][14], house allocation [15], branch selection for cadets [16], and, more generally, good exchange [17]. However, so far quantum annealers have not been used to solve these problems. Here, we consider the partitioning of a directed graph into subgraphs containing Hamiltonian cycles. More specifically, we focus on the case where the cycle length is required to be at least three, which makes the problem NP-complete [1]. This problem is also known as the vertex 3-cycle cover decision problem for directed graphs (3-DCC) [18].
To solve the problem on the quantum annealer, we cast its cost function and the corresponding constraints as a Quadratic Unconstrained Binary Optimisation (QUBO) problem. We consider graphs containing disjoint cycles with random edges added, and we analyse the probability of finding a vertex 3-cycle cover in a single run on the quantum annealer as a function of the size of the input graph as well as the number of edges added. We find solutions for input graphs of up to 4000 vertices and 5200 edges, close to the number of physical working qubits available on the quantum annealer. We find that the dependence of the probability on the system size is stronger when the relative number of random edges added is higher, while it does not depend on the length of the cycles in the input graph. This paper is structured as follows. In Sec. II we introduce the relevant definitions and formally define the problem. In Sec. III we show the procedure used to rewrite the problem as a QUBO. In Sec. IV we describe the quantum annealing protocol and the procedure used to test the solutions found. In Sec. V we present the results obtained with the D-Wave Advantage quantum annealer. In Sec. VI we outline possible future developments.
II. THE PROBLEM
The complexity of the problem of partitioning a graph into Hamiltonian subgraphs strongly depends on the constraints imposed on the size of the subgraphs considered. For example, when no conditions are imposed on the size of the subgraphs, standard matching techniques find a partition in polynomial time [19]. However, when the subgraphs of the partition are required to have a cardinality greater than or equal to K, with K ≥ 3, the problem becomes NP-complete [1]. In this paper we consider the K = 3 problem, which can be formulated as follows: given a directed graph without self-loops G = (V, E), where V is the set of vertices and E the set of edges, can the vertices be partitioned into disjoint sets V_1, V_2, ..., V_k for some k such that each V_i contains at least three vertices and induces a subgraph that contains a Hamiltonian cycle? Figure 1 illustrates the problem for a graph composed of a set of N_V = 7 vertices V = {1, ..., 7} and N_E = 11 edges E = {(1, 2), (2, 5), (2, 6), (3, 4), (4, 3), (4, 7), (5, 1), (5, 6), (6, 3), (6, 7), (7, 6)} (Fig. 1a). A solution exists for this graph because it does have a partition into Hamiltonian subgraphs with one cycle of length three, V_1 = {1, 2, 5}, and one cycle of length four, V_2 = {3, 4, 6, 7} (Fig. 1b). We stress that, since the problem requires the cycle length to be at least three, cycles of length two do not appear in the solution.
This problem can be tackled by associating to each edge (i, j) ∈ E a binary variable x ij ∈ {0, 1}, that equals 1 when vertices i and j are connected in the solution (see green arrows in Fig. 1b), 0 otherwise. We note that since the number N E of existing edges is in general smaller than the number N V (N V −1) of possible edges among all the vertices, associating variables x ij only to the existing edges makes the size of the problem as small as possible.
One can then formulate the problem as

maximize \sum_{(i,j) \in E} x_{ij},    (1)

subject to the following constraints:

\sum_{(i,j) \in E} x_{ij} = N_V,    (2)

\sum_{j : (i,j) \in E} x_{ij} \le 1 for every vertex i,    (3)

\sum_{i : (i,j) \in E} x_{ij} \le 1 for every vertex j,    (4)

x_{ij} + x_{ji} \le 1 for every pair of vertices i, j with (i,j), (j,i) \in E.    (5)

Constraint (2) guarantees that the number of edges is equal to the number of vertices. Constraint (3) guarantees that for every vertex there is no more than one outgoing edge; likewise, constraint (4) does so for the ingoing edges. Constraints (2)-(4) guarantee that the solution will be a partition into Hamiltonian subgraphs. Finally, constraint (5) ensures that two vertices can be connected to each other by at most one edge, meaning that cycles of length two are forbidden.
III. QUBO FORMULATION
The optimisation problem described by Eqs. (1)-(5) can be rewritten using the QUBO formalism, which is suitable for a quantum annealer [20]. In the QUBO formalism we look for a configuration x, where x is a vector with components x_{ij}, that minimizes the following cost function:

C(x) = C_0(x) + P(x).    (6)

The first term of Eq. (6) is

C_0(x) = - \sum_{(i,j) \in E} x_{ij}    (7)

and corresponds to Eq. (1). The second term of Eq. (6) is a penalty term

P(x) = \sum_i a_i P_i^{out}(x) + \sum_j b_j P_j^{in}(x) + c P^{pair}(x),    (8)

where

P_i^{out}(x) = \sum_{j < k : (i,j), (i,k) \in E} x_{ij} x_{ik},    (9)

P_j^{in}(x) = \sum_{i < k : (i,j), (k,j) \in E} x_{ij} x_{kj},    (10)

P^{pair}(x) = \sum_{i < j : (i,j), (j,i) \in E} x_{ij} x_{ji}.    (11)

The quantities in Eqs. (9)-(11) implement the constraints of Eqs. (3), (4) and (5), given that they will be zero when the configuration x is allowed and positive when a constraint is violated, thus penalising forbidden configurations. Equations (9)-(11) are based on the fact that for any two binary variables y and z, the constraint y + z ≤ 1 is equivalent to y · z = 0. We note that it is not necessary to encode constraint (2) as a penalty term, since it has the same functional form as Eq. (1), and checking the solution found will be sufficient.
When translating the constraints into penalties one needs to choose the penalty constants a_i, b_j, c large enough compared to the strength of the term \sum_{(i,j) \in E} x_{ij}. However, due to the hardware implementation, these cannot be chosen arbitrarily large. An optimal choice for the penalty constants is presented in Appendix A.
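As an illustration of how such a QUBO could be assembled in code, the sketch below builds the coefficient dictionary with one binary variable per directed edge; the single uniform penalty constant and the function name are illustrative choices, not the optimised constants of Appendix A.

```python
from collections import defaultdict
from itertools import combinations

def build_qubo(edges, penalty=2.0):
    """QUBO dict {(var, var): coefficient} for the vertex 3-cycle cover.

    edges: list of directed edges (i, j); each edge is one binary variable x_ij.
    penalty: illustrative uniform constant standing in for a_i, b_j and c.
    """
    Q = defaultdict(float)
    edge_set = set(edges)
    out_edges, in_edges = defaultdict(list), defaultdict(list)
    for (i, j) in edges:
        Q[((i, j), (i, j))] += -1.0        # linear reward: minimising favours x_ij = 1
        out_edges[i].append((i, j))
        in_edges[j].append((i, j))

    for v in out_edges:                     # penalise two outgoing edges from one vertex
        for e1, e2 in combinations(out_edges[v], 2):
            Q[(e1, e2)] += penalty
    for v in in_edges:                      # penalise two ingoing edges to one vertex
        for e1, e2 in combinations(in_edges[v], 2):
            Q[(e1, e2)] += penalty
    for (i, j) in edges:                    # penalise 2-cycles: x_ij * x_ji
        if i < j and (j, i) in edge_set:
            Q[((i, j), (j, i))] += penalty
    return dict(Q)
```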
IV. IMPLEMENTATION ON A QUANTUM ANNEALER
A. Quantum annealing

The constructed QUBO problem is solved on a D-Wave Advantage quantum annealer, containing 5436 physical working qubits. The starting point for the quantum routine used is a quantum state that corresponds to the ground state of a drive Hamiltonian

H_0 = - \sum_i \sigma_i^x.    (12)

This Hamiltonian is slowly changed to the problem Hamiltonian H_1, whose ground state represents the state with lowest energy for the QUBO problem. Its general form is that of an Ising-like Hamiltonian:

H_1 = \sum_i h_i \sigma_i^z + \sum_{i<j} J_{ij} \sigma_i^z \sigma_j^z.    (13)

The mapping of the QUBO problem in Eq. (6) to the Ising-like Hamiltonian in Eq. (13) is done by identifying the two states {0, 1} of the variables x_{ij} with the two eigenstates of the σ^z operator of the corresponding qubit. Since the physical qubits in the quantum processor are not fully connected to each other, each logical qubit is embedded into a chain of one or more physical qubits. To find such an embedding, we use the minorminer algorithm [21] provided by D-Wave, with its default parameters.
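Submission to the annealer could then look roughly like the following sketch with the D-Wave Ocean SDK; the sampler choice, the automatic embedding, the number of reads, and the reuse of the hypothetical build_qubo helper from the previous sketch are all assumptions made for illustration.

```python
# Requires the D-Wave Ocean SDK and access to a D-Wave system.
from dwave.system import DWaveSampler, EmbeddingComposite

edges = [(1, 2), (2, 5), (5, 1)]               # toy example: the length-3 cycle of Fig. 1
Q = build_qubo(edges)                          # hypothetical helper from the QUBO sketch
sampler = EmbeddingComposite(DWaveSampler())   # minor-embedding handled by the composite
sampleset = sampler.sample_qubo(Q, num_reads=100)

best = sampleset.first                         # lowest-energy sample returned
chosen_edges = [edge for edge, bit in best.sample.items() if bit == 1]
```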
B. Protocol for solving the partitioning problem
In this section we present the steps for the protocol we use to solve the problem on the quantum annealer: (i) graph construction, (ii) problem submission, (iii) check of the output.

Algorithm 1 (solution check; excerpt):
 7: if E_from has exactly one element then
 8:     add the vertex v_from to CV
 9:     add the edge e in E_from to CE
        ...
        assign v_from := v_to
18: until v_to = v_start
19: if CV has exactly 2 elements then
20:     return False
21: else if CV has 3 elements or more then
22:     remove from V the vertices contained in CV
23:     remove from E the edges contained in CE
24: end if
25: until no vertices are left in V and no edges are left in E
26: return True

(i) We start by generating a graph G_0 composed of n disjoint cycles of length L (Fig. 2a). This contains nL vertices and nL edges. A new graph G is generated starting from G_0 by introducing noise, i.e. adding N_noise new randomly chosen edges that connect the existing vertices (Fig. 2b). The so-constructed graph G will always admit G_0 as a solution, even though additional solutions might also appear when the amount of noise is large.
(ii) The graph G is then transformed into a QUBO problem as explained in the previous sections and submitted to the quantum annealer. The annealing schedule is run 100 times, and the frequency of the final states obtained is computed.
(iii) Using Algorithm 1 we check whether the lowest energy state (or states, in the degenerate case) corresponds to a partition of G into Hamiltonian subgraphs containing cycles of length three or more. If that is the case, the probability P_sol of finding a solution is equal to the frequency of the lowest energy state (or the sum of the frequencies of the lowest energy states in the degenerate case); if not, P_sol = 0. We note that the solution check algorithm runs in polynomial time. The only step whose cost is proportional to the size of the problem is line 6 of Algorithm 1, which is O(N_E). That step is executed at most N_V times, which makes the overall algorithm O(N_E N_V). To collect statistics on P_sol, we repeat steps (i) to (iii) 50 times and average P_sol over the 50 repetitions to obtain P̄_sol, which represents the average probability of finding a solution with a single run (i.e. a single annealing schedule) on the quantum annealer.
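A Python sketch of a check in the spirit of Algorithm 1 is given below; since parts of the pseudocode above are not recoverable, the traversal details here are filled in as assumptions: follow the selected edges, reject any vertex with more than one selected outgoing edge, reject cycles of length two, and accept only if every vertex ends up on a cycle of length three or more.

```python
def is_vertex_3_cycle_cover(vertices, chosen_edges):
    """Return True if the selected edges partition the vertices into cycles of length >= 3."""
    out_edge = {}
    for (i, j) in chosen_edges:
        if i in out_edge:                      # two selected outgoing edges: forbidden
            return False
        out_edge[i] = j
    if len(chosen_edges) != len(vertices):     # constraint (2): one edge per vertex
        return False

    remaining = set(vertices)
    while remaining:
        start = next(iter(remaining))
        cycle, v = [], start
        while True:
            if v not in out_edge:              # path dead-ends, so no cycle cover
                return False
            cycle.append(v)
            v = out_edge[v]
            if v == start:
                break
            if v not in remaining or v in cycle:
                return False                   # runs into an already-used vertex
        if len(cycle) < 3:                     # cycles of length two are forbidden
            return False
        remaining -= set(cycle)
    return True
```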
V. RESULTS
We start by fixing the fraction p_noise of the maximum allowed number of additional edges for the given number of vertices N_V, and set N_noise = round(p_noise N_V (N_V − 2)). In the simplest scenario, i.e. p_noise = 0, the quantum annealer easily finds a solution regardless of the problem size (grey points in Fig. 3a). Even when considering a problem with a very large number of vertices and edges, N_V = N_E = 5400, where the graph size is very close to the total number of working physical qubits available (5436), the probability to find a solution with a single run on the quantum annealer is P̄_sol = 75(1)%. We observe that P̄_sol decreases with N_V with a slope that strongly depends on p_noise, and the quantum annealer finds a solution up to N_V = 4200 for p_noise = 0.5 × 10^−4 and up to N_V = 3600 for p_noise = 1 × 10^−4 (blue and orange points in Fig. 3a). Figure 3b shows that the single-run solution probability P̄_sol does not depend on the cycle length. This can be explained by the fact that the dimension of the combinatorial space depends on the number of all the possible paths in the input graph, which is determined only by N_V and p_noise.
To further investigate the dependence on the combinatorial complexity, we now fix the number of vertices N_V and vary the number of added edges N_noise. To give an intuition of how the complexity of the problem scales with N_noise, we can think of a classical approach based on Algorithm 1. Without any added edges, there is only one possible simple path. When we add one noise edge, there will be one vertex with two outgoing edges: this bifurcation gives rise to two different simple paths. Each of them could then be fed into Algorithm 1 to check whether it is a solution. However, when the number of added edges increases, the number of simple paths to be checked can be up to 2^N_noise, and therefore finding a solution with this classical procedure becomes exponentially hard.
In Fig. 4 we show the results obtained with the quantum annealer. For a small number of added edges (N_noise ≲ 100) the single-run solution probability is higher for the smaller system size explored (Fig. 4a). However, as N_noise is increased, for a given number of added edges it becomes easier to find a solution for the larger system. We note that, regardless of the value of the single-run solution probability, the same problem can be submitted multiple times in order to find a solution at least once with arbitrarily high probability. Fixing the desired probability at 99%, the time to solution is given by [22]

    TTS = (t_anneal + t_pause) · log(1 − 0.99) / log(1 − P̄_sol),

where the first factor is the single-run time (i.e. the sum of the times used in the schedule of the quantum annealer, t_anneal = 200 µs and t_pause = 100 µs) and the second factor is the number of runs necessary to find a solution with the desired probability.

Table I. Fit parameters for the large-N_noise behaviour of TTS shown in Fig. 4b. For the exponential fit, the parameter b is much less than log 2, the value expected for the 2^N_noise classical scaling described in Section V.

    fit                           a          b
    power law,   N_V = 1000     −34(3)     5.9(6)
    power law,   N_V = 4000     −15(1)     2.6(2)
    exponential, N_V = 1000     −3.8(2)    0.0120(4)
    exponential, N_V = 4000     −0.8(3)    0.0035(3)
In Fig. 4b we show the behaviour of the time to solution TTS as a function of the number of added random edges. We fit the large-N_noise behaviour of TTS with exponential and power-law functions. The fit parameters are reported in Table I. It is clear from the plots that, for the range explored, TTS is compatible with either fit. However, even in the exponential case, the scaling is much slower than that of the classical procedure explained earlier, which scales as exp(N_noise log 2).
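For reference, the TTS expression above is easy to evaluate directly; the small Python helper below uses the schedule times quoted in the text and rounds the number of runs up to an integer.

    import math

    def time_to_solution(p_sol, t_anneal_us=200.0, t_pause_us=100.0, target=0.99):
        """Single-run time multiplied by the number of runs needed to reach `target` success."""
        if p_sol <= 0.0:
            return math.inf                            # a solution was never observed
        if p_sol >= 1.0:
            return t_anneal_us + t_pause_us            # every run succeeds
        runs = math.ceil(math.log(1.0 - target) / math.log(1.0 - p_sol))
        return (t_anneal_us + t_pause_us) * runs

    # With the 75% single-run probability quoted for the noiseless case, 4 runs
    # (about 1.2 ms in total) are enough to reach the 99% success level.
    print(time_to_solution(0.75))                      # -> 1200.0 microseconds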
VI. OUTLOOK
Possible extensions of the problem presented here can be considered for graphs where weights are assigned to the edges, or where different constraints on the cycle length are present.
We point out that, if self-loops are included in the construction of the problem and the constraint on the cycle length is lifted, our method could be used to compute the permanent of a matrix, since counting the cycle covers of a graph amounts to computing the permanent of its adjacency matrix [23].
Additional information: The code used to generate the results presented in this paper is available at https://github.com/quantumglare/quantum_cycle.
Further information can be requested at info@quantumglare.com.
Appendix A: Constraints
In order to properly set the penalty constants in Eqs. (9)-(11) we proceed with an analysis of the different cost terms in Eq. (6), that allows us to choose the penalty constants as small as possible.
We require the cost J(x a ) of an allowed configuration x a that satisfies all constraints to be lower than the cost J(x) of any configuration x that violates at least one constraint, i.e.
    J(x) > J(x_a)   ∀ x, x_a.   (A1)

Let us first consider the constraint on the number of outgoing edges given in Eq. (3), whose corresponding penalty is given in Eq. (9). Any configuration that violates only that constraint can be decomposed as x = x_a + x′, where x′ is a vector whose only elements equal to 1 are those corresponding to the additional edges. From Eq. (A1) it follows that, for every vertex i, the penalty term proportional to a_i, summed over the pairs j, j′ > j, must exceed the gain coming from the additional edges (Eq. (A2)). Equation (A2) is satisfied by setting a_i as in Eq. (A3), where ε is an arbitrarily small positive constant, which makes this choice of a_i optimal. The number N_x′,i of nonzero elements in x′ varies from 1 to the total number N_out,i of outgoing edges from vertex i in the original graph. In terms of N_x′,i, Eq. (A3) becomes Eq. (A4), where in the denominator the round brackets denote the binomial coefficient. The maximum is achieved for N_x′,i = 1, giving a_i = 1 + ε for all vertices i of the graph that might violate the constraint. For all other vertices we simply set a_i to zero, leading to Eq. (A5). Likewise, for the constraint on the number of ingoing edges in Eq. (4), which corresponds to the penalty in Eq. (10), we set the constants analogously (Eq. (A6)), where N_in,i is the number of ingoing edges to vertex i in the original graph. A similar reasoning for the constraint forbidding pairs in Eq. (5), which corresponds to the penalty in Eq. (11), leads to

    c = 2 + ε.   (A7) | 2021-04-21T01:16:00.458Z | 2021-04-18T00:00:00.000 | {
"year": 2021,
"sha1": "44a6536f4542578a42eee34fb03b1a09437b3c1c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "44a6536f4542578a42eee34fb03b1a09437b3c1c",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
197437749 | pes2o/s2orc | v3-fos-license | High-resolution simulations and visualization of protoplanetary disks
A problem of mass flow in the immediate vicinity of a planet embedded in a protoplanetary disk is studied numerically in two dimensions. The large differences in temporal and spatial scales involved suggest that a specialized discretization method for the solution of hydrodynamical equations may offer great savings in computational resources, and can make extensive parameter studies feasible. Preliminary results obtained with the help of the Adaptive Mesh Refinement technique and a high-order explicit Eulerian solver are presented. This combination of numerical techniques appears to be an excellent tool which allows for direct simulations of mass flow in the vicinity of the accretor at moderate computational cost. In particular, it is possible to resolve the surface of the planet and to model the process of planet growth with a minimal set of assumptions. Some issues related to visualization of the results and future prospects are discussed briefly.
The Method
The extremely small temporal and spatial scales involved in the problem of accretion onto a protoplanet necessitate the use of nonuniform discretization in the vicinity of the accretor. In our study we used the adaptive mesh refinement (AMR) method combined with a high-resolution Godunov-type advection scheme (amra, Plewa & Müller 2000). The AMR discretization scheme follows the approach of Berger and Colella (1989). The computational domain is covered by a set of completely nested patches organized into levels. The levels create a refinement hierarchy. As one moves toward higher levels, the numerical resolution increases by a prescribed integer factor (separate for every direction). The net flow of material between patches at different levels is carefully accounted for in order to preserve the conservation properties of the hydrodynamical equations. Boundary data for child patches are either obtained by parabolic two-dimensional conservative interpolation of parental data or set according to prescribed boundary conditions.
Hydrodynamical equations are solved with the help of the Direct Eulerian Piecewise-Parabolic Method (PPMDE) of Colella & Woodward (1984), as implemented in the herakles solver (Plewa & Müller 2000). Simulations have been done in spherical polar coordinates in a frame of reference corotating with the protoplanet. herakles guarantees exact conservation of angular momentum, which is particularly important in numerical modeling of disk accretion problems. The use of its multifluid option with tracer materials distributed within the disk (not presented here) allows us to identify the origin of the material accreted onto the protoplanet. The amra code is written purely in FORTRAN 77 and has been successfully used on both vector supercomputers and superscalar cache-oriented workstations. Its parallelization on shared-memory machines exploits microtasking (through the use of vendor-specific directives) or the OpenMP standard.

Figure 1. Breakup of the finest five levels of the refinement hierarchy.
Simulation setup
The computational domain extends from 0.25 to 2.5 radii of the planet's orbit. We employ 7 levels with refinement ratios ranging from (2,4) to (4,4). The base level contains the protoplanetary (circumstellar) disk while the 7th level contains the planet and its immediate vicinity. The base grid consists of 128 × 128 cells uniformly distributed in r and θ. The effective resolution at the 7th level is 131072 × 524288 in r and θ, respectively. The topmost five levels are schematically shown in Figure 1. White lines are boundaries of the patches. There are 1, 1, 1, 1, 12, 4 and 49 patches at levels 1-7, respectively. The structure of the grid at level 7 is shown in Figure 2f with individual cell boundaries drawn.
Physical model
The simulation is initialized with a Keplerian disk. Originally the disk has a mass of 0.01 M⊙, a constant h/r ratio of 0.05 and surface density proportional to r^(−1/2). The temperature is a fixed function of r throughout the simulation. There is no explicit viscosity in the disk. At the outer and inner boundaries of the base grid the gas is allowed to flow freely out of the computational domain. No inflow is allowed. The accretion onto the planet is accounted for in a very simplified way. At every time step the mean value of the density within two planetary radii is calculated, and whenever it is higher than a preset value, the excess gas is removed. At t = 0 a planet of one Jupiter mass is inserted into the disk on a circular orbit. The radius of the orbit and the mass of the planet remain constant throughout the simulation. The disk is allowed to evolve for 100 planetary orbits. A gap is cleared in it, and a secondary, circumplanetary disk is formed.
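As an illustration of this initial setup, the short Python sketch below evaluates the disk's surface-density and rotation profiles on a radial grid; only the r^(−1/2) scaling, the h/r = 0.05 aspect ratio, and the radial extent come from the text, while the normalization constant and code units are placeholders chosen for the example.

    import numpy as np

    # Placeholder code units; only the scalings below are taken from the setup described above.
    GM_STAR = 1.0                     # gravitational parameter of the central star
    R_PLANET = 1.0                    # orbital radius of the planet
    SIGMA_0 = 1.0e-4                  # surface density at r = R_PLANET (arbitrary normalization)

    r = np.linspace(0.25 * R_PLANET, 2.5 * R_PLANET, 512)     # radial extent of the domain

    sigma = SIGMA_0 * (r / R_PLANET) ** -0.5   # surface density proportional to r^(-1/2)
    h_over_r = 0.05                            # constant aspect ratio
    v_kep = np.sqrt(GM_STAR / r)               # Keplerian rotation velocity
    cs = h_over_r * v_kep                      # locally isothermal sound speed, c_s = (h/r) v_K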
The sequence of surface plots in Figure 2 shows the final structure of both disks (the surface density distribution is displayed). The red peak in Figure 2a is the unresolved image of the very dense circumplanetary disk. We have been able for the first time to see the details of the latter (Figures 2c-d). The streams of gas flowing across the gap from the left and right edges of the frame (light blue) collide with the outer part of the circumplanetary disk. The collision regions (green wedges) bear strong resemblance to hot spots in cataclysmic binaries. In every region two strong shock waves are excited, one of them propagating into the stream, and the other into the disk. The shocked gas flows from the collision region along a loosely wound spiral towards the planet (Figure 2e). This picture is significantly more detailed than the one recently published by Lubow, Seibert, & Artymowicz (2000). Streamlines of the flow around the planet are shown in Figure 3, and they are in good agreement with those of Lubow et al.
Our simulation is of preliminary nature, and its sole purpose is to demonstrate the capabilities of amra. Currently, we are improving the physics of the model. One of the problems we are going to attack is the calculation of the accurate value of the gravitational torque from the disk onto the planet in the phase preceding gap formation.
Visualization
To visualize the complicated amra output, we have chosen the AVS/Express environment for visual programming. It allows the user to quickly build simple applications employing standard library modules. Advanced users can develop their own, highly specialized modules and applications. Our amra-visualization application (visa) is partly based on modules written by Favre, Walder, & Follini (1999), which have been substantially modified, and partly on our own modules. A screenshot of visa is shown in Figure 4. The panel and the viewer are contained in the two topmost windows, while the bottom window contains the AVS/Express programming platform. Currently we are able to read the AMR data, extract components, perform mathematical operations on data sets and coordinates, extract any subset of levels or patches, and apply to them various visualization techniques (e.g. 2-D plot, surface plot, isolines, slice). Streamlines can also be calculated. The application is still under development, and new options are being added. | 2019-04-14T01:30:29.724Z | 2000-02-01T00:00:00.000 | {
"year": 2000,
"sha1": "77078bc7cd4fe217edd92ade9e7ca22b03493148",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a30be7219d07aebeaece7c5e286b0e6bd9834ebb",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
2123828 | pes2o/s2orc | v3-fos-license | CU splitting early termination based on weighted SVM
High efficiency video coding (HEVC) is the latest video coding standard that has been developed by JCT-VC. It employs plenty of efficient coding algorithms (e.g., highly flexible quad-tree coding block partitioning), and outperforms H.264/AVC by 35–43% bitrate reduction. However, it imposes enormous computational complexity on encoder due to the optimization processing in the efficient coding tools, especially the rate distortion optimization on coding unit (CU), prediction unit, and transform unit. In this article, we propose a CU splitting early termination algorithm to reduce the heavy computational burden on encoder. CU splitting is modeled as a binary classification problem, on which a support vector machine (SVM) is applied. In order to reduce the impact of outliers as well as to maintain the RD performance while a misclassification occurs, RD loss due to misclassification is introduced as weights in SVM training. Efficient and representative features are extracted and optimized by a wrapper approach to eliminate dependency on video content as well as on encoding configurations. Experimental results show that the proposed algorithm can achieve about 44.7% complexity reduction on average with only 1.35% BD-rate increase under the “random access” configuration, and 41.9% time saving with 1.66% BD-rate increase under the “low delay” setting, compared with the HEVC reference software.
Introduction
High definition (HD) and ultra-high definition (UHD) video contents have become increasingly popular worldwide, thus the demand of video compression technologies that can provide higher coding efficiency over HD/UHD videos can be envisioned in near future. In view of this, high efficiency video coding (HEVC) standard is being developed by the Joint Collaborative Team on Video Coding [1], which is established by the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. HEVC outperforms H.264/AVC high profile by 35-43% bitrate reduction at the same reconstructed video quality [2]. HEVC inherits the well-known block-based hybrid coding scheme [3] used by previous coding standards, e.g., H.264/AVC, and extends the framework by introducing highly flexible quad-tree coding block partitioning. The quad-tree coding block partitioning consists of newly brought concepts of coding unit (CU), prediction unit (PU), and transform unit (TU). CU is the basic unit of region splitting used for inter/intra coding, which extends the traditional concept of macroblock (MB) based on a hierarchical structure with block size varying from 64 × 64 to 8 × 8 pixels. A CU is allowed to recursively be split into four smaller CUs of equal size. In this manner, a picture is represented by a content-adaptive coding tree structure comprised of CU blocks with different sizes. PU is the basic unit used for prediction process in a rectangular shape. One PU can be encoded with one of the modes in candidate set, which is similar to MB mode of H.264/AVC in spirit. The pixels in one PU share prediction information, e.g., modes, motion vectors (MV), and reference index. TU is the basic unit for transform and quantization. TU is defined in a similar way as CU, and its size varies from 4 × 4 to 32 × 32. As reported in [4,5], the flexible data structure representation (extending the MB size up to 64 × 64) introduced over 10% bitrate saving in comparison with the 16 × 16-based configuration in H.264/AVC, since the flexibility of block partitioning can effectively deal with the diversity of picture content.
However, the flexibility of block partitioning of HEVC imposes significant computational burden on encoder during seeking of the optimal combinations of CU, PU, and TU sizes. Thus, it is crucial for practical implementation of the new standard to reduce the complexity while maintaining the coding performance. Researches on accelerating the encoder of HEVC test model (HM) are emerging. A fast intra mode decision algorithm [6] was proposed, which made use of the direction information of the neighboring blocks to reduce the number of directions taking part in rate distortion optimization (RDO) process. To reduce the computational complexity of TU size selection, a fast algorithm for residual quadtree mode decision was proposed in [7]. Besides, the depth-first decision process for TU size selection in HM was replaced by a merge-and-split decision process, which also reduces unnecessary computation by using the inheritance property of zero-blocks and early termination schemes for non-zero blocks.
In this article, we focus on CU size selection for HEVC. A content-based fast CU decision algorithm was developed for HEVC TMuC (test model under consideration) [8], which analyzed the ratio of utilized CUs to total number of CUs in different depth in frame level and skipped the rarely used CUs with specified depths. Information of neighboring and co-located CUs was used to skip CUs in unnecessary depth in CU level. The algorithm investigated temporal and spatial correlations of CU depth, and designed different thresholds to control the number of CU depths to be evaluated. However, the correlations were data dependent and the ratio was affected by encoding configurations, such as the hierarchical depth in hierarchical prediction structure. Spatial correlation of CU depth as well as the probability that neighboring CUs were SKIP mode was considered in [9] to design an adaptive weighting factor, which was used to adjust the threshold in early terminating the following RD calculations of the current CU. In [10], a method for complexity controlling was proposed by limiting the number of coding decision tests and comparisons according to temporal correlations. All these related works explored the spatial correlations and/or temporal correlations of CU depth to eliminate specific CU depths with a trivial impact on RD performance. However, they were not robust enough due to diversity of the content. It is necessary to consider more statistics so as to get a more accurate and stable model to simplify the CU splitting.
In the field of accelerating the encoder of H.264/AVC as well as its extensions, various properties were investigated and employed to simplify mode decision. A nearly sufficient condition for early zero-block detection is constructed based on the analysis of prediction error to speed up the motion estimation of H.264/AVC JM reference software in [11]. It indicated that prediction error offered a valuable clue about encoder acceleration. Spatial and temporal correlations were exploited to predict the skip mode [12] to reduce encoder complexity. In [13,14], distribution of MV in an MB was chosen as a feature to predict the optimal mode other than performing exhaustive search over all modes. A hierarchical algorithm proposed in [15] categorized all type of modes into three levels which were triggered on by evaluating SAD (which is between current MB and its co-located MB), high-frequency energy in DCT domain, and RD cost of mode P-8 × 8. In [16], a fast mode decision algorithm named motion activity-based mode decision was proposed. It classified MBs into different classes by predefined thresholds and motion activity. Each class corresponded to different number of modes to be checked. Tiesong et al. [17] projected encoding modes onto a 2D map and an optimal 2D map was predicted using spatial and temporal information. Then, a priority-based mode candidate list was constructed based on the optimal 2D map and mode decision was performed starting with the most important mode in the candidate list with early termination conditions. In such a way, the number of modes to be evaluated was reduced and acceleration was achieved. Changsung and Kuo [18] presented a featurebased fast inter/intra mode decision algorithm. This algorithm computed three features regarding spatial and temporal correlations with which to determine inter or intra mode to use. The feature space were partitioned into three regions, i.e., risk-free, risk-tolerable, and riskintolerable regions by checking the RD loss due to wrong mode decision and the probability distribution of inter/intra modes. Depending on the region, mechanisms with different complexity were applied for final mode decision. Martinez-Enriquze et al. [19] analyzed the conditional pdfs for every mode and estimated the RD cost to decide the optimal mode. A fast stereo video encoding algorithm based on hierarchical two-stage neural network was proposed in [20]. Local properties of input data and predicted error were extracted as the input feature to train a neural network which was designed to predict the optimal partition mode. SVM were also introduced in the study of fast mode decision [21,22]. However, MBs were treated equally in the classification problem, and the RD performance of an MB was ignored. In general, these works exploited various mode-related features to predict the optimal mode or reduce the number of modes to be evaluated. The features included spatial and temporal correlations, the gradient or high-frequency energy, the RD cost of specific mode, motion activity, and local properties, such as the prediction error or SAD/sum of absolute transformed differences (SATD).
As shown in the previous researches, CU size selection process applying RD optimization can be unacceptably time-consuming for practical implementation, which will be further analyzed in Section 2. To solve this problem, we propose a method utilizing machine learning to accelerate the CU size selection process. With properly modeling the problem and applying machine learning algorithm, our method can accurately predict the optimal decision on CU splitting instead of exhaustive searching over all possibilities. In order to derive a more accurate model to predict the CU splitting decision, RD difference is introduced as weights in the SVM training procedure to alleviate the RD performance degradation due to misclassification. Furthermore, various features are extracted from input video as well as earlier encoded data and an optimal feature subset is derived by a wrapper feature selection algorithm.
The rest of the article is organized as follows. We briefly go through CU size selection process of HM, and present the motivation of the proposed algorithm in Section 2. In Section 3, we elaborate the modeling of the CU splitting problem and its solution based on a machine learning algorithm, i.e., SVM. Experimental results in Section 4 demonstrate the effectiveness of the proposed algorithm, and Section 5 concludes the article.
CU size optimization in HM
To adapt to the diversity of picture content, flexible quad-tree coding block partitioning is adopted into HEVC, which enables the use of CU, PU, and TU. The concept of CU is analogous to the MB in previous standards, e.g., H.264/AVC. It is the basic unit for intra/inter coding and is always square in shape. Pictures are divided into many largest CUs (LCUs), and each LCU can be split into four equal-sized CUs which can be further recursively split up to the maximal allowable hierarchical depth. In such a manner, the LCU is constructed as a quad-tree of CU(s) with different sizes, as shown in Figure 1. At a leaf node of the quad-tree, the CU can be encoded in SKIP, inter, or intra mode. The partitioning size of SKIP mode is 2N × 2N, which means that the PU size of SKIP mode equals the CU size; the CU encoded in inter mode can be treated as one PU or partitioned into several PUs, which is specified by the partitioning mode: Part_2N × 2N, Part_2N × N, Part_N × 2N, Part_N × N, Part_2N × nU, Part_2N × nD, Part_nL × 2N, and Part_nR × 2N; and the CU in intra mode can be treated as one PU with size of 2N × 2N, or partitioned into four N × N PUs. A simple example of PUs in one CU is shown in Figure 1, as highlighted by the green square. The PU corresponding to a given partition size is the basic unit that carries the prediction information. In order to match the boundaries of real objects in a picture, the shape of a PU is not restricted to being square, e.g., 2N × N is allowed. TU is defined for the transform and quantization process. The shape of a TU depends on the PU. When the PU is square, the TU is also square and its size varies from 4 × 4 to 32 × 32 luma samples. When the PU is non-square, the TU is also non-square and takes a size of 32 × 8, 8 × 32, 16 × 4, or 4 × 16 luma samples. One CU may contain one or more PUs. Likewise, one CU may contain one or more TUs, which are arranged in a quad-tree structure as shown in Figure 1.
As explained in the previous paragraph, one LCU can be coded into a rather complex quad-tree to adapt to various video contents. Furthermore, CUs with different depths may be coded in different prediction modes, different partitioning modes, and different transform sizes. To derive the optimal CU-level coding parameters, an exhaustive search method is employed by evaluating the RD costs of all possible combinations of CU size, PU size, and TU size. The RDO of CU size is illustrated in Figure 2. It needs a total of 85 RD calculations when the CU size varies from 64 × 64 to 8 × 8. Obviously, such an RD-based optimization method introduces significant complexity on the encoder. Actually, it is unnecessary to do an exhaustive search over all possible CU sizes, since there exist some CU sizes that do not result in much rate distortion improvement, and it is possible to accelerate the encoder by early terminating the CU splitting decision process. As shown in Figure 3, "flat" or "homogenous" regions, e.g., the floor, are more likely to be encoded in large CUs. Areas containing moving objects or object boundaries, e.g., the net and the basketball, are usually split into small CUs. Motivated by this observation, we model the CU splitting decision as a binary classification problem.
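To make the cost of this exhaustive search explicit, the toy Python sketch below walks the CU quad-tree recursively; rd_cost_of_modes is a dummy stand-in for the full SKIP/inter/intra PU and TU search performed by HM, and the recursion indeed visits 1 + 4 + 16 + 64 = 85 CUs per 64 × 64 LCU.

    # Minimal sketch (toy model): exhaustive quad-tree CU size decision.
    def rd_cost_of_modes(block):
        # Dummy cost; stands in for the real RD cost J = D + lambda * R of the best mode.
        return float(sum(sum(row) for row in block))

    def split_into_four(block):
        n = len(block) // 2
        return [[row[:n] for row in block[:n]], [row[n:] for row in block[:n]],
                [row[:n] for row in block[n:]], [row[n:] for row in block[n:]]]

    def best_cu_cost(block, depth=0, max_depth=3):
        cost_one = rd_cost_of_modes(block)               # encode the block as a single CU
        if depth == max_depth:                           # 8x8 CUs cannot be split further
            return cost_one
        cost_four = sum(best_cu_cost(sub, depth + 1, max_depth)
                        for sub in split_into_four(block))
        return min(cost_one, cost_four)                  # keep the cheaper alternative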
Problem formulation
As the flexible representation of coding data introduces a heavy burden on the encoder, we propose to early terminate CU splitting to avoid unnecessary trials. We model CU splitting as a binary classification problem (i.e., a CU that is not split into four sub-parts is assigned the label +1, otherwise −1) and tackle the classification problem with an SVM [23]. As a widely used machine learning algorithm, the SVM is based on the idea of structural risk minimization (SRM) and has successfully been applied to a number of real-world problems, such as face recognition, text categorization, and object detection in machine vision. The main idea behind the SVM is to derive a unique separating hyperplane that maximizes the margin between two classes. Given l training data points

    {(x_i, y_i)}, i = 1, …, l, y_i ∈ {+1, −1},   (1)

where {x_i, y_i} is the ith training sample, i.e., the ith CU, x_i is the input feature vector and y_i is the class label indicating whether the CU is split or not. The membership decision rule is based on the function

    y = sign(f(x)),   (2)

where f(x) represents the discriminant function associated with the hyperplane,
    f(x) = w^T ϕ(x) + b,

where ϕ(·) is a nonlinear operator that maps the input x_i into a higher-dimensional space, in which the inner product defines the kernel function K(x_i, x_j) = ϕ(x_i)^T ϕ(x_j).
Mathematically, this hyperplane can be constructed by minimizing the cost function (1/2)||w||² subject to the constraints y_i(w^T ϕ(x_i) + b) ≥ 1, i = 1, …, l. For a non-separable case, the classification problem is generalized by introducing slack variables ξ_i and a user-defined regularization parameter C. Then the classification problem is to minimize the quantity

    (1/2)||w||² + C Σ_{i=1}^{l} ξ_i   (5)

subject to

    y_i(w^T ϕ(x_i) + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, …, l.   (6)

The modified cost function in Equation (5) is the so-called structural risk, which balances the empirical risk (i.e., the training errors reflected by the second term) with model complexity (the first term) [24]. It has been proven that the solution to the optimization problem of Equation (5) under the constraint of Equation (6) is given by the saddle point of the Lagrange function

    L(w, b, ξ; α, β) = (1/2)||w||² + C Σ_i ξ_i − Σ_i α_i [y_i(w^T ϕ(x_i) + b) − 1 + ξ_i] − Σ_i β_i ξ_i,

where α_i and β_i are Lagrange multipliers associated with the constraints in Equation (6).
The Lagrange multipliers are solved by maximizing

    W(α) = Σ_{i=1}^{l} α_i − (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j K(x_i, x_j)

subject to

    0 ≤ α_i ≤ C,   Σ_{i=1}^{l} α_i y_i = 0.

The decision function can equivalently be expressed as

    f(x) = Σ_{i=1}^{l} α_i y_i K(x_i, x) + b.   (10)

It is obvious from Equation (10) that the α_i associated with training point x_i expresses the strength with which that point is embedded in the final decision function. Notice that the nonlinear mapping ϕ(·) never appears explicitly in the training or the decision; only the kernel function is needed. In general, the kernel takes the form of a linear, polynomial, radial basis function (RBF), or sigmoid kernel. In this article, we use the RBF kernel, since it can handle the case when the relation between class labels and the input vector is nonlinear as well as linear. Furthermore, the model complexity of the RBF kernel is lower than that of the polynomial kernel, and the RBF kernel has fewer numerical difficulties [25].
Proposed CU splitting early termination algorithm
The proposed CU splitting early termination algorithm is shown in Figure 4. At each CU depth, the encoder first performs rate and distortion calculation of SKIP mode and inter mode with Part_2N × 2N (denoted as inter 2N × 2N mode thereafter), meanwhile extracts required features, i.e., input vector x of SVM during the evaluation procedure. Then, an offline trained SVM CU splitting model is loaded, which predicts the class label of the current CU according to the extracted input features. Based on the predicted class label, the encoder will decide whether to perform RD trials on CU splitting. The off-line trained SVM model is optimized based on SVM procedure with weighting on training samples. The weights are proposed as the difference of RD costs due to misclassifications. It is obvious that as long as the CU splitting predictor is accurate, early terminating RD trials on CU splitting can reduce a lot of computational complexity while maintaining RD performance.
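The flow just described amounts to adding one classifier query to the recursion sketched earlier; in the toy continuation below, toy_predictor is a placeholder for the offline-trained, depth-specific SVM models, the feature vector is reduced to a single value, and the full PU/TU search is again represented by the dummy rd_cost_of_modes.

    # Minimal sketch (toy model): CU splitting early termination, reusing the helpers above.
    def toy_predictor(depth, features):
        return +1 if features[0] < 10.0 else -1          # +1 = "do not split" (placeholder rule)

    def best_cu_cost_early_term(block, depth=0, max_depth=3):
        cost_one = rd_cost_of_modes(block)               # SKIP and inter 2Nx2N evaluated first
        features = [cost_one]                            # stand-in for the real feature vector x
        if depth == max_depth:
            return cost_one
        if toy_predictor(depth, features) == +1:         # classifier says: keep this CU whole
            return cost_one                              # RD trials on the four sub-CUs are skipped
        cost_four = sum(best_cu_cost_early_term(sub, depth + 1, max_depth)
                        for sub in split_into_four(block))
        return min(cost_one, cost_four)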
Off-line training and weights generation
In the field of machine learning, accuracy is one of the most important measurements for classification algorithms. However, in this scenario, not only the ratio of correct classification, but also the loss of RD performance introduced by misclassifications is important.
There exist some CUs that the RD cost difference between four sub-CUs coding and one CU coding are almost the same. Misclassification of such CUs results in negligible RD degradation. On the contrary, for CUs that four sub-CUs coding outperforms one CU coding greatly, misclassification does lead to much RD loss. Obviously, different CUs are of different importance. It is improper to treat samples with different RD performance equally in the training process, and the optimal hyperplane will be deviated by those "unimportant" samples, i.e., these samples are outliers. The desired SVM predictor should predict class label as accurate as possible and keep RD loss as low as possible. Based on this observation, we suggest introducing weights into the SVM training process, i.e., assigning different weights to training samples.
The weights are defined as the percentage by which the RD cost would increase if the CU were misclassified, i.e., the relative difference between C_i(s) and C_i(n) normalized by the RD cost of the actually optimal decision, where C_i(s) and C_i(n) are the RD cost of splitting the CU into four sub-CUs and the RD cost of the non-split CU, respectively. A CU with little difference in RD cost is assigned a small weight, while a CU with a large difference in RD cost is assigned a large weight. Note that the weights are only needed in the training procedure, and are not needed anymore when the trained model is used to predict the class label in the encoding process. Then the cost of the standard SVM optimization problem in Equation (5) becomes

    (1/2)||w||² + C Σ_{i=1}^{l} W_i ξ_i,

and in the solution of the problem the Lagrange multipliers are subject to

    0 ≤ α_i ≤ C·W_i,   Σ_{i=1}^{l} α_i y_i = 0.

The upper bounds of α_i are thus given by the dynamical boundaries C·W_i instead of a constant value C. Then the CUs with a larger difference between the cost of encoding into one CU and into four sub-CUs will affect the optimal hyperplane more, through their larger weight W_i.
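A minimal sketch of this weighted training with scikit-learn is given below; the feature matrix, labels and RD costs are synthetic placeholders, and the per-sample weights W_i passed through sample_weight rescale the box constraint to C·W_i as described above.

    # Minimal sketch (synthetic data): RBF-kernel SVM with per-sample weights.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                                    # placeholder feature vectors
    y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)    # +1 = "do not split"

    # Placeholder weights: relative RD-cost difference between split and non-split coding.
    rd_split = np.abs(rng.normal(size=200)) + 1.0
    rd_nonsplit = np.abs(rng.normal(size=200)) + 1.0
    W = np.abs(rd_split - rd_nonsplit) / np.minimum(rd_split, rd_nonsplit)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y, sample_weight=W)                   # effective upper bound on alpha_i is C*W_i

    print(clf.predict(X[:5]))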
Feature selection
We introduce several representative features related to CU splitting. Selecting effective and relevant features is crucial for classification. Good features help reduce training time as well as utilization time, defy the curse of dimensionality to improve prediction performance, and reduce storage requirements [26]. To select the features that are useful to build a good SVM predictor, there are usually two types of feature selection approaches, filter and wrapper approaches. In this article, we suggest using a wrapper method based on F-score [27]. Filter methods based on correlation or mutual information ranking [21] are easy to implement; however, selecting the most relevant variables is usually suboptimal for building a predictor, particularly if the variables are redundant. A wrapper method assesses a subset of features according to their usefulness to a given predictor, which is better in this scenario. However, the number of subsets becomes extremely large as the number of features increases, and thus an exhaustive search is impractical. Therefore, we propose to rank all features first by F-score and perform a greedy search based on the ranked results. The F-score, as defined in Equation (16), is a simple metric that measures the discrimination of two sets of real numbers.
    F(i) = [ (x̄_i^(+) − x̄_i)² + (x̄_i^(−) − x̄_i)² ] / [ (1/(n_+ − 1)) Σ_{k=1}^{n_+} (x_{k,i}^(+) − x̄_i^(+))² + (1/(n_− − 1)) Σ_{k=1}^{n_−} (x_{k,i}^(−) − x̄_i^(−))² ],   (16)

where x̄_i, x̄_i^(+), and x̄_i^(−) are the averages of the ith feature of the input vector x over the whole, positive, and negative training samples, respectively; x_{k,i}^(+) is the ith feature of the kth positive sample and x_{k,i}^(−) is the ith feature of the kth negative sample; n_+ and n_− are the total numbers of positive and negative training samples. The larger the F-score, the more discriminative the feature is likely to be. The F-score is easy to calculate and couples naturally with the SVM training process. The procedure of the wrapper approach is summarized in the following four steps: (1) Collect training samples by running the HEVC reference software HM6.0. (2) Calculate the F-score of every feature in the training set and sort the features in descending order according to F-score. (3) Following the ranked order, grow the candidate feature subset greedily and train an SVM for each candidate subset, recording its cross-validation (CV) accuracy. (4) Select the feature subset that offers the best balance between CV accuracy and subset size. To set up a rich feature set, diverse features are introduced and evaluated. Furthermore, it is possible to eliminate the dependency on video content by considering as many features as possible and then optimizing the feature subset. The features we consider as potential candidates are summarized as follows.
Prediction error-related features, such as SATD and CBF, denoted as x_std, x_vrs, and x_cbf. x_std is defined as the SATD between the prediction and the original pixel values, and x_vrs is the variance of the four sub-block SATDs. x_cbf is the coded block flag (CBF) of the inter 2N × 2N mode. CBF indicates the complexity of the prediction error under specific quantization parameters (QP). As discussed in [11][12][13][14][15], these features are correlated with CU partitioning.

CU depth information of the context [8], denoted as x_sl, x_sa, and x_tp. x_sl and x_sa are the CU depths of the left-neighboring and above-neighboring CU, respectively. x_tp is the CU depth of the co-located CU. Since there is substantial correlation in the spatial and temporal domains of the video signal, such context provides very good information.

Gradient magnitude of the current CU [18], denoted as x_gm. It is the summation of the gradient of every pixel in the current CU obtained by applying the Sobel operator, which reveals the flatness of the CU.

Motion consistency-related feature [13,14], denoted as x_mc, which is defined as the variance of the MVs of the four sub-blocks in inter N × N mode. Regions with inconsistent motion activities are more likely to be encoded in small CUs.

RD cost difference between SKIP and inter 2N × 2N mode, denoted as x_drc. If the SKIP mode is better than inter 2N × 2N, the CU is likely to be background and it may not be necessary to partition the CU into smaller ones. On the contrary, if inter 2N × 2N mode is better, it may be better to apply a smaller partition mode or a smaller CU size.

Side information in RD cost, denoted as x_si. Small-size motion partitions provide good RD performance for blocks with high motion activity or rich content. However, more bits must be paid to signal the side information. Therefore, the percentage of side information in the total RD cost of inter 2N × 2N mode gives a good indication of the optimal CU size.

Hierarchical structure-related feature, denoted as x_hrc. For the hierarchical prediction structure in HEVC, a small CU size is preferred for frames with low temporal depth and a large CU size is more likely to be optimal for frames with high temporal depth.
All the above-mentioned candidate features are evaluated and an effective feature subset is formed by the proposed wrapper approach based on F-score. The experimental results on feature selection are presented. Although some of the features are correlated, the wrapper method can select the features that are useful to the predictor regardless of correlation, as discussed in [26]. The video sequences we use in feature selection are "Cactus", "BQMall", and "FourPeople", and the training samples are collected by running HM6.0 [28] under common test conditions. Table 1 presents the F-scores of the different features at different CU depths. The CBF information x_cbf and the side information in RD cost x_si exhibit relatively high F-scores and give good information about CU splitting. In contrast, the F-score of x_hrc is rather low and therefore it is excluded from the input vector in the feature selection. Table 2 presents the feature subsets considered in the selection procedure and their corresponding CV accuracy. The CV accuracy is nearly the same when the number of features is greater than five. However, it takes more time to extract the features and the SVM predictor becomes more complex as the number of features rises. It is a good choice to set the number of features to five, as shown in Table 2, considering the balance between accuracy and the additional complexity introduced by feature extraction and the SVM predictor. The optimized feature subsets are [x_cbf, x_si, x_tp, x_drc, x_std], [x_cbf, x_si, x_tp, x_drc, x_std], and [x_cbf, x_si, x_tp, x_gm, x_std] for CU depth zero (CU 64 × 64), one (CU 32 × 32), and two (CU 16 × 16), respectively. Since the optimal feature subsets are different for different CU depths, the proposed CU splitting early termination models are trained separately for each CU depth. The overhead introduced by feature extraction is almost negligible, since most of the features can be derived when calculating the RD cost of the SKIP and inter 2N × 2N modes.
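The F-score of Eq. (16) and the subsequent ranking can be computed with a few lines of NumPy, as sketched below; the random matrices merely stand in for the extracted CU features and their class labels.

    # Minimal sketch: F-score of Eq. (16) per feature, then ranking in descending order.
    import numpy as np

    def f_scores(X_pos, X_neg):
        """X_pos, X_neg: (n_pos, d) and (n_neg, d) arrays of positive/negative samples."""
        X_all = np.vstack([X_pos, X_neg])
        mean_all, mean_pos, mean_neg = X_all.mean(0), X_pos.mean(0), X_neg.mean(0)
        numer = (mean_pos - mean_all) ** 2 + (mean_neg - mean_all) ** 2
        denom = X_pos.var(0, ddof=1) + X_neg.var(0, ddof=1)   # within-class scatter terms
        return numer / denom

    rng = np.random.default_rng(0)
    X_pos = rng.normal(loc=0.5, size=(100, 9))        # placeholder "non-split" samples, 9 features
    X_neg = rng.normal(loc=0.0, size=(120, 9))        # placeholder "split" samples

    scores = f_scores(X_pos, X_neg)
    ranking = np.argsort(scores)[::-1]                # feature indices, most discriminative first
    print(ranking, scores[ranking])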
Experimental results on the proposed CU splitting early termination algorithm
To verify the efficiency of the proposed CU splitting early termination algorithm, we conduct comprehensive experiments by comparing the proposed algorithm with HEVC reference software HM6.0. The encoding configuration exactly follows what is recommended in [29] and the test sequences in the experiments cover a variety of content. The sequences we use to train the SVM predictor model are "Cactus", "BQMall", and "FourPeople", denoted as TS1 (training set 1) and they are not used in performance comparison anymore. The offline training process is carried out by the SVM training software [30] and the proposed CU early termination algorithm is incorporated into HEVC reference software HM6.0.
To evaluate the performance of the proposed algorithm, two metrics are used in Tables 3 and 4: the average BD-rate (BDBR) [31] difference between the proposed algorithm and HM6.0, and the time reduction ratio, which is defined as

    ΔT = (T_HM − T_p) / T_HM × 100%,

where T_HM and T_p are the total encoding times of the HM6.0 encoder and the proposed encoder, respectively. The actual encoding time is measured on a workstation with a 2.93-GHz processor and 8 GB of RAM. In Tables 3 and 4, we present the RD performance and the computational complexity of the proposed algorithm and the anchor under the "Random Access, main" and "Low Delay, main" configurations. Regarding complexity, the proposed algorithm achieves a maximum of 73.7% running-time reduction with respect to HM6.0, with an average of 44.7%, under the "Random Access, main" configuration, as shown in Tables 3 and 4. In Table 3, the column "ΔT" is the average ΔT over 4 QP points. Concerning the RD performance, it loses 1.35% in terms of BD-rate on average, with a worst case of 1.8% for the sequence "Traffic". The RD loss is not significant. For the "Low Delay, main" configuration, as shown in Tables 3 and 4, the proposed algorithm behaves very similarly to the "Random Access, main" case, and it reduces the complexity by 41.9% with 1.66% BD-rate loss on average. In Table 4, part of the experimental results under different QPs is listed. As can be seen from it, more complexity reduction is achieved in the low bitrate scenario (i.e., using high QP values). In such cases, larger CUs are more efficient in RD performance than smaller CUs, and large CUs take a high percentage. The proposed algorithm accurately early terminates the RDO procedures on large CU sizes and avoids unnecessary RD calculations on small CU sizes. Therefore, greater complexity reduction can be achieved in the low bitrate case than in the high bitrate case.
To verify that different training set will not affect the performance of the proposed algorithm, additional experiment is conducted. Three different sequences ("ParkScene", "BasketballDrill", and "Johnny", denoted as TS2) are used to train the offline model which is to be used in the encoding process. The encoding configurations are the same as the previous experiments. The metrics used in Table 5 are the same with that in Table 3. As shown in Table 5, similar RD performance and complexity reduction are derived using a different training set.
Both the weighted SVM training algorithm and the wrapper feature selection algorithm have been designed to provide the ability to generalize. First of all, the weighted SVM is based on SRM principle as opposed to traditional empirical risk minimization principle employed by conventional learning algorithms. SRM minimizes an upper bound on the expected risk, which equips the SVM with great ability to generalize. Introducing RD difference as weights eliminates the influence of outliers. In other words, those training samples with little RD performance degradation due to misclassification are "almost excluded" by assigning small weights and more attention is paid to "important" samples. Second, large number of relevant features are evaluated and assessed. Diversity of features lowers the opportunity of dependence on training set. The feature selection algorithm chooses optimal feature subset based on CV error to ensure that the optimal subset is not dependent on a specific training set. Therefore, the algorithm performs stably.
Additional overhead of SVM classification
SVM classification imposes additional computational complexity on the encoder. Some experiments are conducted to investigate the overhead. Table 6 presents the total time to predict class labels in the column "Total SVM" and the total time to encode the sequences with the proposed algorithm in the column "Encode Time". As shown in the column "percentage", the computational overhead is not critical, less than 5%, especially in the low bitrate cases. It costs a little more time to predict the class labels of CU 16 × 16, as there are more 16 × 16 CUs.
Conclusion
In this article, a CU splitting early termination algorithm is proposed. The CU splitting optimization in HEVC is formulated as a binary classification problem and is solved by support vector classification. In order to maintain the RD performance of the CU splitting early termination algorithm, the RD loss due to misclassification is introduced as a weighting factor of training samples in the offline training procedure, with which the training method pays special attention to CUs which are prone to degrade RD performance when using a suboptimal partition. Furthermore, diverse features are considered, such as the correlation between CUs in both the spatial and temporal domains, prediction errors, motion activities, and RD cost of modes. To select the optimal feature subset, a wrapper feature selection approach is carried out. It embeds the model training into the selection process, and a simple greedy search is performed based on F-score ranking. In such a way, the proposed algorithm performs well and stably across different configurations and various video contents. Since the CU splitting early termination model is trained offline and the optimal feature subset is small, the proposed algorithm is computationally simple. Demonstrated by the experimental results, the proposed algorithm can achieve 44.7% reduction in computational complexity with 1.35% BD-rate increase in the "Random Access, main" configuration and 41.9% complexity reduction with 1.66% BD-rate increase in the "Low Delay, main" configuration. | 2017-08-08T12:32:49.489Z | 2013-01-09T00:00:00.000 | {
"year": 2013,
"sha1": "273fb3d683acc4c1f842e934a0cc598126c1413e",
"oa_license": "CCBY",
"oa_url": "https://jivp-eurasipjournals.springeropen.com/track/pdf/10.1186/1687-5281-2013-4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "278c935fee1883754387657b28c79c17eb9c4797",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
546119 | pes2o/s2orc | v3-fos-license | Identification of Five Serum Protein Markers for Detection of Ovarian Cancer by Antibody Arrays
Background Protein and antibody arrays have emerged as a promising technology to study protein expression and protein function in a high-throughput manner. These arrays also represent a new opportunity to profile protein expression levels in cancer patients’ samples and to identify useful biosignatures for clinical diagnosis, disease classification, prediction, drug development and patient care. We applied antibody arrays to discover a panel of proteins which may serve as biomarkers to distinguish between patients with ovarian cancer and normal controls. Methodology/Principal Findings Using a case-control study design of 34 ovarian cancer patients and 53 age-matched healthy controls, we profiled the expression levels of 174 proteins using antibody array technology and determined the CA125 level using ELISA. The expression levels of those proteins were analyzed using 3 discriminant methods, including artificial neural network, classification tree and split-point score analysis. A panel of 5 serum protein markers (MSP-alpha, TIMP-4, PDGF-R alpha, and OPG and CA125) was identified, which could effectively detect ovarian cancer with high specificity (95%) and high sensitivity (100%), with AUC =0.98, while CA125 alone had an AUC of 0.87. Conclusions/Significance Our pilot study has shown the promising set of 5 serum markers for ovarian cancer detection.
Introduction
Ovarian cancer represents the third most frequent cancer and is one of the leading causes of cancer death among females in the United States and Europe [1][2][3]. Most symptoms of ovarian cancer are vague and similar to those often experienced with more common, non-life-threatening health conditions; these might include abdominal swelling or bloating, pelvic pain or discomfort, lower back pain, loss of appetite or feeling full quickly, persistent indigestion, gas or nausea and changes in bowel or bladder habits. As a result, almost 80% of ovarian cancer patients are diagnosed at later stages.
Unfortunately, the 5-year survival rate for patients with clinically advanced ovarian cancer is only 15% to 20%, in striking contrast to a 5-year survival rate of over 90% for patients with stage I disease. Therefore, it is urgent to discover and develop biomarkers for ovarian cancer screening and early detection.
Currently, CA-125 and imaging are the 2 most common approaches for ovarian cancer screening tests. However, these 2 markers, either used alone or in combination, are not useful screening or diagnostic purposes due to low specificity and/or sensitivity. For example, serum CA-125 has been shown to have a sensitivity of >98% but a specificity of only 50-60% for early-stage disease [4][5][6].
Multiple studies have been reported to identify serum ovarian cancer biomarkers using multiplex antibody array technology [7][8][9]. Dr. Lokshin's group identified a group of 6 serum protein markers, including interleukin-6 (IL-6), interleukin-8 (IL-8), epidermal growth factor (EGF), vascular endothelial growth factor (VEGF), monocyte chemoattractant protein-1 (MCP-1), and CA-125, which displayed significant difference in serum concentrations between ovarian cancer and control groups with 84% sensitivity at 95% specificity [7]. Dr Gil Mor's group identified a panel of 6 biomarkers, CA-125, osteopontin (OPN), insulin-like growth factor 2 (IGF-II), macrophage migration inhibitory factor (MIF), leptin and prolactin, which demonstrated a sensitivity of 95.3% and a specificity of 99.4% for the detection of ovarian cancer [8]. Using human biotin-based antibody arrays, we screened the serum expression profiles of 507 proteins in serum samples from 47 patients with ovarian cancers, 33 patients with benign ovarian masses and 39 healthy, age-matched controls and identified significant differences in protein expression between normal controls and patients with ovarian cancer (P<0.05). By classification analysis and split-point score analysis of these 2 groups, a 6-marker panel of proteins, which consisted of interleukin-2 receptor alpha (IL2Rα), endothelin, osteoprotegerin (OPG), vascular endothelial growth factor D (VEGF-D) and betacellulin (BTC), can be used to distinguish ovarian cancer patients from normal subjects [9]. These studies strongly suggest that antibody array technology has shown great promise in the discovery and development of serum ovarian cancer biomarker profiles and strongly suggest that serum cytokine panels may be useful as biomarkers for early detection of ovarian cancers.
In this study, we used our 174-marker, sandwich ELISAbased antibody array panels to screen serum samples from 34 ovarian cancer patients and 53 normal healthy subjects in order to identify a serum protein marker panel for detection of ovarian cancer.
Results
Validation of 174-marker semi-quantitative cytokine arrays (Figures 1, 2)

In this study, we applied antibody array technology to determine the expression profiles of 174 cytokines in the serum from ovarian cancer patients and age-matched healthy normal controls. Cytokines in this study included anti-inflammatory cytokines, proinflammatory cytokines, growth factors, angiogenic factors or chemotactic cytokines, among others. Some of these cytokines are reportedly altered in ovarian cancer patients based on our own studies and the literature, but our broad screen of 174 proteins also included many other types of markers as part of an "unbiased" approach of using high-content, high-throughput cytokine antibody arrays to profile the cytokine levels in ovarian cancer patients' serum with the goal of identifying potential diagnostic biomarkers.

Panel A (left) shows strong intra-assay correlation (same sample assayed on the same glass slide, tested on the same day); Panel B (middle) shows strong inter-assay correlation (same sample assayed on different glass slides, tested on different days); Panel C (right) shows poor correlation between cancer and normal samples assayed on the same glass slides, tested on the same day.
First, we further determined the reproducibility of the assay in the analysis of human serum using scatter-plot analysis. Intra-slide reproducibility for the glass-slide-based arrays was assessed by testing replicate aliquots of the same samples with two sub-arrays printed on the same slide and assayed at the same time. Inter-slide reproducibility was determined by assaying duplicate aliquots of the same samples on two different slides printed with the same arrays on two different days. The Pearson correlation coefficients for intra-slide and inter-slide reproducibility were 0.923 (P<0.001) and 0.899 (P<0.001), respectively, suggesting high reproducibility of the assay. In contrast, the Pearson correlation coefficient for cancer vs. normal samples was 0.226 (P<0.005), suggesting that the cancer samples and normal samples are from two different populations.
Next, serum samples from a total of 34 ovarian cancer patients and 53 healthy controls were assayed for expression levels of 174 cytokines with the goal of discovering new diagnostic markers for ovarian cancer. These serum samples were mainly obtained from our collaborators and were age- and sex-matched (Table 1). Human Cytokine Antibody Arrays were used to profile expression patterns for 174 cytokines in all 87 patients' serum samples. The signal intensity is proportional to the expression level of an individual protein in each sample. The array data were then normalized based on the average positive control signal intensity of each array. The median signal intensities of every spot were then corrected for local background. To establish a signal threshold, the signal intensity cut-off value was determined by ±2 SD of 10 buffer blank control signal intensities, where the arrays were incubated with blocking buffer instead of patients' serum samples. Any values exceeding the signal threshold were considered real signals (i.e., a positive detection of the cytokine). Values lower than the signal cut-off were assigned a value of 1. If the measured signal intensity values from all samples for a particular cytokine were 1, that cytokine was removed from the list for further analysis.
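The normalization and thresholding just described map onto a few NumPy operations; in the sketch below the array dimensions and values are illustrative placeholders, and the blank-based cut-off is taken as the blank mean plus 2 SD, one plausible reading of the ±2 SD rule above.

    # Minimal sketch (placeholder data): positive-control normalization, background
    # subtraction, and blank-based signal thresholding of antibody-array intensities.
    import numpy as np

    rng = np.random.default_rng(0)
    raw = rng.uniform(100, 5000, size=(87, 174))       # samples x cytokines
    background = rng.uniform(50, 150, size=(87, 174))  # local background per spot
    pos_ctrl = rng.uniform(2000, 3000, size=87)        # mean positive-control signal per array
    blanks = rng.uniform(80, 160, size=10)             # 10 buffer-blank control arrays

    signal = (raw - background) * (pos_ctrl.mean() / pos_ctrl)[:, None]   # normalize each array

    threshold = blanks.mean() + 2 * blanks.std(ddof=1) # detection cut-off from blank controls
    signal = np.where(signal > threshold, signal, 1.0) # values below the cut-off are set to 1

    detected = ~np.all(signal == 1.0, axis=0)          # drop cytokines never detected
    signal = signal[:, detected]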
Identification of serum protein markers by artificial neural network analysis (Figure 3)
After normalization and filtration, the data were then subjected to artificial neural network (ANN) analysis. The signal intensity data for individual patients were randomly divided into the training set (N = 51) or the prediction set (N = 36). In the prediction discovery phase, the training set was analyzed using a leave-one-out cross-validation approach. Through this analysis, a total of 8 predictors were identified. These 8 predictors were then used to predict the disease status in the prediction set. The correct agreement of the predicted disease status using the 8-marker panel with clinical diagnosis in the training set and prediction set was 82% and 80%, respectively.
Identification of 5-marker panel for detection of ovarian cancer (Figures 4 and 5)
Next, of these 8 markers, we chose 4, macrophage stimulating protein alpha (MSP-alpha), tissue inhibitor of metalloproteinases-4 (TIMP-4), platelet derived growth factor receptor alpha (PDGF-R alpha), and osteoprotegerin (OPG), for hierarchical cluster analysis using SPSS software. Using the 4-marker panel above, 83% of samples were correctly identified (95% of healthy controls and 62% of ovarian cancers). Finally, all 87 samples were analyzed using the 4 serum markers identified above plus CA125 in a split-point score analysis. Using a cutoff score of 3, 100% of ovarian cancer and 95% of healthy control samples were correctly identified, giving a total correct agreement of 96.6%.
Since CA125 is the most widely used marker for ovarian cancer, we compared the AUC of CA125 alone with that of our 5-marker panel, as determined by ROC curves. CA125 alone had an AUC of 0.87, whereas our newly identified 5-marker panel had an AUC of 0.98. Thus, our pilot study has identified a promising set of 5 serum markers for early detection of ovarian cancer.
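The AUC comparison can be reproduced in outline as follows. The text does not state how the five markers were combined into a single score for the ROC analysis; the logistic-regression combination below is an illustrative assumption, not the authors' method.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def compare_auc(ca125, panel_matrix, y):
    """Compare the AUC of CA125 alone with a combined multi-marker panel score.

    ca125        -- 1-D array of CA125 values per sample
    panel_matrix -- samples x 5 array of the five panel markers (including CA125)
    y            -- 1 for ovarian cancer, 0 for healthy control
    """
    auc_ca125 = roc_auc_score(y, ca125)
    # Combine the five markers into a single score with a simple logistic model
    combined = LogisticRegression(max_iter=1000).fit(panel_matrix, y)
    panel_score = combined.predict_proba(panel_matrix)[:, 1]
    auc_panel = roc_auc_score(y, panel_score)
    return auc_ca125, auc_panel
```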
Validation of 5-marker panel for detection of ovarian cancer with ELISA assay (Figure 6)
To confirm the multiplex detection of the array data, we performed single-target ELISA assays to quantitatively measure the expression levels of these cytokines individually, and the results were compared with the array data. The relative expression levels measured by the array and by ELISA were similar (see Figure 6). All 4 markers (MSP-alpha, TIMP-4, PDGF-R alpha, and OPG) identified by ANN analysis and split-point score analysis were confirmed with ELISA kits. Figure 6 shows representative data for two of these markers, MSP-alpha and TIMP-4.
Discussion
CA125 is one of the most important biomarkers for ovarian cancer. It is often used effectively for monitoring treatment response and detecting recurrence of ovarian cancer. However, CA125 alone is not a useful diagnostic marker for clinical application due to its low specificity; with a reference cutoff value of 35 IU/ml, CA125 showed a limited specificity of 50-60% with a sensitivity of >98% for early-stage disease [10]. Elevation of CA125 was observed in only 50% of stage I ovarian cancer patients and increased to 90% or above in stage III and IV ovarian cancer patients [11].
Owing to the complexity and heterogeneity of ovarian cancer, it is unlikely that a single biomarker will be able to detect all subtypes and stages of the disease with high specificity and sensitivity. By searching the literature and other sources, Drs. Polanski and Anderson have compiled a list of 1261 proteins believed to be differentially expressed in human cancer [12]. Among them, 260 candidate biomarkers are considered "high-priority" because they have been implicated as potential cancer markers in multiple publications and because most of them have been reported to be detectable in serum or plasma. We included many of these biomarkers in our antibody-based biomarker screening.
Cytokines, broadly defined, are a diverse group of proteins comprising chemokines, growth factors, interferons, adipokines and lymphokines, and they play many critical roles in physiological and pathological processes. It is also well known that cytokines, chemokines, growth factors, angiogenesis factors, proteases, apoptotic factors, receptors, adhesion molecules and adipokines play important roles in cancer development, progression and metastasis. Growing evidence suggests that a complex cytokine network is involved in ovarian cancer. A number of autocrine and paracrine cytokine loops have been identified in ovarian cancer and influence the biology of this tumor. Detection of expression patterns of multiple cytokines can provide new insights into cancer biology, identify new molecular targets for cancer treatment and uncover new biomarkers for diagnosis and prognosis of disease [13,14].
Figure 5. Table using the five-marker split-point score to classify ovarian cancer patients; a cut-off score of 3 was used. doi: 10.1371/journal.pone.0076795.g005
In this study, we have demonstrated the effectiveness of a semi-quantitative, sandwich-based antibody array detecting a panel of 174 markers: by screening the serum of 34 ovarian cancer patients and 53 age-matched healthy controls, we identified a panel of 5 serum protein markers, including CA125, that can detect ovarian cancer with high specificity (95%) and high sensitivity (100%) and an AUC of 0.98. These markers were validated by ELISA.
We observed that CA125 alone has an AUC of 0.87, whereas our newly identified 5-marker panel has an AUC of 0.98, indicating improved performance when detection of CA125 is combined with the other 4 putative protein biomarkers (TIMP-4, OPG, PDGF-R alpha, and MSP-alpha) for detection of ovarian cancer.
TIMP-4 is a member of the tissue inhibitor of metalloproteinases (TIMP) family, which regulates the matrix metalloproteinase (MMP) superfamily. MMPs are essential elements in extracellular matrix (ECM) degradation, including regulating the release of ECM-bound cytokines and growth factors, which leads to angiogenesis, cellular invasion and, eventually in many cancers, metastasis. MMP activity is tightly controlled and regulated by several TIMPs, some of which appear to play a critical role in tumorigenesis. Chegini's laboratory has reported elevated expression of TIMP-4 in ovarian cancer tissues by IHC analysis, indicating its potential role in the tumorigenesis of ovarian cancer [15]. OPG belongs to the TNF receptor superfamily and can be linked to the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) signaling pathways. OPG was first identified by its ability to regulate the homeostasis of bone remodeling. However, Piche's laboratory reported that OPG can serve as a survival factor by protecting ovarian cancer cells from TRAIL-induced apoptosis, indicating its potential role in the development and progression of ovarian cancer [16].
PDGF-R alpha is a receptor in the PDGF superfamily. Serving as angiogenic growth factors, PDGFs play important roles in cell growth, chemotaxis, angiogenesis and, in the context of cancer, reconstruction of the tumor stromal microenvironment. Jakobsen's laboratory reported that PDGF-R alpha showed higher expression in ovarian cancer tissues than in adjacent normal tissues [17]. It has also been reported that PDGF-R alpha is expressed more often in serous carcinomas than in endometrioid and mucinous tumors [18], which is consistent with the findings of our study, in which the majority of tumors tested (29 of 34) were serous.
MSP is a growth factor involved in activating macrophage stimulating receptor-1 (MSTR1). The alpha chain of MSP (MSP-alpha) is secreted by cleavage of pro-MSP. There are reports showing that the MSP pathway plays an important role in tumor metastasis [19].
In summary, using a 174-marker cytokine antibody array technology, we identified a panel of 5 serum protein markers that can detect ovarian cancer with both high specificity and high sensitivity, indicating its promising application in personalized medicine for ovarian cancer detection. Additionally, considering that a relatively small sample size (N<100) in this investigation achieved a very high sensitivity (100%) and high specificity (95%), we are hopeful that, after validation, such a multi-biomarker panel may someday be useful for screening for a deadly cancer that is rarely diagnosed in its early stages.
Protein and antibody arrays have emerged as a promising technology for studying protein expression and protein function in a high-throughput manner. These arrays present a new opportunity to profile protein expression levels in cancer patients' samples and to identify useful biosignatures for drug development and patient care. Our 5-marker panel could effectively distinguish ovarian cancers from healthy controls. These 5 individual markers are not unique to ovarian cancer, as shown by their expression in other cancer types, including breast cancers [18][19][20][21][22], lung cancers [23], colorectal cancers [24], prostate cancers [25,26], hepatocellular carcinomas [27], pancreatic cancers [28], etc. Therefore, it will be very important and interesting to investigate whether this combination of 5 markers can detect other cancer types as well. Two of the most important components of a biomarker discovery program are high-quality patient samples and high-content, high-throughput screening technologies. Therefore, the combination of our proven antibody-based detection technology and platforms with well-characterized pre-diagnostic samples from the PLCO trial at the National Cancer Institute (NCI) will provide a unique opportunity for biomarker discovery and validation [29,30]. Such investigations will not only serve to validate the specific biomarker panel identified in this study; they will also help to validate the use of antibody arrays as a high-throughput approach to identify cancer biomarkers for disease screening and detection.
Materials and Methods
Ethical Statement
This protocol was approved by the Sterling Institutional Review Board (IRB ID: 3303). The review board is Sterling Independent Service, Inc., located at 6300 Powers Ferry Road, Suite 600-351, Atlanta, GA 30339. Written consent was obtained when collecting samples from both patients and healthy controls.
Sample collection
The serum samples from 34 patients diagnosed with early-stage (I and II) or late-stage (III and IV) ovarian cancer and from 53 age-matched healthy controls included in the study were collected at the affiliated hospital of Sun Yat-sen University. Briefly, about 2 ml of venous blood was drawn from each participant. Serum was collected and stored at -80°C until needed. Information about ovarian cancer diagnosis, staging, histology, grade and age was available to us, but unique patient identifiers, such as name, address, and date of birth, were not provided.
Antibody array technology
Semi-quantitative sandwich-based antibody arrays (RayBio® Human Cytokine Array G-Series 2000) were developed as 3 distinct arrays (Human Cytokine Arrays G6, G7, and G8), each containing a unique set of 54 to 60 antigen-specific antibodies to detect a total of 174 serum markers on a glass slide matrix. A pair of antibodies is required to detect each analyte. Glass slides were printed with 4 or 8 identical sub-arrays consisting of spots of each antigen-specific capture antibody for that array. The corresponding detection antibodies were biotin-labeled and combined into a single cocktail reagent for later use. Printed slides were placed in chamber assemblies to allow incubation of each sub-array with a different sample. After blocking each sub-array with a blocking buffer, sub-arrays were incubated with serum samples. Following extensive washing to remove non-specific binding, the cocktail of biotinylated detection antibodies was added to the arrays. After extensive washing, the array slides were incubated with a streptavidin-conjugated fluor (HiLyte Fluor™ 532, Anaspec, Fremont, CA). The fluorescent signals were then visualized using a laser-based scanner system (GenePix 4200A, Molecular Dynamics, Sunnyvale, CA) using the green channel. To increase accuracy, two replicate spots were printed per antibody, and the averages of the median signal intensities of both spots, after local background subtraction, were used for all calculations. Through these improvements, we obtain a coefficient of variation (CV) of about 10% on our glass slide platform.
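As a small illustration of the duplicate-spot handling described above, the sketch below averages the two background-corrected replicate spots and reports the percent coefficient of variation; the variable names are hypothetical.

```python
import numpy as np

def spot_summary(replicate_medians, local_backgrounds):
    """Average duplicate spots after local background subtraction and report the %CV.

    replicate_medians -- median intensities of the two replicate spots for one antibody
    local_backgrounds -- local background estimates for the same two spots
    """
    corrected = np.asarray(replicate_medians, dtype=float) - np.asarray(local_backgrounds, dtype=float)
    mean_signal = corrected.mean()
    cv_percent = corrected.std(ddof=1) / mean_signal * 100
    return mean_signal, cv_percent
```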
ELISA analysis
ELISA was performed according to the RayBio® ELISA manual (RayBiotech, Inc., Norcross, GA, USA). In brief, 96-well ELISA plates pre-coated with capture antibodies were first blocked using a blocking buffer. Duplicate aliquots (100 microliters per well) of diluted sera and multiple dilutions (i.e., concentrations) of standard protein were loaded onto the ELISA plate. The plates were then incubated for 2 h at room temperature (RT). Unbound material was washed out, and biotinylated anti-cytokine detection antibody was added to each well. The plates were incubated for 1 h at RT. After washing, 100 microliters of streptavidin-conjugated HRP reagent was added to the wells, and the plates were incubated for 30 minutes at RT. After extensive washing, color development was performed by incubation with HRP substrate. After adding stop solution, the optical density (O.D.) at 450 nm was determined for each well using a microplate reader, and the concentrations of the samples were determined by comparison to the standard concentration curves.
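The back-calculation of sample concentrations from the standard curve can be sketched as follows. The ELISA manual referenced above does not specify the curve model, so the four-parameter logistic (4PL) fit used here is a common but assumed choice, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic: optical density as a function of concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def concentrations_from_od(std_conc, std_od, sample_od):
    """Fit a 4PL standard curve to the standards, then invert it for the samples.

    Assumes non-zero standard concentrations; units follow the kit's standard series.
    """
    std_conc, std_od = np.asarray(std_conc, float), np.asarray(std_od, float)
    p0 = [std_od.min(), 1.0, np.median(std_conc), std_od.max()]
    popt, _ = curve_fit(four_pl, std_conc, std_od, p0=p0, maxfev=10000)
    a, b, c, d = popt
    # Keep sample ODs strictly inside the fitted asymptotes before inverting the curve
    eps = 1e-6
    y = np.clip(np.asarray(sample_od, float), min(a, d) + eps, max(a, d) - eps)
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
```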
Data analysis
An adjusted t-test was used to test the significance of differences in protein expression levels between ovarian cancer and healthy control samples. P values less than 0.05 were considered to be statistically significant.
To determine the signal threshold, signals from the arrays were measured in the absence of samples (using blocking buffer as a blank) and repeated 10 times. The signals generated using blanks were averaged and the standard deviation (SD) was calculated. Signals with values lower than the average blank signal +2xSD were considered as background.
The data were also analyzed using an artificial neural network. This powerful tool allows us to find common protein expression profiles that predict cancer. In the phase one study, 80% of samples were randomly assigned to the training set and the remaining 20% of samples were used as the test set. The advantage of this approach is that prediction will become more accurate over time, as more data become available.
The data were also analyzed by split-point score analysis. For each marker, the split point divides the sample space into two intervals, one for ovarian cancer and one for normal controls. The best split point for each marker was chosen to minimize the number of misclassified samples. For each marker, a score of 0 was assigned to a sample if it fell in the normal control interval for that marker; a score of 1 was assigned if it fell in the ovarian cancer interval. Each individual was then assigned a total score equal to the sum of these assigned scores over the N different markers. Therefore, the range of the total score was between 0 and N. A threshold (T) was chosen to optimally separate ovarian cancer from healthy controls, i.e., an individual with a total score <T was predicted to have normal status, whereas an individual with a total score >T was diagnosed as ovarian cancer.
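A compact sketch of the split-point scoring and the derived performance measures (covering the specificity, sensitivity, PPV, and NPV mentioned below) might look as follows. The choice of split points, the direction of each marker's cancer interval, and the handling of scores exactly equal to the threshold are not fully specified in the text, so they appear here as explicit inputs and assumptions.

```python
import numpy as np

def split_point_scores(marker_matrix, split_points, cancer_is_high):
    """Assign 1 point per marker whose value falls in that marker's ovarian-cancer interval.

    marker_matrix  -- samples x markers array of signal intensities
    split_points   -- best split point per marker (chosen to minimize misclassified samples)
    cancer_is_high -- per-marker flag, True if the cancer interval lies above the split point
    """
    above = marker_matrix >= np.asarray(split_points)
    in_cancer_interval = np.where(np.asarray(cancer_is_high), above, ~above)
    return in_cancer_interval.sum(axis=1)

def panel_performance(total_scores, labels, threshold=3):
    """Call ovarian cancer for total scores above the threshold and summarize the panel."""
    pred = (np.asarray(total_scores) > threshold).astype(int)
    labels = np.asarray(labels)
    tp = int(np.sum((pred == 1) & (labels == 1)))
    tn = int(np.sum((pred == 0) & (labels == 0)))
    fp = int(np.sum((pred == 1) & (labels == 0)))
    fn = int(np.sum((pred == 0) & (labels == 1)))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}
```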
From the above data, we calculated the specificity, sensitivity, positive predictive value (PPV) and negative predictive value (NPV). The ROC was also determined. | 2016-05-17T21:44:36.404Z | 2013-10-08T00:00:00.000 | {
"year": 2013,
"sha1": "83809a2df4faf95d8bcc51eab1e774aaa9b30738",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0076795&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83809a2df4faf95d8bcc51eab1e774aaa9b30738",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
261449454 | pes2o/s2orc | v3-fos-license | Evaluation of a Study Protocol of the Application of Humor Interventions in Palliative Care Through a First Pilot Study
Background: Humor and laughter might raise the pain threshold and enhance coping and relationship building. However, randomized controlled studies in palliative care have struggled with high percentages of attrition and missing values. Objectives: We aimed to evaluate, through a pilot study, a study protocol for the evaluation of a multistage humor intervention with psychological and physiological outcome parameters that may be applied successfully in a palliative care environment. Design: This pilot study utilized a pre–post design. The inclusion of a control group for the final study setting recruiting 120 patients is planned. Setting/Subjects: The study was a monocenter study in a clinic for palliative care in Germany. All patients were eligible for recruitment. Seven patients were recruited for the pilot study. Measurements: Interventions were developed using a humor training for psychiatric patients. Quantitative sensory testing for pain thresholds and questionnaires on humor as a character trait, pain intensity, life satisfaction, and symptom burden were planned to be evaluated before and after three humor interventions. Results: The feasibility of the original study design was re-evaluated after pilot testing. Only two out of the seven patients were able to complete two interventions, requiring modification. Fewer questionnaires, less complex physiological testing, and a reduction from three to two interventions were then planned. Conclusion: The initially planned research methodology must be adjusted for patients with high symptom burden. In the experimental group of the final study setting, the effects of one to two interventions will be evaluated by measuring oxytocin levels in saliva, by using standardized questionnaires to determine cheerfulness, life satisfaction and symptom burden, and by assessing as-needed medication. Trial registration: DRKS00028978, German Registry of Clinical Studies.
Background
Defining humor presents a challenge due to its multifaceted nature, with a wide range of perspectives and applications. Humor can be self-generated, appreciated, employed as a coping mechanism, convey aggressive content, be practiced as a cheerful and composed attitude toward life, and can be both a component of one's character and a situation-specific state. A definition that comes closest to what we aimed to foster in this study is the one by Ruch:1 "Humor is associated with a personality-based cognitive-emotional style of processing situations and life in general, characterized by the ability to find positive aspects even in negative situations (dangers, self-threats, etc.), remaining calm and composed, and even being able to smile or react with amusement, at least to some extent." Humor and health might be related.2 Scientific proof for this link is growing, and there are some indications of a beneficial effect of laughter and humor interventions for adult patients.3 A meta-analysis of randomized controlled trials of laughter and humor interventions described a significant decrease in depression and anxiety and an improvement in sleep quality in adults.4 Pinna et al.5 and Linge-Dahl et al.6 have summarized the limited studies exploring humor, health, and palliative care. They suggest that palliative care professionals frequently use humor. Results from this review suggest that patients' coping,[7][8][9] relationship building,10 and psychotropic dose burden11,12 may benefit. Humor also helps patients by enabling them to gain a different perspective on their own dying process.13,14 However, the systematic reviews noted that a standardized evaluation, including a control group, has only been implemented in one of the studies.15 Results are also limited, as humor interventions during the last days of patients' lives are ethically problematic.
[17][18][19] Humorous videos,20,21 clown visits,[22][23][24] laughter yoga,25,26 and other personalized interventions27 have all shown benefit to some degree. Group and individual interventions28,29 and the use of different kinds of humor[30][31][32][33] have been tested. Staff in palliative care institutions act as strong gatekeepers toward new or potentially burdensome experiences for their patients.34 Palliative care professionals' use of humor and laughter within teams has also been documented35 as strongly developed.
The reproducibility of humor interventions is challenging due to the subjective and context-dependent nature of humor. The perception and response to humor can vary significantly among individuals and cultural backgrounds, making it difficult to establish standardized protocols and consistent outcomes across studies. This issue has been acknowledged in the field,4,36,37 emphasizing the need for rigorous methodology and replication studies to enhance the reliability and generalizability of results in humor intervention research.
[39][40] High levels of attrition have been reported in various patient groups receiving palliative care services, such as those with advanced cancer,41 heart failure,42 and chronic obstructive pulmonary disease (COPD).43 Patients did not want to take part in studies with "too much record keeping" and reported being "too tired or too sick" (p. 77; Ref. 36). Chen et al.44 asked researchers from the field about their experiences and found that limited funding and work capacities, the challenging nature of the field, and discomfort in relation to the topic also create barriers. Preston et al.45 suggested that attrition in palliative care clinical trials should be expected. Missing values and attrition in the results should be carefully analyzed.
[47][48][49] Positive psychology research focuses instead on outcomes such as life satisfaction and personality traits, for example, cheerfulness, playfulness, or preferred humor styles.30,50 Oxytocin might be used as an indicator of well-being.51 Radioimmunoassay (RIA) of oxytocin has previously been described by de Jong et al.52 as a potential analysis method.
Aim of the study
This pilot study aimed to explore a methodology to evaluate the psychological and physical effects of humor interventions on patients treated in a palliative care unit. We selected evaluation instruments that minimized patient burden and attrition. Enhanced cheerfulness is potentially influenced by humor interventions and was, therefore, included.53
Trial design and study setting
Pilot testing was performed to prepare a monocenter randomized controlled clinical trial. Assessment involved testing the effect of three humor interventions on patients. The evaluation encompassed life satisfaction, character strengths, cheerfulness, burden of symptoms, stress, pain sensation, orders of as-needed medication, and pain threshold. The control group would receive standard palliative care.
Recruitment and randomization
Participants were recruited from the palliative care ward of the University Hospital Bonn. Participants had to be conscious, orientated, and adequately alert to respond to the questionnaires, and had to speak German fluently. Fulfillment of the inclusion criteria for each patient was discussed with the ward staff. Potential participants were randomized to the intervention group or the control group using a simple randomization list constructed with the random number generation function in Microsoft Excel. The study was not blinded. To test for a medium effect (Cohen's d > 0.5) with a power of 0.7, 240 patients would be required: 120 each in the intervention and control groups. All participants had to provide written informed consent. Patients who did not meet the inclusion criteria or who had not completed the assessments could still receive a humor intervention as compassionate use, in keeping with the mission of Humor Hilft Heilen (Humor Helps to Cure) to provide humor interventions to anyone who wants to receive them.
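The simple randomization list can be illustrated with the following sketch. The protocol only states that the list was constructed with Excel's random number generation function; the ranking-based 1:1 allocation below is therefore an assumed, equivalent construction, and the function name is hypothetical.

```python
import random

def make_randomization_list(n_patients, seed=None):
    """Build a simple 1:1 randomization list by ranking random numbers,
    mirroring the Excel random-number approach described in the protocol."""
    rng = random.Random(seed)
    draws = sorted((rng.random(), idx) for idx in range(n_patients))
    allocation = {}
    for rank, (_, idx) in enumerate(draws):
        allocation[idx] = "intervention" if rank < n_patients // 2 else "control"
    return allocation

# Example: a list for the 240 patients planned for the final trial (120 per arm)
groups = make_randomization_list(240, seed=1)
```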
Control group data collection was scheduled on alternate days, to avoid inadvertent contact with the humor interventions in progress.
Intervention visits and evaluation instruments
Data collection included measurements of character strengths, cheerfulness, symptom burden and well-being, life satisfaction, pain sensation, and pain threshold.
The humor intervention was based on the Humor Habits program by McGhee,54 which was adapted by Falkenberg et al.55 for patients being treated in an inpatient psychiatric setting. It was planned to take place in three separate individualized sessions. Two humor coaches trained by the foundation Humor Hilft Heilen (Humor Helps to Cure) implemented the intervention. If possible, the intervention was repeated on days three and five (or one week after the first intervention, according to the availability of the humor coaches; see Fig. 1), following a multistage model.55 Each intervention was scheduled to take approximately 30 minutes. The first intervention included the following elements: remembrance of a funny episode during childhood to find the patient's preferred humor style, and then providing humor according to that style for the participant.
The second and third interventions focused on finding humorous aspects in the current situation, producing humor, and applying humor in everyday life. Given that the processing speed of the elements could vary significantly from person to person, the allocation of elements to the first, second, and third intervention was tailored to the individual pace of the patients. The coaches used various props (such as musical instruments, pencils, and a folding rule), but mostly they communicated and used imagination to create humorous interactions. Both coaches were educated as hospital clowns and play at least one instrument; one studied at a circus school in Brussels, Belgium and is a trained actress, the other studied at the clown school Hannover and is a certified social worker.
After entering, the humor coaches always explored the mood of the patient first and then tried to find a matching tone of communication. They asked every patient a couple of questions regarding their biography and a humorous anecdote from the patient's childhood to get to know the patient's preferred humor style. Subsequently, they tried to find humorous aspects in the current situation, using everything available in the room or finding something funny in the information the patient had given. If the patients were still on the palliative care ward, the coaches prepared a second and potentially a third visit based on the first visit. If it had not happened already, they encouraged the patients, with tailored motivation, to engage and produce humor themselves.
Unstructured field notes with time stamps were taken to document the interaction with and the reactions of the patients. Qualitative data analysis of the field notes using MAXQDA software was planned. Immediately after the intervention, the questionnaire assessment was repeated.
Cheerfulness was assessed with the trait and state forms of the State-Trait Cheerfulness Inventory (STCI).[57][58][59] The STCI-T (30 items) and STCI-S (18 items) consist of cheerfulness, seriousness, and bad mood scales, which are built from sum scores of 10 (STCI-T) and 6 (STCI-S) items, respectively. The investigator aided questionnaire completion by reading questions to patients or supervising the patients' reading and responses, depending on patient performance level. Symptom burden and well-being were assessed using the Minimal Documentation System for patients in palliative care (MIDOS).60 MIDOS uses categorical scales, with 10 items on physical and psychological symptoms and one item on general well-being. Life satisfaction was measured using the Satisfaction with Life Scale (SWLS),61 which comprises five items whose sum score indicates current life satisfaction.
Assessments
Humor as a character trait was assessed using the Values in Action Inventory of Strengths (VIA-IS)62 with 240 items, and perception of pain using the Schmerz-Evaluations-Skala (pain evaluation scale, SES)63 consisting of 24 items; pain medication was also documented. The pain threshold was measured using an extract of the quantitative sensory testing (QST) system.64 QST is a standardized method for testing perception and pain thresholds using different mechanical stimuli, by which the functioning of the somatosensory system can be characterized.[65][66][67] For this study, three of the seven standardized tests were included, to reduce the burden on the participants: the mechanical detection threshold (von Frey filaments and a 64-Hz tuning fork), mechanical pain sensitivity (pinprick stimuli, brush, Q-tip, cotton wool), and the pressure pain threshold. It was estimated that the three QST tests would take a maximum of 30 minutes. All tests and questionnaires added up to 328 items and a total duration of preintervention testing of more than one hour. The post-interventional assessment would take approximately 30 minutes.
Information on as-needed medication administered before and after the interventions was extracted from the patients' medical records. This information aimed to determine whether observed differences in symptom intensity were related to medications.
The same test batteries were repeated before (STCI-S, SWLS, MIDOS, SES, and QST) and after the second and third intervention (Fig. 1). A semistructured interview was planned two days after the third intervention to explore the patient's experience and perceived intervention burden and benefit. The interview guideline was divided into three main categories with seven open-answer questions. Answers were documented on paper by the researcher who conducted the interview and the interventions. We planned to use MAXQDA for qualitative data analysis.
Ethics
This study was approved by the ethics committee of the University Hospital Bonn (No. 003/16). Every participant was asked to give written informed consent before being included in the study. The informed consent document and ethics committee approval letters have been obtained.
Results of Pilot Study
Seven patients were recruited for the pilot study, but only three were able to complete the pain threshold measurement. Two agreed to complete the related questionnaires and take part in two interventions; one completed all the test instruments before and after the two interventions. This patient also agreed to the assessment of the pain threshold (QST) and questionnaires after the second intervention. The other patients did not consent to repeat QST or did not complete questionnaires. One patient agreed to the day seven interview.
All patients commented that the questionnaires were too long; the SES in particular was perceived as having a number of redundant questions and as being difficult to understand after about half of the items. The participating researcher observed reduced levels of concentration and alertness toward the end of data collection and a negative mood swing after the completion of the SES. The application of QST was described as very uncomfortable by the three patients who agreed to take part in the procedure. Patients also complained that they had to fill out the same questionnaires before and after the intervention in all cases.
Limitations
The interventions were standardized to a limited extent and otherwise individualized for each patient, resulting in restricted methodological transferability and a low generalizability of the findings.
As previously outlined in the background section, humor encompasses a wide range of manifestations, making its definition and measurement challenging. This aspect further impacts the transferability of results.
The first challenge with the initiation of the study was to overcome the staff's gatekeeper function: members of the clinical team voiced concerns that a large portion of eligible patients had cognitive impairment and advanced disease that should preclude them from participation. We instituted educational dialogue sessions of approximately 15 minutes during staff meetings to educate the clinical teams about the pilot study. Close cooperation with the senior physician and the lead nurse was maintained in the adaptation process of the study protocol after the pilot testing.
The control group would be more meaningful if they received an intervention such as reading to them or showing a video, which uses the same amount of time and attention as the humor intervention. No patients from the control group were included in the pilot test. However, for the final study, the intention is to provide the best palliative care for all patients. This could potentially introduce bias due to additional attention given to the intervention group.
The functional status of different patients may vary significantly, due to differences in underlying diseases and stages of illness. Use of a staging system could help to standardize the impact of the disease.
As Blum et al.36 previously reported, there is a bias toward exclusion of patients with high symptom burden. This limitation affects the generalizability of the findings to all palliative care patients.
In addition, the transferability of results is constrained by the fact that the study was conducted in a single-center setting. Since the study was not blinded, there is also a risk of bias due to potential variations in the researchers' interactions with the intervention and control groups.
Implementing interviews within the study framework was challenging due to temporal constraints. Patients who were fit enough to participate in data collection were often discharged home or transferred to a hospice within the seven-day study period.
Finally, patient expectations surrounding a humor intervention may have been a source of bias in our pilot study. One of the patients, for example, voiced a concern that her physical and mental state might inhibit her sense of humor. Although this ultimately was not the case for her, such anticipation itself could affect outcomes. Therefore, interventions in future studies will begin with a careful assessment of the patient's prior expectations and current situation to minimize the introduction of potential bias.
Discussion of changes after pilot testing
The literature on humor interventions in palliative care has primarily been focused on workshops and interventions for staff.34,35 However, humor interventions may have a meaningfully supportive role for patients receiving palliative care services. This pilot study supported literature findings36 suggesting that extensive research data collection is excessively burdensome for those facing serious illness. Higher symptom burdens and increased time obligations restrict these patients' capacity to participate in extended research-related activities. We considered the cognitive and physical limitations often experienced by this population when creating the pilot study protocol.
However, its results demonstrated more challenges than anticipated. Our pilot study supported the available literature39,40 suggesting that our single center would be unlikely to recruit enough patients for sufficient statistical power. However, research on complex interventions68 such as humor therapy may be difficult to evaluate in multicenter trials, as these interventions are provided by highly skilled specialists who would need to be trained in advance to maximize comparability between therapists and centers. It was determined that the semistructured interview planned for two days after the third intervention (day seven) was excessively burdensome for this patient population.
We plan to involve our specialized palliative homecare team (SAPV) in the study, as home-treated patients in our services often have more resources and are in better condition. This may facilitate participation in the interventions and yield more complete datasets. The palliative care inpatient consultation team in the hospital is working on transferring patients with palliative care needs earlier, so that we can reduce the proportion of patients in the terminal phase of dying who are being treated on the palliative care ward. This team is working toward early integration of palliative care, including earlier transfer to the palliative care unit for patients with complex problems and needs. This should lead to more patients receiving crisis intervention with subsequent transfer to other care settings and fewer imminently dying patients in the palliative care unit. This, in turn, should lead to a higher percentage of patients eligible for humor interventions.
Because of the high attrition rate in the pilot testing, some instruments were removed from the study design. This study found hints that completing the SES63 increased patients' negative mood. Therefore, when we had to decide on shortening data collection to reduce attrition, the complete scale was removed from the study design. The QST64 caused a significant physical burden, and the testing elicited pain sensations in patients who already suffered from disease-related pain to some extent. Therefore, we decided to exclude QST from the final study, as we deemed the additional burden ethically inappropriate.
The VIA-IS, with its 240 items, was too long for patients in our palliative care unit to complete. Even though having a comprehensive profile of the character strengths of all participants would have provided valuable information, implementation was not feasible due to resource and ethical considerations. The interview could also rarely be conducted, due to discharge, illness progression, fatigue, or other reasons. These modifications reduced the preintervention assessment from approximately 60 to 30 minutes and the post-intervention assessment from 40 to 10 minutes.
The questionnaires that remained in the study protocol after pilot testing were the STCI-T, STCI-S, SWLS, and MIDOS (Fig. 2), because the number of items of these instruments seemed manageable for the patients. We included the STCI-T in the study because it has significantly fewer items compared with the VIA-IS and allows for checking statistical equality between the intervention and control groups. The STCI-S, as the main variable for potential mood changes, had to be retained.
We included life satisfaction, measured by the SWLS, because it has been widely used in previous studies, consists of only five items, and enables us to compare our results with others' research. We kept the MIDOS for evaluating the burden of symptoms, since patients found it less burdensome than the SES during pilot testing. Including this medical evaluation instrument in the test battery was valuable for our research concept. Finally, assessment of the effect of one to two humor interventions on 120 patients, evaluating life satisfaction, cheerfulness, burden of symptoms, stress, orders of as-needed medication, and oxytocin levels in saliva, was planned.
Potential alternative physiological parameter
Oxytocin has been suggested as a potential indicator of well-being, as it is involved in social bonding, positive emotions, and stress regulation.69,70 Research has shown that higher levels of oxytocin are associated with enhanced social interactions and improved mental health outcomes.71,72 However, it is important to note that the relationship between oxytocin and well-being is complex, and further studies are needed to fully confirm its role as a valid indicator of well-being.
The laboratory regulations prohibit saliva collection for oxytocin measurements if patients have multiresistant infections.
After completing the questionnaire, a study nurse would collect saliva by having the patient chew on a cotton wool roll for at least 60 seconds. The sample would immediately be placed on dry ice and then stored in a freezer at −80°C. Samples would be shipped to the laboratory by courier service every six months. The salivary oxytocin level could be analyzed before and immediately after the humor interventions. For each sample, 300 µL of saliva would be evaporated (Concentrator, Eppendorf, Germany), and 50 µL of assay buffer would be added, followed by 50 µL of anti-oxytocin rabbit antibodies.
The detection limit of the RIA is 0.1-0.5 pg/sample; the intra- and interassay variabilities were <10%. Plasma samples (0.5 mL) were kept at −20°C until extraction using heat-activated LiChroprep® Si60 (Merck) at 690°C for three hours. Twenty milligrams of LiChroprep Si60 in 1 mL distilled water are added to the sample, mixed for 30 minutes, washed twice with distilled water and 0.01 mol/L HCl, and eluted with 60% acetone. The evaporated extracts and evaporated saliva samples (0.3 mL) are analyzed for oxytocin together in a highly sensitive and specific RIA.
Conclusion
Our pilot study revealed some unanticipated barriers to participation and potential biases that could be minimized further. We were able to utilize these results to develop a protocol for a more rigorous study that will enhance participation and optimize outcome reliability. Patients receiving treatment in the palliative care unit have a limited remaining life span; thus, slimming down the humor intervention by reducing it from three to two sessions and condensing the content represents one of the most crucial improvements resulting from the pilot testing.
All questionnaires are listed in the Supplementary Material.
FIG. 1. SPIRIT flowchart of the pilot test sequence, intervention group. *Each intervention took 20-30 minutes. **Same data collection scheme as intervention 2 on day five. | 2023-09-02T15:21:56.143Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "a0185c6b220a155d1a275dbac301a0d8ae11182f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1089/pmr.2023.0014",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62b712b14d50dd690cddf676681d683d07a8fd44",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
254906838 | pes2o/s2orc | v3-fos-license | Super Inequality: A General Theory of Mass Poverty
This study has set its goal to connect the dots and to build a unified general theory of inequality, which is capable of explaining the aggregate forces of historical, local, communal, national, and international systemic oppression and mass poverty at the same time. The super-super rich and super powerful are to blame and are to be held accountable for inequality, as they are the ones who have not only built the systems and forces that created inequality on an historical, intergenerational, and global scale but are also the ones maintaining those very systems and forces that create and worsen inequality and hence mass poverty on an appalling scale. The newly created theory of super inequality is aimed at gathering and strengthening the forces of science—and particularly, but not only, social policy—to be able, down the road, to tackle perhaps the greatest social problem of our time: inequality.
Introduction
Sometimes, one wonders, what is supposed to be, and how is the world supposed to be? Is there a formula, a law, a principle that can tell it all? An equilibrium, perhaps? Mathematicians set up and use formulas wherever and whenever they can. Physicists and other natural scientists are obsessed and awed by laws of this or that, for this or that. Philosophers may work and juggle with principles, ideals, and ideas. Economists are strongly favoring and trying to explain the economy and economic and economically meaningful actions with equilibria, or disequilibria, or change from the one to the other, and then yet to another equilibrium. What they all have in common are theories, as they guide every scientific action.
With the ravaging of the COVID-19 pandemic, the world of social policy has experienced another shockwave that yet again leaves no leaf and no stone unturned. Poverty and inequality are rampant, governments are seemingly helpless - so it seems, and so they say - and more often than not, they do not want to talk about (and/or act upon) anything that has to do with the harsh and appalling realities people, families, children, the helpless, and the poor are exposed to.
People all over the world are desperate and desolate, we know. Hence, scientists are venturing on and starting new research projects and research understandings. This we can see now (cf. e.g., Kjaerum, Davis, & Lyons, 2021; Lupton & Willis, 2021; Wagenaar & Prainsack, 2021).
With the outside world (the system environment) getting more complex, the inside world (the welfare state systems themselves) also needs to try to find answers urgently; and, hopefully, appropriate solutions are identified that are up to the task, to the necessitated degree and extent, and take effect in the shortest possible timeframe. Not long ago, this seemed impossible, really, and to some extent, in most places, it still may be. While this urge and push does not guarantee or make any social policy (including economic policy, health policy, and any other public policy) solutions speedier and more potent per se, it does increase the likelihood and perhaps the strength of a wind of change in how we press forward social policy science. The methodologies may change and/or increase, and the theories may change and/or increase.
As the title of this paper indicates, this study is about inequality, super inequality to be more precise. What is super inequality? Super inequality stands for super-high levels of inequality in society, just like super-aging stands for super-high levels of aging in society. The inequality we are talking about here is systemic inequality. At the same time, this systemic inequality is also transgenerational, cutting across centuries and millennia - that is, certain geographical/societal manifestations of inequality and, hence, mass poverty are sustained throughout centuries and/or millennia.
On Super Inequality
Super inequality is a phenomenon as much as it is a process. We can grasp, see, understand, and measure different dimensions of systemic mass inequality. Systemic mass inequality is geographically, institutionally, and culturally imprinted onto and impregnated into cultures, societies, communities, and social and ethnic minorities alike. Systemic mass inequality feeds on itself (cf. Remington, Forthcoming a), as does the process that makes the rich richer and the poor poorer. For instance, trickle-up economics is real, and truly, widely, and constantly felt not only by billions but by even more in times of this COVID-19 pandemic.
The reality of trickle-up economics is opposed to the made-up idea (notion or fiction) of trickle-down economics (Ahmed, 1999; Aspalter, 2006, 2008). It is perhaps best said with the following words: [T]he economic growth-oriented development paradigm and traditional welfare consider people as objects, consumers and recipients of services. These models do not call into question vulnerabilities, marginalization, and resources erosion of the poor which occurs through various forms of social, economic, gender injustices which are engendered by an extremely hierarchical society, by biased and distorted markets and by misgovernment by the state. … The economic growth-oriented paradigm of development is not neutral in its effect on the anti-poor biases of the community, society, market and state. … it has given very little to the poor and served disproportionately the interests of the rich and powerful … this model of development has not only accentuated erosion of the resources of the poor but also exacerbated natural resource depletion and environmental pollution. (Ahmed, 1999, pp. 42-43) To be sure, this is a global as well as a "historical" - that is, "our-storical" - phenomenon. The cultural, economic, institutional, and political onslaught of colonialism on local societies and populations over the past centuries has been prolonged and amplified over the course of our story, that is, our human story ("history"). To provide one clear example, out of countless many, Awoyemi, Oluwatayo, & Oluwakemi (2012, p. 4) have reported that: [t]here are … reasons why inequality could have been socially embedded in Nigeria, one being the vestiges of past defective colonial economic policy. This relates to the concentration of socioeconomic and other development programmes in the urban centers, where white administrators and their allies - the Nigerian elites - were found, while the rural areas, where the majority of the Nigerians lived, were neglected. Thus, the pivotal development advantages, which the urban centers and city dwellers enjoyed in terms of education, employment opportunities and health facilities (to mention a few), set the skewed structure of development. In other words, the dichotomy between the urban and rural areas with respect to poverty distribution, income inequality, unemployment and level of education in part becomes explainable.
Towards a Theory of Super Inequality
The culture of competition and the culture of the survival of the fittest have been nurtured and implanted into the brains of billions over centuries. The culture of mutual aid and cooperation (cf. Kropotkin, 1902) has been pushed to the margins of societies and their concomitant cultures. There has been a push by neoliberal forces all over the globe for decades now to limit the actions of governments in general, in order to leave an ever-increasing room and playfield for the capitalistic and monopolistic/oligopolistic forces of financial, economic, and political elites, as it is them who control the world (including forefront and mid-rank politicians, judges, news anchors, chat show hosts, and media editors, cf. e.g., Chomsky, 2002; Herman & Chomsky, 2002).
The whole range of perverted and wicked influences of the elites on societies, their economies, politics, laws, regulations and practices is farthest-reaching.
In all its forms, through all its interfaces and means, inequality makes possible, generates, and sustains mass poverty and its processes of povertization.
There are many multiple matrices of inequality, all of which join forces and even sustain and multiply one another. Life-time and intergenerational inequality is maintained through, for instance, the following:
1. Polarization of educational opportunities (cf. e.g., Bourdieu, 1973, 2002; Collins, 1979)
2. Polarization of health: that is, health and access to health care on the one hand (the elites and the upper middle classes) and disease, incapacitation, and disability on the other (the masses of the people, and this around the world) (cf. Our World in Data [OWD], 2022)
3. Wealth polarization (cf. Piketty, 2014, 2020)
4. Income polarization (cf. Kim & Aspalter, 2021)
5. Monopolization and privileged rewards associated with access to finance and power relations (cf. Collins, 1975; Murphy, 1988; Tilly, 1984, 1998, 2000, 2001, 2003; Weber, 2019)
6. Exclusion and punishments delivered through taxation, fees, and social security financing; inflation and accompanying devaluation of savings, pensions and welfare benefits; and cold progression in taxation; plus exclusion from financing and economic opportunities (cf. Aspalter, Forthcoming a)
In order to arrive at a general theory concerning all forms and pathways of inequality, one needs to create and develop a compound theory (a meta theory) that explains inequalities themselves, their constant maintenance, their wicked interlocking and intertwining effects as well as the policies that (a) created them, (b) failed them, and (c) address or, better, prevent and undo them. Hence, many theories need to be applied (put to work) together en groupe. For this, the new umbrella theory of super inequality has been set up in this paper to be able to work henceforth more - and/or more effectively - in terms of theoretical, philosophical, and empirical research, in order to yet once more start to turn the wheel of science, and hence the creation of knowledge.
Apart from a new extended application (a post-Luhmannian understanding) of Luhmann's social system theory, which sees the smallest pieces of communication as the atoms (or the matter) all things are made of, the theory of super inequality is built on and around Foucault's (1975) sharpest analytical construct and theory of the "civil war" matrix, which is a matrix of power relations and power outcomes of the rich and powerful versus the masses (that is, the poor and not so poor, the marginalized and not so marginalized).
In this constantly ongoing and century-old matrix of power relations, everyone fights for themselves, and every group fights for itself.
The powerful and privileged ones come to dominate all essential ways to control culture, politics, public administration, judicial system, economy, media, plus social and moral affairs in society, in the community, in the family, as well as in one's private life.
These privileges of the elites also include and are maintained by party member privileges, professional privileges, tax and accounting privileges, occupational and educational hierarchies as well as exclusionary professional standards, testing, and licensing mechanisms, hiring requirements, secret or open dress codes, codes of conduct and behavior, special language and accents used, special sports and cultural preferences, and so forth (cf. e.g., Collins, 1975, 1979; Murphy, 1988; Tilly, 1998).
The theory of super inequality captures not literally everything that is related to, but all (or the very most) of the essence that is in fact related to: systemic exploitation, systemic mass exclusion, systemic mass poverty, systemic oppression and alienation, systemic intimidation, systemic wearing down, plus the systemic creation and maintenance of poverty, misery and disadvantage of the masses of people.
At the Core of the Theory of Super Inequality
We have, so far, assembled the heart of a unified general theory of inequality by looking at the joint theories of Luhmann, Foucault, and the findings of many other theorists and scholars such as Bourdieu, Tilly, Chomsky, Mohan, Fraser, Brady, Remington, and others. In addition, we have, for the first time, theorized the causal connection between inequalities of all forms - that are locked in time and inside culture, within geographical localities and political systems, all across the globe - and systemic mass poverty. This theory and finding does not (and is not designed to) apply to or describe those individual cases of poverty that are caused, for the most part or entirely, by individual reasons of poverty alone.
Narrowing down on the core of the heart of this new meta theory (or general theory, or grand theory if one would like to call it so), we in the following text will integrate the findings of Foucault (1969, 1975, 2010) while applying a post-Luhmannian approach, that is, an all-inclusive communication perspective that extends Luhmann's theory of social communication to all forms of communication (cf. Aspalter, 2007, 2010b, 2020, 2021b; Luhmann, 1984, 1998).
Foucault is the foremost expert (that perhaps ever lived on earth) on systems of thought, from an historical, political, and cultural perspective. All of Foucault's theoretical insights hinge upon the central role of social discourse and knowledge. In Luhmann's terms, social discourse and knowledge are composed of nothing more and nothing less than bits and pieces of social communication. According to Luhmann (1984, 1998), bits and pieces of communication (words, ideas, and notions) are the atoms of society, as they are the atoms of social systems, all of which make up society as a whole. In addition to Luhmann's concept of social communication, which excluded private and internal forms of communication, feelings, and thoughts, Aspalter also incorporates them into his (more inclusive) version of a post-Luhmannian theory of social systems.
Therefore, for Aspalter, social communication is - when one is in private, or at any private moment, or with any private thought and feelings that one has - constantly spun further and reproduced in perpetual motion by private thoughts and feelings, and thus constantly re-edited and rewritten. These include memories, fears, and aspirations (cf. Aspalter, 2007, 2010b, 2020, 2021b). These are then, subsequently, released (reentered) into the arena of social communication, by ensuing acts and forms of social communication, over the course of a lifetime seen from the perspective of the individual, and over the course of many centuries and millennia seen from the perspective of distinct (and mutually interacting) cultures and expressions/forms of civilizations.
The Concept of X-Inequality
Now, in the following passage, we will be building on Aspalter (2021c), by drawing on Leibenstein (1966, 1976, 1978a, b), who revolutionized economic and behavioral economic thinking as he looked at people themselves, rather than collections of people and their actions.
Here, within our theory of super inequality, we also introduce a new form of thinking when looking at inequalities, that is, all sorts of inequalities on both societal and individual levels. A new concept can be introduced, which can best be described as the concept of X-inequality - which is, obviously (but yet in a new form), modeled after Leibenstein's most influential and most useful concept of X-efficiency, which has been fully proven empirically on multiple fronts.
Therefore, we are here replacing Leibenstein's concept of X-efficiency with a concept of "X-inequality," which we - for better theoretical understanding and possible practical applications - are dividing into (for now) two layers of inequality groupings (partitioning), to be able to grasp individual factors as well, on top of societal factors.
The first one, on the societal macro and meso levels, may be coined "societal X-inequality," which is composed of a variety of different "societal-ecological inequalities" (Figure 1). On the other hand, on the societal micro level, there is "individual X-inequality," which is, in turn, composed of "ecologically conditioned individual inequalities" (Figure 2). Thus, the overall concept of Leibenstein's X-efficiency has here been divided into the following two layers (components), now being referred to as (a) "societal X-inequality" and (b) "individual X-inequality."
Notes: (1) According to Leibenstein's finding, theoretically possible maximal equality (or efficiency, as in his case) is never possible (as is the potential production possibility curve in economics, cf. e.g., Scherer and Ross, 1990; in other words, in terms of general economics, for example, potential/theoretically possible gross domestic product (GDP) and actually possible GDP are always far apart from each other). There is a gap between theoretically possible maximal equality (or efficiency), as shown on top of Figure 1, and the level of equality (or efficiency) that, in the end, is actually achieved, as shown at the bottom of Figure 2. Hence, both figures need to be read and looked at together. (2) The overall gap (composed of societal and individual X-inequality) is different in different societies with different cultures, policies, and resources in place. This gap is different for different communities, localities, and different groups of people (gender, ethnicities, etc.), at different phases of their lives, and in different life situations (e.g., single parents). (3) Leibenstein's efficiency gap also applies to different industries, also in different countries, which has been empirically tested and proven multiple times by many researchers (cf. Aspalter, 2021c). (4) The overall gap in our concept of X-inequality here, between levels of equality that are theoretically possible and that are actually achieved, has been split up into two dimensions: first, the societal dimension, which looks at the negative, aggregate impact of societal-ecological inequalities, and second, the individual dimension, which incorporates the negative, aggregate impact of ecologically conditioned individual inequalities (cf. Figures 1 and 2).
[Figure 1 contrasts the theoretically possible maximal equality at "societal X-inequality" (the highest level of equality, i.e., the lowest level of inequality) with the level of equality at "societal X-inequality" that is actually achieved at the moment, in this or that society or locality, and/or for this or that group of people (a comparatively lower level of equality, i.e., a higher level of inequality). Correction notice (12/20/2022): at the time of publication, the diagram in Figure 1 contained formatting errors mistakenly introduced during typesetting; these errors have now been fixed.]
Any grand theory, or general theory, needs to be broken down into more workable levels and elements, for researchers to be able to set up and test their theses, or set up and compare their own (specific, or specialized field-related) theories. This is the function of the concept and theory of X-inequality, apart from demonstrative and integrative theoretical purposes. That is to say, this concept may provide the nodes and interfaces to interact and connect with the numerous other theories and models that are out there, and that are yet to be formed and developed over time.
Notes: (1) For the concept of "ecological rationality," or ecological (i.e., "of personal environment") influences on individual psychology and behavior, see the theory of Smith (2002, 2003, 2008); cf. also Braun (2019).
(2002Smith ( , 2003Smith ( , 2008)), cf. also Braun (2019).This concept of ecological rationality has been adapted and partially built into the concept of X-inequality on the individual level (which is depicted here in Figure 2) (cf.Aspalter, 2021c for more discussion on the development and insights of behavioral economics and behavioral social policy).(2) In the above (in Figures 1 and 2), the concept of X-efficiency has been replaced with a concept of equality; hence, we could talk about X-equality, instead of X-efficiency as done by Leibenstein.However, in order stress the problems with and all aspects connected to inequality, the concept of X-inequality has been introduced in its place (i.e., a reversed version of a possible X-equality concept).Nevertheless, the two figures shown above are depicting equality losses (instead of a single but aggregated efficiency loss as well in the case of Leibenstein's theory) along the way, from the very top of Figure 1 to the very bottom of Figure 2. Therefore, these equality losses parallel efficiency losses that have been theorized by Leibenstein before.concepts, Leibenstein's (1978a, b) X-efficiency and our concept of X-inequality, talk about lost levels of "positive" outcomes.These can be measured exactly now with empirical analysis, for example, by using Aspalter's Standardized Relative Performance (SRP) Index (Aspalter, Forthcoming a, b, 2006).the level of equality at "societal X-inequality" that is realized (at the moment, in this or that society or locality, and/or for this or that group of people) theoretically possible maximal equality at "individual X-inequality" level of equality at "individual X-inequality" that is actually achieved (at the moment, in this or that society or locality, and/or for this or that group of people) very low level equality (= very level of inequality) comparatively higher level of equality (= lower level of inequality) Correction (12/20/2022): At the time of the diagram in Figure 2 contained formatting errors mistakenly introduced during typesetting.These errors have now been fixed.
Henceforth, the concept and theory of X-inequality, like the overall theory of super inequality, is set to serve the following four major functions:
1. To encourage, increase, and invigorate comparative research on topics related to and in the areas of mass povertization, systemic poverty, and the poverty of minorities and minority groups; the political economy of inequality through access, exclusion, and the type of education and its contents; the political economy of inequality through access, exclusion, financing mechanisms, incentive structures, and power relations in and through health, healthcare, and long-term care systems, and the relative and absolute absence thereof; the political economy of taxing and charging the poor and the near-poor; the political economy of inflation and cold progression in social security financing, taxation, and welfare benefits; and wealth, health, and happiness losses and systemic inequalities for different groups of people and communities over their lifetime, in different life situations, and in different geographic locations, and so forth.
2. To provide understanding, foundation, rationale, and objectives for successive normative social policies as well as systemic and paradigmatic changes in social policy across the widest possible range of policies (and non-policies).
3. To build further theories and facilitate the development of existing theories and paradigms.
4. To guide and encourage empirical research across the globe on hidden and neglected issues, and neglected people, whoever they are and wherever they are, while also incorporating a whole and/or much wider picture, that is, far-reaching analytical perspectives on systemic inequality, systemic poverty, and systemic processes of povertization.
Concluding Thoughts
The concept of X-inequality, as introduced above, is merely an additional concept (or theory) within a new unified general theory of inequality, which has been dubbed the theory of super inequality. The term "super inequality" stands for super-high, super-massive, and super-consequential inequality. It was used very recently in an article by Forbes (2022). Like the terms super aging, super typhoon, or super volcano, the term "super inequality" does not imply a positive trait, but rather a universally high level and high impact of the phenomenon of inequality, one that touches, determines, and changes people's life chances, life events, and life outcomes, and does so for entire families, communities, genders, sexually diverse groups, ethnic groups, societies, countries, and world regions as a whole, over short as well as very long periods of time at the same time.
The importance and the wide- and deep-reaching consequences and follow-on consequences of inequality in all its forms and aspects have received a great deal more attention in recent years, particularly because of the works of Piketty (2000, 2006, 2014, 2015, 2020). So far, however, there has been no unified, or even general, theory of inequality. In trying to address this need for theorizing and theory-building, this paper, using the overall method of theory-building, has drawn on a good number of theories plus empirical studies to start building such a universal general theory of super inequality. Its foundation has been provided by theorists, philosophers, and empiricists alike, which should set it onto a much stronger footing.
Nevertheless, as Bottomore (1972, p. 37) put it so aptly, a theory is only as good as it is fruitful, that is, useful for other researchers and for further research studies down the road, in the years and decades ahead of us. This is certainly just the beginning, as theories are not built in just a couple of years but rather over decades. The theories proposed by Bourdieu, Chomsky, Esping-Andersen, Foucault, Fraser, Luhmann, Mohan, Sainsbury, and Tilly have had the benefit of time to grow into their final or most updated forms.
Figure 2 Integrating the impact of ecologically conditioned individual inequalities (or "equality barriers"): the impact of individual, family- and community-based, health- and psychology-based, gender- and sexuality-based, and age- and ethnicity-based inequalities ("equality barriers"). | 2022-12-21T16:13:50.880Z | 2022-12-19T00:00:00.000 | {
"year": 2022,
"sha1": "c6df88aa36073f4bc2a8b60858b241b8ae79a588",
"oa_license": "CCBY",
"oa_url": "https://journals.publishing.umich.edu/sdi/article/id/3700/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "612e9458f1ebf1a363e11c2bfc82854cc73e9be9",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
4044485 | pes2o/s2orc | v3-fos-license | Physical Exercise Modulates L-DOPA-Regulated Molecular Pathways in the MPTP Mouse Model of Parkinson’s Disease
Parkinson’s disease (PD) is characterized by the degeneration of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc), resulting in motor and non-motor dysfunction. Physical exercise improves these symptoms in PD patients. To explore the molecular mechanisms underlying the beneficial effects of physical exercise, we exposed 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated mice to a four-week physical exercise regimen, and subsequently explored their motor performance and the transcriptome of multiple PD-linked brain areas. MPTP reduced the number of DA neurons in the SNpc, whereas physical exercise improved beam walking, rotarod performance, and motor behavior in the open field. Further, enrichment analyses of the RNA-sequencing data revealed that in the MPTP-treated mice physical exercise predominantly modulated signaling cascades that are regulated by the top upstream regulators L-DOPA, RICTOR, CREB1, or bicuculline/dalfampridine, associated with movement disorders, mitochondrial dysfunction, and epilepsy-related processes. To elucidate the molecular pathways underlying these cascades, we integrated the proteins encoded by the exercise-induced differentially expressed mRNAs for each of the upstream regulators into a molecular landscape, for multiple key brain areas. Most notable was the opposite effect of physical exercise compared to previously reported effects of L-DOPA on the expression of mRNAs in the SN and the ventromedial striatum that are involved in, among other processes, circadian rhythm and signaling involving DA, neuropeptides, and endocannabinoids. Altogether, our findings suggest that physical exercise can improve motor function in PD and may, at the same time, counteract L-DOPA-mediated molecular mechanisms. Further, we hypothesize that physical exercise has the potential to improve non-motor symptoms of PD, some of which may be the result of (chronic) L-DOPA use. Electronic supplementary material The online version of this article (10.1007/s12035-017-0775-0) contains supplementary material, which is available to authorized users.
Introduction
Parkinson's disease (PD) is characterized by the degeneration of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc). The clinical phenotype encompasses motor symptoms-including bradykinesia, rigidity, tremor, gait dysfunction, and postural instability-and non-motor symptoms such as sleeping disturbances, pain, or cognitive deficits that affect executive functions, attention, mood, and working memory [1][2][3]. Levodopa (L-DOPA), a precursor of DA, has been used since the 1960s to treat PD motor symptoms and is still considered the gold standard of therapy [4,5]. In recent years, physical exercise-including intervention strategies such as aerobic exercise (e.g., treadmill exercise, cycling, or dancing) or strength training (e.g., using a modified fitness counts program or progressive resistance exercising)-has been reported to improve DA signaling [6,7] and motor dysfunction [8][9][10][11], including bradykinesia [12,13], rigidity [14], and tremor [12]. Physical exercise has also been reported to improve less dopamine-dependent symptoms involving postural control such as turning performance [6] and instability [15], as well as cognitive function [2,16,17] in PD patients. Although these beneficial clinical effects of exercise on PD symptoms are evident, the underlying molecular mechanisms are not well understood. A better understanding of these processes may ultimately lead to a more efficient treatment of these symptoms, through directly targeting the underlying pathways.
Systemic administration of 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) in mice results in the loss of nigrostriatal DA neurons, and is widely used to study the pathophysiological mechanisms underlying DA neuron degeneration in PD [18]. Moreover, similar to human PD, physical exercise improves motor behavior and reduces cognitive impairment in MPTP-treated mice [19][20][21][22][23]. To our knowledge, in this study, we elucidate for the first time the molecular pathways underlying the beneficial effects of exercise in PD, using the MPTP mouse model of PD.
Methods Animals
Six-month-old male C57BL/6J mice were housed, five to a cage, with ad libitum access to food and water and at a constant 12/12 h light/dark cycle (lights on between 07:00 and 19:00 h). Room temperature was controlled at 21°C and rooms were homogeneously lit at 60 lux with controlled humidity. Following arrival, the mice were acclimatized to their new housing for 1 week, after which they were randomly assigned to one of four treatment groups: (1) saline-treated; (2) saline-treated with physical exercise; (3) MPTP-treated; and (4) MPTP-treated with physical exercise. MPTP-HCl (Sigma-Aldrich) dissolved in saline was administered via four intraperitoneal injections at 2 h intervals, amounting to a total administered dose of 70 mg/kg (free base), which in a dose-response pilot experiment was found to be the highest tolerable dose, with a survival rate of 75%. The intended dose of 80 mg/kg, based on similar previous experiments [7,24,25], resulted in a higher toxicity and death rate that could not be justified. The control mice underwent the same protocol using saline injections. Mice were allowed to recover from the injections for 2 weeks. All saline-treated mice survived the protocol, n = 14 for group (1) and n = 14 for group (2), whereas, due to MPTP treatment, group (3) and group (4) eventually consisted of n = 10 and n = 13 mice, respectively.
Physical Exercise
A forced chronic and aerobic physical exercise regimen was initiated 3 weeks following MPTP or saline treatment and was performed daily. Mice ran 30 min twice a day during a training period of 28 consecutive days in individual, horizontal lanes on a five-lane treadmill (Panlab Harvard Apparatus) at a speed of 20 cm/s (as used before in comparable experimental setups [7,24,26,27]). Automated short air puffs were used to stimulate the mice to keep running when drifting too far to the back of the lane. All mice were able to perform the physical exercise without any noticeable problems. Mice assigned to the groups without physical exercise were placed in their housing cage in the same experimental room, adjacent to the treadmill.
Behavioral Testing
Behavioral testing commenced 1 week before the physical exercise regimen started (week 0), and was repeated each week (similar to comparable experiments by others [7,[28][29][30]) during the exercise regimen (weeks 1-4): beam walk on the first, rotarod on the third, and open field on the fifth day of each week, in each case performed between 08:00 and 13:00 h. Prior to all behavioral tests, the animals of all four treatment groups were habituated to the experimental room for 1 h. Mice from different treatment groups were tested concurrently on the rotarod and in the open field.
Open Field
The mice were placed in a white plexiglass box (50 × 50 × 40 cm) and video recorded from above for 30 min using EthoVision XT 7.0 software (Noldus Information Technology B.V., Wageningen, The Netherlands). Afterwards, the parameters "total walking distance", "total movement time", "mean velocity", and "mean angular velocity" were calculated by the software, as described previously [31].
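For readers without access to the tracking software, the sketch below illustrates how these four parameters could in principle be derived from a tracked (x, y) path. EthoVision computes them internally, so the parameter definitions, the movement threshold, and the sampled trajectory are assumptions for illustration only, not the software's actual implementation.

```python
# Hypothetical sketch: deriving open-field parameters from a tracked (x, y) path.
import numpy as np

def open_field_parameters(x, y, dt, move_threshold=2.0):
    """x, y in cm sampled every dt seconds; move_threshold in cm/s (assumed)."""
    dx, dy = np.diff(x), np.diff(y)
    step = np.hypot(dx, dy)                      # distance covered per sample
    speed = step / dt                            # cm/s per sample
    heading = np.unwrap(np.arctan2(dy, dx))      # heading angle, unwrapped
    turn = np.abs(np.diff(heading))              # change in heading per sample
    total_distance = step.sum()                                  # cm
    movement_time = dt * np.count_nonzero(speed > move_threshold)  # s
    mean_velocity = speed.mean()                                  # cm/s
    mean_angular_velocity = np.degrees(turn).mean() / dt          # deg/s
    return total_distance, movement_time, mean_velocity, mean_angular_velocity

# Illustrative trajectory: a mouse circling the arena at constant speed.
t = np.arange(0, 10, 0.1)
x, y = 10 * np.cos(t), 10 * np.sin(t)
print(open_field_parameters(x, y, dt=0.1))
```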
Rotarod
Mice were placed on the rotarod apparatus (IITC Inc., Woodland Hills, CA, USA) with a rod diameter of 32 mm and an increasing speed of 4 to 38 rpm in 300 s. Five mice were tested simultaneously on the rotarod and their latency to fall was measured (similar to [7,30,32]). On each testing day, each mouse performed one pre-trial and three trials, each with a maximum duration of 300 s and with a minimum of 1 h of rest between the trials. The pre-trial enabled the mice to habituate (again) to the rotarod and was not included in the results.
For each testing day, the latency times of the three trials were averaged per mouse.
Beam Walk
We assessed the motor coordination and balance by measuring the ability of the mice to traverse a narrow beam [28,33]. The mice were placed on a white plasticized iron rod (full length 80 cm, diameter 10 mm) suspended at 40 cm height and were trained to cross the beam to their home cage. Training of the mice occurred on the first day. During the training, the distance to cross was increased each time they successfully reached their cage, until they were able to reach their home cage over the full length of the beam. Each week, the mice were habituated again to the experimental setup by a pre-trial, which was followed by three trials in which the time the mice took to cross the full beam and reach their home cage was recorded; the inter-trial interval was in all cases at least 1 h. For each testing day, the times of the three trials were averaged per mouse.
Immunohistochemistry
We performed immunohistochemistry to establish TH protein expression [24,34] in the DL, VM, SNpc, and VTA. Twenty-four hours following their last exercise training, mice were sacrificed by cervical dislocation, and brains were dissected and fixated in 4% paraformaldehyde in PBS solution for 3 h and subsequently cryoprotected by immersion in 30% sucrose for 24 h. After cryosectioning, DAB staining was performed on 20-μm-thick coronal slices, placed on gelatinized glass slides. For this, the sections were washed with PBS (3 × 10 min), non-specific sites were blocked with blocking buffer (2.5% normal donkey serum, 2.5% normal goat serum, 1% BSA, 1% glycine, 0.1% lysine, and 0.4% Triton X-100 in PBS) for 30 min, and the sections were incubated with rabbit anti-tyrosine hydroxylase (TH, 1:1000; Pel-Freez Biologicals #P40101-0; lot no. 19335) for 16 h at 4°C. This was followed by 1 h incubations with biotinylated goat-anti-rabbit (1:200; Jackson ImmunoResearch; 711-065-152; lot no. 117858) and avidin-biotin-peroxidase complex (A and B 1:800; Vectastain Elite ABC kit, PK-6100 Standard), with PBS washing steps in between. To visualize antibody binding, the sections containing the SNpc and ventral tegmental area (VTA) were incubated for 30 min, and those containing the dorsolateral striatum (DL) and ventromedial striatum (VM) areas for 20 min, in a DAB/H2O2 solution potentiated by ammonium nickel sulfate. The sections were subsequently dehydrated and coverslipped. For each mouse, every sixth section throughout the DL, VM, SNpc, and VTA was included in the counting procedure, and for optimal comparison between groups, sections of different treatment groups were stained concurrently. Images were captured by a Leica DM6000 B microscope. TH-positive (TH+) cells were counted in the sections of the SNpc (−2.54 to −3.88 mm to Bregma [35]) and VTA (−2.92 to −3.88 mm to Bregma [35]), using a 20× magnification. The number of TH+ cells in each section (both the left and right side) was counted by a blinded assessor and averaged over the total number of sections per animal. DA fiber density was estimated in the DL (1.18 to −0.10 mm to Bregma [35]) and VM (1.54 to 0.62 mm to Bregma [35]) by quantifying the optic density (OD) with FIJI [36], using a 5× magnification. In both areas, the OD per section was determined by averaging the OD of ten separate areas within the striatal matrix (i.e., in-between the striosomes). Subsequently, the OD was normalized by subtracting the OD of the corpus callosum (CC) or anterior commissure (AC) for the DL and VM, respectively, in the same section, and all sections were averaged per animal.
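The density quantification just described amounts to a simple per-section average minus a background value; a minimal sketch follows, with illustrative numbers rather than measured values.

```python
# Hypothetical sketch of the OD normalization described above (illustrative numbers).
import numpy as np

def normalized_od(matrix_ods, background_od):
    """Mean OD of the ten striatal-matrix sampling areas of one section, minus the
    background OD (corpus callosum or anterior commissure) of the same section."""
    return np.mean(matrix_ods) - background_od

sections = [  # (ten matrix ODs, background OD) per section of one animal
    ([0.62, 0.60, 0.58, 0.64, 0.61, 0.59, 0.63, 0.60, 0.62, 0.61], 0.20),
    ([0.55, 0.57, 0.54, 0.58, 0.56, 0.55, 0.57, 0.56, 0.54, 0.58], 0.19),
]
per_animal = np.mean([normalized_od(m, bg) for m, bg in sections])
print(per_animal)
```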
RNA Isolation and Sample Preparation
Twenty-four hours following the last physical exercise training, brains of 8-10 mice per group-that were sacrificed by cervical dislocation-were dissected, immediately frozen on dry ice, and stored at −80°C until further preparation. Specific brain areas, i.e., prefrontal cortex (PFC), DL, VM, VTA, SN, and pedunculopontine nucleus (PPN), were then cryopunched based on the stereotaxic atlas of the mouse brain [35] from 200-μm-thick coronal slices, using punch needles with a diameter of 0.5 and 0.75 mm (see Online Resource 1 for the estimated punching locations per area). All specimens were kept at −20°C during processing. For RNA isolation, punched samples were homogenized with a TissueLyser (Retsch GmbH) in 800 μL TRIzol reagent and RNA isolation was performed according to the manufacturer's instructions (Invitrogen). Total RNA concentration was determined with a Nanodrop TM ND-1000 spectrophotometer (Thermo Fisher Scientific Inc.), and RNA quality was visually assessed by 1% agarose gel electrophoresis. Genomic DNA was removed by treatment with DNase I in the presence of RNAsin (Thermo Fisher) in 5× FSB buffer and RNAse-free water. Subsequently, total RNA samples were stored at −80°C until further use. For each treatment group and brain area, RNA samples of six mice were pooled for RNAseq analysis.
RNA Sequencing and Data Processing
All RNA samples were subjected to RNA sequencing (RNAseq; HudsonAlpha Genomic Services Lab, Huntsville, AL) as performed before [37]. In short, total RNA concentration was estimated by Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) and RNA integrity by using the Agilent 2100 Bioanalyzer (Applied Biosystems, Carlsbad, CA, USA). RNAseq libraries were formed from approximately 500 ng total RNA of each pooled sample, followed by poly(A) enrichment. RNAseq was performed using paired-end sequencing on Illumina HiSeqH2000 (Illumina, San Diego, CA, USA), at 50 base pairs, generating over 25 million paired reads per sample. Raw RNAseq FASTQ files were demultiplexed by bcl2fastq conversion software v1.8.3 (Illumina, Inc., San Diego, CA, USA) using default settings.
RNAseq data was analyzed using GeneSifter software (VizX Labs, Seattle, WA). RNAseq reads were mapped to the Mus musculus reference genome build 37.2, and for this, the reads were trimmed by 15 base pairs at the five-prime end. Subsequently, transcript abundance was calculated by estimating the reads per kilobase of exon per million mapped reads (RPKM), and normalization to the number of mapped reads was used for comparison of two mRNA sets. A t test was used for pairwise comparison and a likelihood ratio test to adjust for distribution probability.
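For reference, the RPKM normalization mentioned above corresponds to the following simple calculation; the sketch below uses made-up numbers and is not the GeneSifter implementation.

```python
# Reads per kilobase of exon per million mapped reads (RPKM), as a plain formula.
def rpkm(read_count, exon_length_bp, total_mapped_reads):
    return read_count / (exon_length_bp / 1e3) / (total_mapped_reads / 1e6)

# Example: 800 reads on a 2.4 kb transcript in a library of 25 million mapped reads.
print(rpkm(800, 2400, 25_000_000))  # ~13.3
```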
qPCR Validation
The RNAseq results were validated by comparing expression levels of at least eight mRNAs/genes per area with their expression as established by qPCR. These genes were chosen randomly, although there was one requirement, namely that genes from all three comparisons of interest, i.e., the comparisons to assess the effect of MPTP (group 3 vs. group 1), physical exercise (group 2 vs. group 1), and physical exercise in the MPTP model of PD (group 4 vs. group 3), should be included. RNA from the same samples used for the RNAseq pools was reverse-transcribed to cDNA with random primers using the RevertAid H Minus First Strand cDNA Synthesis Kit (Thermo Scientific, #K1632 lot no 00167909) according to the manufacturer's protocol. Three-step qPCR (95°C for 10 min, followed by 45 three-step cycles of 95°C for 5 s, 65°C for 10 s, and 72°C for 20 s, and the generation of melting curves from 70°C to 95°C; Rotor-Gene 6000 Series, Corbett Life Science Pty. Ltd.) was performed using the 2× SensiFAST SYBR No-ROX mix (Bioline lot no SF582-313209) and primers designed with NCBI Primer-Blast (www.ncbi.nlm.nih.gov/tools/primerblast/) and synthesized at Sigma Life Sciences (The Netherlands) (for a complete overview of the primers used, see Online Resource 10, Supp. Table 1). The housekeeping genes ACTB and YWHAZ were used as reference for normalization of gene expression. Based on the qPCR results, the minimum requirements to be included in the enrichment analysis (regarding fold change (FC) cut-off, maximum likelihood ratio value, and minimal RPKM value) were adjusted so that at least 90% of the expression changes could be validated by qPCR. As there was insufficient remaining RNA available to perform the complete qPCR validation for the PPN RNAseq data, the same cut-off values were used as for the other brain areas.
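The text does not spell out the qPCR quantification formula; a common way to normalize target-gene expression to reference genes such as ACTB and YWHAZ is the 2^-ΔΔCt method, sketched below with invented Ct values. This is offered only as a plausible reading, not as the authors' exact calculation.

```python
# Hedged sketch of reference-gene normalization via the common 2^-ΔΔCt method.
import numpy as np

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene, normalized to reference genes."""
    d_ct_case = ct_target_case - ct_ref_case   # ΔCt in the treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the control sample
    return 2.0 ** -(d_ct_case - d_ct_ctrl)     # 2^-ΔΔCt

# The reference Ct could be the mean of ACTB and YWHAZ (invented values below).
print(fold_change(24.1, np.mean([18.0, 19.2]), 25.3, np.mean([18.1, 19.0])))
```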
Overlap of MPTP-and Exercise-Regulated Genes
To determine the direct effect of exercise on MPTP-regulated genes, we looked at the overlap between the genes regulated by MPTP (group 3 vs. group 1) and the genes regulated by exercise in the MPTP model (group 4 vs. group 3). To quantify this overlap, we used the hypergeometric distribution test [38], P(X = x) = [C(M, x) · C(N − M, n − x)] / C(N, n), and determined the chance of observing exactly x overlapping genes from a total of n differentially expressed genes by exercise in the MPTP model, with a total of M genes that were differentially expressed by MPTP and a total of N genes detected with RNAseq. The number of unique genes detected with RNAseq in each brain area (N) consists of genes detected in both comparisons (group 3 vs. group 1 and group 4 vs. group 3), irrespective of their FC or expression p value. Of note, for all comparisons only protein-coding genes were considered.
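A minimal sketch of this overlap test using SciPy follows; the gene counts are invented and serve only to show how the paper's N, M, n, and x map onto the hypergeometric distribution.

```python
# Hypergeometric overlap test, with the paper's symbols:
# x overlapping genes, n exercise-regulated genes (draws),
# M MPTP-regulated genes (successes), N genes detected by RNAseq (population).
from scipy.stats import hypergeom

def overlap_probabilities(x, n, M, N):
    # SciPy's hypergeom takes (population size, number of successes, number of draws).
    rv = hypergeom(N, M, n)
    p_exact = rv.pmf(x)        # P(X = x), as described in the text
    p_at_least = rv.sf(x - 1)  # P(X >= x), the usual enrichment-style p value
    return p_exact, p_at_least

# Invented counts: 40 overlapping genes, 300 exercise-regulated genes,
# 500 MPTP-regulated genes, 15,000 detected genes.
print(overlap_probabilities(40, 300, 500, 15000))
```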
Enrichment Analysis and Building of Molecular Landscapes
The Ingenuity pathway analysis software package (www.ingenuity.com) was used to identify enriched categories in the lists of differentially expressed protein-coding mRNAs in each of the brain areas [39]. Again, we focused on the three main comparisons of interest (see above), i.e., the comparisons that assess the effect of MPTP, physical exercise, and physical exercise in the MPTP model of PD, in the six brain areas. Ingenuity assigns genes, or rather their corresponding mRNAs/proteins, to functional (sub)categories, i.e., "canonical pathways" and "biofunctions", with the latter including "diseases and disorders" and "molecular and cellular functions". In addition, Ingenuity generates a list of "upstream regulators", i.e., proteins or compounds that regulate multiple proteins/mRNAs from the input list. When possible, the program also calculates a z score that is based on the expression changes of the input mRNAs and that is a measure of the directionality of the upstream regulator, canonical pathway, or biofunction. A z score < −2 or > 2 is considered significant. For all analyses, only functional categories and upstream regulators with significant enrichment (i.e., Benjamini-Hochberg corrected p < 0.05) and containing at least two genes were taken into account. Proteins/mRNAs regulated by the top upstream regulators were analyzed in more depth to identify their relation to physical exercise-induced processes in the MPTP model of PD (i.e., the comparison of group 4 with group 3). Guided by the results of the Ingenuity enrichment analyses, an extensive literature search was performed for the (putative) roles of all proteins encoded by the differentially expressed mRNAs as well as their functional interactions, using the UniProt Protein Knowledge Base (http://www.uniprot.org) and PubMed (http://www.ncbi.nlm.nih.gov/pubmed). Based on these findings and applying an approach similar to the one we used previously for genome-wide association and expression data [39][40][41][42], we then built molecular landscapes containing interacting proteins encoded by the mRNAs that are differentially expressed by physical exercise and are known to be regulated by the top regulators for each brain area. To complement these protein interaction cascades, we added a number of proteins that were not encoded by the differentially expressed mRNAs but that have been implicated in PD etiology through other lines of (genetic) evidence. In this respect, proteins encoded by familial PD candidate genes were included if they have at least one functional interaction with one or more other landscape proteins. Additional proteins were included when having at least two interactions with other landscape proteins. The molecular landscapes were drawn with Serif DrawPlus 4.0.
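As an illustration of the inclusion rule just described (at least one interaction with the landscape for familial PD candidate genes, at least two for any other added protein), here is a small, hypothetical sketch; the protein names and interaction pairs are placeholders, not data from the study.

```python
# Hypothetical sketch of the landscape-extension rule described above.
def extend_landscape(core_proteins, candidates, interactions, familial_pd):
    """Return candidate proteins meeting the interaction criteria with the core set."""
    core = set(core_proteins)
    added = []
    for protein in candidates:
        partners = {b for a, b in interactions if a == protein}
        partners |= {a for a, b in interactions if b == protein}
        n_links = len(partners & core)
        threshold = 1 if protein in familial_pd else 2  # familial PD genes: >=1 link
        if n_links >= threshold:
            added.append(protein)
    return added

core = ["FOS", "NR4A1", "CREB1"]                       # placeholder landscape proteins
interactions = [("SNCA", "FOS"), ("SNCA", "CREB1"), ("LRRK2", "CREB1")]
print(extend_landscape(core, ["SNCA", "LRRK2"], interactions, {"SNCA", "LRRK2"}))
```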
Statistics
Statistical comparisons of values between multiple treatment groups were carried out using a two-way ANOVA. For behavioral test data, with data at multiple time points, a linear mixed model was applied using SPSS (IBM, version 23), with "week", "physical exercise", and "MPTP" as fixed factors to calculate the main effects of the training period and physical exercise, and the interaction between physical exercise and MPTP. The main effect of MPTP in the behavioral tests was assessed using a pairwise comparison of saline-treated and MPTP-treated mice before the start of the exercise regimen. For pairwise comparison, an F test was used to determine if the distributions of the two compared groups have the same variance. Based on the F test, a Student's t test for equal or unequal variance was then used to evaluate the significance of the expression differences. For all comparisons, data are represented as mean with the standard error of the mean (SEM), and a p value < 0.05 was considered statistically significant.
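The analysis was run in SPSS; a rough equivalent of such a linear mixed model in Python (statsmodels), with a random intercept per mouse and entirely synthetic data, might look as follows. Column names, effect sizes, and the random-effects structure are assumptions for illustration only.

```python
# Hedged sketch of a linear mixed model with week, exercise, and MPTP as fixed factors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for mouse in range(20):                       # synthetic cohort of 20 mice
    exercise = mouse % 2
    mptp = (mouse // 2) % 2
    for week in range(5):                     # weeks 0-4, as in the study design
        time_s = 10 - 0.5 * week - 1.0 * exercise + 0.8 * mptp + rng.normal(0, 1)
        rows.append(dict(mouse=mouse, week=week, exercise=exercise,
                         mptp=mptp, time_s=time_s))
df = pd.DataFrame(rows)

# Random intercept per mouse; fixed effects for week and the exercise x MPTP interaction.
model = smf.mixedlm("time_s ~ week + exercise * mptp", df, groups=df["mouse"])
print(model.fit().summary())
```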
The p values calculated with the hypergeometric distribution test were adjusted for multiple testing using the Bonferroni correction.
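For completeness, the Bonferroni adjustment is simply a multiplication of each p value by the number of tests, capped at 1 (illustrative values below).

```python
# Bonferroni correction for a list of p values.
def bonferroni(p_values):
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

print(bonferroni([0.004, 0.02, 0.3]))  # with m = 3 tests -> [0.012, 0.06, 0.9]
```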
Results
In this study, we assessed the effects of physical exercise in the MPTP-treated mouse model of PD at the behavioral and molecular levels. At baseline, i.e., following recovery from MPTP treatment but before the exercise regimen started, MPTP-treated mice showed an increased total walking distance (p < 0.01), total movement time (p < 0.005), and mean velocity (p < 0.005), and a decreased mean angular velocity (p < 0.005) in the open field compared to saline-treated control mice. In contrast, their performance on rotarod and beam walk tests was not significantly different from controls (Fig. 1).
In Fig. 2, the effects of physical exercise during the course of the training period relative to baseline are shown for each of the four treatment groups. The beam walk task showed a clear training effect over time in all groups (main effect of "week" p < 0.001), and the test performance was improved by physical exercise in both the MPTP-treated and saline-treated mice, without significant differences between the groups (main effect of physical exercise p < 0.05) and no significant interaction between physical exercise and MPTP treatment (Fig. 2a). Rotarod performance was also significantly improved by physical exercise (p < 0.01), but no improvement over time or interaction with MPTP treatment was found (Fig. 2b). Of the tested parameters in the open field (total walking distance, total movement time, mean velocity, and mean angular velocity), the mean angular velocity was increased (p < 0.001), and the total movement time showed a decreasing trend (p = 0.051) for all treatment groups over time during the exercise regimen (i.e., main effect of "week"). There was no significant (main) effect of physical exercise on any of the four tested open field parameters, only a trend towards a higher "mean velocity" (p = 0.082). However, for all four open field parameters, significant interactions between physical exercise and MPTP treatment were found (p < 0.05). Physical exercise increased the walking distance and mean velocity of saline-treated mice, but not of MPTP-treated mice. Moreover, physical exercise increased the total movement time of saline-treated mice and decreased that of MPTP-treated mice. This opposite effect was also observed for mean angular velocity, i.e., a decrease by physical exercise in saline-treated mice and an increase in MPTP-treated mice (Fig. 2c-f).
TH Depletion in the SNpc and Striatum Following MPTP Treatment
The number of DA neurons in the SNpc and VTA of each treatment group, as well as an estimate of DA fiber density in striatal target areas (DL and VM, respectively), was determined by immunohistochemistry for TH, the rate-limiting enzyme in DA synthesis. These measures were primarily taken to confirm and estimate the degree of neuronal loss due to MPTP treatment, but they may also provide some insight into whether exercise could affect these structural changes. MPTP significantly reduced the number of TH+ cells in the SNpc (p < 0.005), but not in the VTA. Pairwise comparison between the treatment groups revealed that the number of TH+ cells in the SNpc of MPTP-treated mice without and with physical exercise was reduced by 29 and 20%, respectively, compared to the saline-treated group without exercise (both p < 0.05; Fig. 3). There was no significant effect of physical exercise on the number of TH+ cells in either the SNpc or the VTA, and no interaction between MPTP and physical exercise. In Online Resource 2, the relative OD of TH+ fibers in the DL, the primary striatal target area of the SNpc, is shown. The OD of TH+ fibers was reduced by MPTP (p < 0.05), without a main effect of physical exercise or an interaction between MPTP and physical exercise. Pairwise comparison showed that MPTP decreased the density of TH+ fibers in MPTP-treated mice without exercise by 33% (p < 0.005) compared to saline-treated mice without physical exercise. There was a trend towards an increased TH+ OD by physical exercise in MPTP-treated mice, but this increase was not significant.
Online Resource 3 shows the OD of TH+ fibers in the VM, the primary striatal target area of the VTA. All treatment groups (physical exercise, MPTP, and MPTP + physical exercise) showed a reduced OD of TH+ fibers.

The RNAseq data were obtained from pooled samples, and in order to validate these data, the mRNA expression levels in each of the investigated brain areas were determined in individual samples by qPCR. The results of the qPCR experiments (Online Resource 4) led us to adopt the following requirements for the inclusion of differentially expressed protein-coding mRNAs in the subsequent analyses: FC > 1.2, likelihood ratio < 0.05, RPKM > 5.
A Direct Effect of Physical Exercise on MPTP-Regulated Genes
The overlap between the protein-coding mRNAs that are differentially expressed due to MPTP alone and due to exercise in MPTP-treated mice is represented in Online Resource 5. In all brain areas, the probability of this overlap was calculated by using the hypergeometric distribution test, which showed that for all areas, the overlap is greater than would be expected based on random gene selection (p < 0.05). Further, in all areas, 82-99% of the overlapping mRNAs are regulated in opposite directions by MPTP and exercise. Enrichment analyses of mRNAs that overlap but are regulated in opposite directions are summarized in Online Resource 10, Supp. Table 2. The VTA and PFC show the most significant results, and are also the brain areas with the biggest absolute and relative overlap (i.e., the overlap in number and proportion of mRNAs). The analysis of the VTA displays a downregulation of the top regulator "inosine", whereas the PFC and, to a lesser extent, also the DL show an increase in the effect of dalfampridine and bicuculline.
RNAseq Data Analysis: Enriched Regulators, Pathways, and Biofunctions
Enrichment analysis of the differentially expressed mRNAs was performed for each of the brain areas examined to investigate the effects of MPTP (i.e., comparing the MPTP-treated group without exercise to the saline-treated mice without exercise), physical exercise (i.e., comparing saline-treated mice with exercise to saline-treated mice without exercise), and the effects of physical exercise in MPTP-treated mice (i.e., comparing the MPTP-treated mice with exercise to MPTP-treated mice without exercise). In Tables 1, 2, and 3, a short overview of the main effects-the top regulator(s), canonical pathway(s), and biofunction(s)-of MPTP, physical exercise, and physical exercise in MPTP-treated mice is provided for each brain area separately. A more elaborate overview of these enrichment analyses per brain area can be found in Online Resource 10, Supp. Tables 3-8.
In all brain areas examined, MPTP treatment affected a set of mRNAs that is involved in epilepsy, which is reflected by the presence of the epilepsy-regulating transcription factor CREB1, the convulsants bicuculline and dalfampridine, and the biofunction "epilepsy". Other regulators and related functional themes enriched within the mRNAs affected by MPTP are RICTOR and its regulation of ribosomal and mitochondrial proteins, as well as L-DOPA and DA receptor signaling (Table 1). Of note, in line with the MPTP-mediated decrease in TH expression in the DL and VM (not significant; see above), L-DOPA is a significant upstream regulator in both the DL (p = 2.85E-06; z = −2.635) and the VM (p = 9.97E-04; z = −1.234), but was not among the top 10 upstream regulators and is therefore not included in the (Supplementary) Tables.
Table 1 Main effects of MPTP (MPTP-treated mice without physical exercise vs. saline-treated mice without physical exercise) per brain area. For each of the effects, the corresponding z score indicates the predicted direction of the effect, displayed as increased (z score ≥ 2), no significantly predicted direction (=), or decreased (z score ≤ −2; V). "N/A": no significantly enriched canonical pathways or biofunctions for a brain area (p ≥ 0.05).

Furthermore, in the various brain areas examined, physical exercise affected sets of mRNAs that are regulated by the upstream regulators CREB1, RICTOR, L-DOPA, and dexamethasone. These regulators overlap to some extent with the upstream regulators for the MPTP-regulated mRNAs as mentioned above. However, the top canonical pathways and biofunctions due to physical exercise are not epilepsy-related, but rather associated with "mitochondrial dysfunction" and "movement disorder" (Table 2).
Of note, the predicted direction of effect of the top regulators RICTOR and L-DOPA is changed in the VTA, DL, and VM of exercised MPTP-treated mice compared to exercised saline-treated mice. More specifically, the predicted direction of effect of RICTOR is (strongly) decreased in the VTA and DL after exercise in saline-treated mice, but is strongly increased and has no significant predicted direction in the VTA and DL of exercised MPTP-treated mice, respectively.

Table 2 Main effects of physical exercise (saline-treated mice with physical exercise vs. saline-treated mice without physical exercise) per brain area. For each of the effects, the corresponding z score indicates the predicted direction of the effect, displayed as increased (z score ≥ 2), no significantly predicted direction (=), decreased (z score ≤ −2; V), or very much decreased (z score ≤ −6; VV). "N/A": no significantly enriched canonical pathways or biofunctions for a brain area (p ≥ 0.05).

Table 3 Main effects of physical exercise in MPTP-treated mice (MPTP-treated mice with physical exercise vs. MPTP-treated mice without physical exercise) per brain area. For each of the effects, the corresponding z score indicates the predicted direction of the effect, displayed as very much increased (z score ≥ 6), increased (z score ≥ 2), no significantly predicted direction (=), decreased (z score ≤ −2; V), or very much decreased (z score ≤ −6; VV). "N/A": no significantly enriched canonical pathways or biofunctions for a brain area (p ≥ 0.05).

The mRNAs regulated by the top upstream regulators, among them bicuculline/dalfampridine (in the PFC, Online Resource 10, Supp. Table 13) and CREB1 (in the PPN, Online Resource 10, Supp. Table 14), were studied in greater detail and used to build molecular landscapes for each top upstream regulator in the various brain areas. Here, we provide a short description of each of these molecular landscapes. In Online Resource 10, all landscapes are described in full detail. Of note, in the PPN, L-DOPA is the top upstream regulator following physical exercise or MPTP treatment, but L-DOPA (p = 1.97E-02; z score = −1.964), although significant, was not among the top 10 upstream regulators following physical exercise in MPTP-treated mice.
The RICTOR-regulated mRNAs that are differentially expressed in the DL and VTA due to physical exercise in the MPTP-treated mice encode proteins that are specifically involved in three cellular systems: complexes I-V of the mitochondrial electron transport chain, the ribosome, and the proteasome (Online Resource 10, Supp. Tables 11 and 12). These are complexes that regulate cellular energy, protein translation, and protein degradation, respectively (Online Resources 6 and 7). Of note, physical exercise and RICTOR have an opposite effect on the expression of all differentially expressed mRNAs in the mitochondrial electron transport chain in the DL, whereas physical exercise and RICTOR exert the same direction of effect (i.e., a decreasing effect) on the expression of electron transport chain mRNAs in the VTA.
In the PFC, 8 out of 9 mRNAs differentially expressed due to physical exercise in MPTP-treated mice and regulated by bicuculline/dalfampridine have been linked to epilepsy (Online Resource 10, Supp. Table 13). Immediate-early gene activation is one of the main processes regulated by these mRNAs, e.g., via the early response genes/proteins FOS, FOSB, and NR4A1, which in turn are regulated by insulin and low-density lipoprotein. In Online Resource 8, an overview of the interactions of the proteins encoded by these mRNAs and their regulation by bicuculline/dalfampridine and physical exercise is shown in a molecular landscape.
In the PPN, the proteins encoded by the mRNAs that were differentially expressed due to physical exercise in MPTPtreated mice and regulated by CREB1 have only a limited number of interactions in the built landscape (Online Resource 9). Nevertheless, a few functional themes such as vascular remodeling, neuropeptide signaling, lipid metabolism, epilepsy/immediate-early gene regulation, and calcium signaling were identified, with CREB1 as their central regulator (Online Resource 10, Supp. Table 14).
Discussion
This study aimed to explore the molecular mechanisms underlying the beneficial effects of physical exercise on motor functioning in the MPTP-treated mouse model of PD. After validation of the model, through demonstrating significant nigral neuronal loss following MPTP treatment, the effects of a four-week physical exercise regimen on motor performance, and the accompanying molecular changes in multiple brain areas, were assessed using behavioral tests and RNAseq analysis, respectively. The behavioral tests showed that physical exercise improved beam walk and rotarod performance in both MPTP-treated and control mice, but had a different and often opposite effect on the four tested open field parameters in these groups. Our RNAseq findings demonstrated that physical exercise in MPTP-treated mice mainly affects the expression of mRNAs involved in L-DOPA-mediated pathways in the SN and VM that regulate DA signaling, RICTOR-mediated pathways in the VTA and DL involved in energy metabolism and cellular stress [44,45], and bicuculline/dalfampridine-mediated pathways in the PFC and CREB1-mediated pathways in the PPN that are both a measure of neuronal activity [46,47]. To further elucidate the specific molecular mechanisms underlying the effects of physical exercise in MPTP-treated mice, the differentially expressed mRNAs regulated by these top regulators were integrated into molecular landscapes, depicting the main biological processes and signaling cascades affected.
Our animal model was validated by demonstrating a significant nigral DA neuronal loss following MPTP treatment. The observed moderate neuronal loss in the midbrain due to MPTP treatment, i.e., a 29% reduction of TH-positive neurons in the SNpc without a statistically significant loss in the VTA, is in keeping with earlier studies using a similar MPTP treatment regimen in 5-month-old mice showing 33% loss in the SNpc and no significant loss in the VTA [48]. Other studies, on 8-10-week-old mice, have reported a neuronal loss of 29-45% [49,50], but also of more than 50% in the SNpc [7,24,51]. Differences in the level of neurodegeneration [52] and molecular effects [39] due to MPTP toxicity may be explained by MPTP dosing, the age of the mice, and the duration between MPTP injection and sacrifice [48,52]. We used aged (6-month-old) mice to better model age-dependent processes such as regulation of anti-oxidants [53], neuroplasticity, neurogenesis [54,55], and the immune response in PD [56,57]. To assess how exercise may boost any neuroplastic mechanisms of the injured basal ganglia, the physical exercise regimen was performed within the recovery phase of striatal DA levels as reported in younger MPTP-treated mice [58], but after the acute neurotoxic (molecular) effects of MPTP [39,59]. We did not find a significant effect of physical exercise on the number of surviving DA neurons, but noted a trend towards an increased number of TH-positive neurons in the SNpc and an increased TH-positive fiber density in the DL and VM in MPTP-treated mice with physical exercise compared to MPTP-treated mice without exercise. From previous studies, it remains unclear whether physical exercise can protect against cellular loss in the MPTP mouse model. Preservation of SNpc neurons by physical exercise has been described before [27,51], but the findings were inconsistent [7,22].
Regarding motor function, forced exercise has more effect than voluntary exercise in both PD patients [60] and mice [61], and it activates the same brain areas as anti-PD medication does [62]. In this study, the mice were able to perform the physical exercise without any noticeable problems, suggesting that their physical exercise regimen is comparable to the forced moderate aerobic exercise that has been shown to improve both motor and non-motor functions in PD patients [11,60,63,64]. MPTP treatment alone resulted in an increased activity in the open field, as reported before [48,52,[65][66][67], but did not affect the performance on beam walk and rotarod. It should be noted that the training effect on the beam walk as seen in all four treatment groups, especially in week 1 compared to week 0, may indicate the necessity for more extensive training of the mice before the beam walk task in week 0. Further, the effects of exercise on the motor performance included an improvement on the beam walk and rotarod in both saline- and MPTP-treated groups. However, the effects of physical exercise on the open field parameters in saline-treated mice were either absent or opposite in MPTP-treated animals. These findings suggest that some effects of physical exercise may be dependent on the "disease state" (i.e., saline- or MPTP-treated). It could be argued, however, that the lack of effect of physical exercise in MPTP-treated mice on total walking distance and mean velocity (Fig. 2) may be due to their MPTP-induced hyperactivity (Fig. 1), which could have limited a further increase in motor performance due to physical exercise. This hyperactivity has been observed more often following MPTP treatment [48,65,66,[68][69][70] and may result from compensatory effects induced by, e.g., brain areas of the mesolimbic pathway (see also below). Furthermore, the opposite effect of exercise on total movement time and mean angular velocity in MPTP-treated mice (Fig. 2) compared to the effect of MPTP alone (Fig. 1) suggests that physical exercise counteracts the effect of MPTP. This finding could have important translational value, as axial symptoms in PD, such as hypokinetic rigidity, which is reflected by reduced angular velocity [71][72][73], are notoriously more difficult to treat by medication than appendicular symptoms.
The RNAseq analysis showed that the level of overlap between MPTP-regulated genes and physical exercise-regulated genes differed between the brain areas studied and was particularly high in the PFC and VTA. These data suggest that in the PFC and VTA, physical exercise influences the processes affected by MPTP more directly than in the other areas, in which more indirect mechanisms may prevail. Nevertheless, in all brain areas examined, the majority of overlapping genes (82-99%) were regulated in opposite directions by physical exercise compared to MPTP, suggesting counteracting effects of physical exercise on MPTP-regulated mechanisms. For example, the enrichment analysis of the overlapping genes in the PFC and DL (see Online Resource 10, Supp. Table 2) shows a predicted activation of the top regulators dalfampridine, bicuculline, and CREB1 (indicative of neuronal activation [46,47]), whereas these are inactivated by MPTP.
The roles of the PD-related brain areas examined in this study can be summarized in a simplified basal ganglia circuitry model, wherein PPN, SN, and DL are mainly involved in motor control, and the VTA, VM, and PFC contribute particularly to the regulation of (complex) behavior and cognition (Fig. 6) [74][75][76][77][78][79]. The top regulators (and, to a lesser extent, also the canonical pathways and biofunctions) regulated by physical exercise in the cognition-associated brain areas of MPTP-treated mice showed highly significant predicted directions of effect, whereas these effects were less prominent in the motor-related areas. This implies that, although physical exercise is able to improve motor function (as supported by the behavioral tests), it may also have strong effects on cognition and behavior. This is interesting from a therapeutic point of view, because non-motor symptoms in PD patients, including cognitive impairment, depression, pain, and sleep disorders, are usually less responsive to dopamine replacement therapy and therefore treatment options are limited [80][81][82]. It remains unclear, however, to what extent these motor and non-motor features of PD have truly discernible neuroanatomical or molecular substrates, as effects of changes in mRNA expression in the "behavioral areas" VTA, VM, and PFC on the motor function of our animals cannot be excluded. For example, a recent paper reported that VTA-specific knockout of RICTOR in mice may affect cognition and mood, but also results in hyperactivity in the open field [83]. In addition, it has been suggested that during exercise, the mesolimbic pathway (including the VTA and VM) may provide a compensatory functional activation of the motor loop [84]. Furthermore, whereas L-DOPA is known to improve DL-mediated motor symptoms, it may impair VM function in PD patients [85,86]. Therefore, exercise may counteract L-DOPA-mediated pathways in the VM and as such improve VM functionality, which could in turn result in increased compensatory motor-loop activation. Finally, inhibition of GABAergic interneurons in the PFC by bicuculline increases the release of DA in the DL through the glutamatergic corticostriatal pathway [87][88][89] and may increase the locomotor activity of mice [88,90]. This is in line with the reduced TH expression we observed in the DL and the inactivation of bicuculline/dalfampridine-regulated pathways in the PFC following MPTP treatment, as predicted on the basis of the RNAseq analysis. Moreover, we found no significantly reduced TH expression in the DL of exercised MPTP-treated mice that, in contrast to MPTP-treated mice without exercise, showed a predicted activation of the bicuculline/dalfampridine-regulated pathways in the PFC.
Almost five decades after its introduction [4], the DA precursor L-DOPA is still the gold standard for symptomatic treatment to alleviate the motor symptoms of PD [5]. It should be noted, however, that chronic high-dose L-DOPA use is associated with complications such as dyskinesias [91][92][93]. Moreover, the effects of L-DOPA on non-motor symptoms in PD are even less predictable, and L-DOPA use may even lead to deterioration of these symptoms, e.g., impaired reversal learning or motor sequence learning deficits [94][95][96][97][98][99][100][101]. It has been suggested that these adverse cognitive effects of L-DOPA may be due to a higher L-DOPA demand in the motor systems compared to cognitive areas, resulting in a relative L-DOPA overdose in cognitive areas [102][103][104], e.g., the VM (see also above). Therefore, novel "add-on" treatments that can enable low-dose L-DOPA use and/or reduce the adverse effects of (long-term) L-DOPA use are desirable. In this respect, our study suggests that physical exercise is an attractive add-on treatment for PD, and that exercise combined with L-DOPA treatment may be more beneficial than treatment of PD patients with L-DOPA alone [9,105]. Other findings that support this hypothesis include the reports indicating that physical exercise not only improves the motor symptoms of PD patients [8,9], but also L-DOPA-induced dyskinesias in PD patients [106] and animal models [107], and cognitive function in PD patients [2,16,17]. In this light, it is of note that L-DOPA use may result in alpha-synuclein-induced neuroinflammation [108] that very recently has been shown to be reduced by physical exercise [109][110][111]. Although the major pathways regulated in our study are not directly related to inflammation, L-DOPA-mediated pathways may affect alpha-synuclein regulation [108], and the RICTOR-regulated pathways may improve mitochondrial function and protein turn-over, i.e., processes that have been suggested to reduce alpha-synuclein-induced neuroinflammation [111].

Fig. 6 Overview of the brain areas analyzed, and the top upstream regulators and processes per area. The brain areas are shown in a simplified model of the basal ganglia circuitry. Green, red, and gray triangles depict positive (> 2), negative (< −2), or non-significant z scores, respectively, from the enrichment analyses of the physical exercise-regulated mRNAs in MPTP-treated mice. DL dorsolateral striatum, GPe globus pallidus external, GPi globus pallidus internal, PFC prefrontal cortex, PPN pedunculopontine nucleus, SNpc substantia nigra pars compacta, SNr substantia nigra reticularis, STN subthalamic nucleus, VM ventromedial striatum, VTA ventral tegmental area.
Considering the above, it is worth noting that our landscapes revealed that physical exercise and L-DOPA regulate similar pathways in the SN and VM-often in an opposite direction-and that most of these pathways have been linked to sleeping problems (SN) and cognitive and/or motor dysfunctioning (VM) in PD. For example, the expression of clock proteins was affected by physical exercise and L-DOPA in the SN, a brain region known to be involved in the regulation of REM sleep [112,113] and causing circadian rhythm irregularities when damaged by MPTP [114,115]. Further, the use of L-DOPA can disturb REM sleep [116] and result in a delayed sleep onset in PD patients, which suggests an uncoupling of sleep and circadian regulation [117]. On the other hand, physical exercise can improve circadian rhythm regulation [118][119][120] and may therefore serve as a complementary therapy to strengthen circadian function in PD, as suggested earlier [121].
In the VM, both physical exercise and L-DOPA regulate DA, neuropeptide, and endocannabinoid signaling, but in opposite directions. L-DOPA treatment results in sustained DA signaling in the striatum and can disrupt DA and (endo)cannabinoid receptor crosstalk [122,123]. In contrast, physical exercise may rebalance DA signaling after sustained L-DOPA treatment (by reducing PPP1R1B activation) [107], attenuates depression-like behavior by decreasing the expression of neuropeptides [124], and activates the endocannabinoid system [125][126][127]. In turn, the endocannabinoid system modulates synaptic (DA) transmission in the striatum of PD patients [128][129][130], restores homeostasis following DA depletion [131,132], and exerts beneficial effects on cognition, mood, and nociception [126]. Therefore, physical exercise seems to exert a positive effect on the regulation of DA, neuropeptide, and endocannabinoid signaling. Moreover, these three signaling pathways are not only associated with L-DOPA-induced dyskinesia [133][134][135][136][137][138], a process that is mainly due to dysregulation in the DL, but are also involved in regulating VM-associated cognitive functions and behaviors [124,[139][140][141][142][143][144], supporting the notion that the anatomical and neurophysiological boundaries of the striatal domains regulating control of movement (DL) and (more) cognition-related processes (VM) may functionally overlap [145,146].
In summary, the molecular pathways that are regulated in the SN and VM by both physical exercise and L-DOPA can be directly linked to clinical features of PD. Interestingly, the overall effects of physical exercise on these pathways seem to particularly improve the motor and behavioral clinical phenotype, whereas (chronic) L-DOPA treatment can also cause adverse effects. Moreover, to our knowledge, physical exercise, although it may counteract some L-DOPA-regulated pathways, exerts no adverse effects on PD patients. To confirm the positive effects of physical exercise on cognitive function, future physical exercise studies in PD animal models and patients should include cognitive tests, e.g., the Y-maze, the water maze, or reversal learning tasks. Furthermore, these studies should aim at further elucidating the molecular pathways underlying physical exercise in relation to (chronic) L-DOPA treatment in animal models.
Taken together, our findings provide further evidence that physical exercise improves motor function in PD, while it also affects the regulation of non-motor brain areas of MPTP-treated mice. We found that physical exercise and L-DOPA exert opposite effects on molecular pathways in several PD-associated brain areas, including those involved in sleeping and cognitive function. Overall, the present study suggests that physical exercise has therapeutic potential, not only to improve motor function but also to improve non-motor symptoms of PD, and perhaps even to alleviate detrimental effects associated with (chronic) L-DOPA use.
Conflict of Interest
The authors declare that they have no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2018-04-03T03:55:47.675Z | 2017-10-10T00:00:00.000 | {
"year": 2017,
"sha1": "80b9d7116c075a420a2e802b5ac8c9cb6455c615",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12035-017-0775-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "24fd6c61b037ec2df447d8725b76700a8309b8cc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
86201026 | pes2o/s2orc | v3-fos-license | CONTRIBUTION TO THE STUDY OF Himatanthus sucuuba: LATEX MACROMOLECULE, MICROELEMENTS AND CARBOHYDRATES
The polymeric material in the latex of Himatanthus sucuuba (Spruce) Woodson was identified by spectroscopic methods as cis-polyisoprene (Mn = 192; Mw = 571; Mw/Mn = 2.97). ICP-MS analysis of microelements in the aqueous phase showed the most abundant to be Ca (354 pg/g) and Mg (250 pg/g). Carbohydrate analysis of the aqueous phase by HPLC-PAD showed arabinose, glucose, xylose, rhamnose and galactose to be the predominant saccharides.
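The dispersity quoted above follows directly from the two molar-mass averages; the short sketch below (our illustration, with the values taken from the abstract) simply reproduces that arithmetic.

```python
# Consistency check of the dispersity (Mw/Mn) reported for the latex cis-polyisoprene.
m_n = 192.0  # number-average molar mass from GPC (value quoted in the abstract)
m_w = 571.0  # weight-average molar mass from GPC (value quoted in the abstract)

dispersity = m_w / m_n
print(f"Mw/Mn = {dispersity:.2f}")  # ~2.97, matching the reported value
```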
INTRODUCTION
Himatanthus sucuuba (Spruce) Woodson (Apocynaceae) is a medium-sized tree growing on firm ground in the Amazon region, popularly known as sucuuba, sucuba or janaguba. Its wood is used for construction and other purposes. The trunk bark is popularly used for the treatment of gastritis, stomach ulcers and hemorrhoids (van den Berg, 1984), and as an analgesic (Elisabetsky et al., 1990). In the Peruvian Amazon it is reportedly used for hernias, boils and tumors (Perdue et al., 1978).
Fractionation of the bark extract, guided by antifungal bioassay, demonstrated strong activity against Cladosporium sphaerospermum (Silva et al., 1998).
Antiphlogistic and analgesic activities were also detected in in vivo tests, and associated with the presence of cinnamoyl esters of hydroxytriterpenes in the latex (Miranda et al., 2000).
Hevea brasiliensis latex, for example, contains 30 to 40 % (w/v) of isoprenoid material, and about 5 % (w/v) of nonisoprenoid substances in its composition. The latter are dissolved or suspended in the aqueous medium, or are adsorbed on the surface of the rubber (Tanaka, 1989). The non-polymeric material consists mainly of minerals, lipids, terpenes, proteins, carbohydrates and amino acids (Aik-Hwee et al., 1993; Moir, 1959).
The objectives of this report were to determine the microelement and carbohydrate composition and to characterize the polyisoprene in the latex of Himatanthus sucuuba.
EXPERIMENTAL
The most abundant elements, Ca and Mg, have been linked to the prevention of disturbances of the digestive system and also in general of malignant diseases (Wood et al., 1995). On the other hand, the trace elements, especially manganese, iron, copper and zinc, which are also important to the pharmacological properties of this plant (Silva, 1997), were present in much lower quantity in the latex than in the bark (Silva, 1997). They were also lower in H. sucuuba than in other medicinal plants with healing properties (Pereira et al., 1998).
CONCLUSION
The polymer in the latex was identified as cis-polyisoprene, whose occurrence in many plants accompanies that of other isoprene-based secondary products, such as triterpenes and their esters. The most abundant elements, calcium and magnesium, can be considered as contributors to the antiphlogistic and antitumoral medicinal properties attributed by popular use to the latex.

Several functions have been attributed to latexes and resins, including transport and storage of nutrients, regulation of water-balance, the capacity to store non-functional products of secondary metabolism and protection against pathogens. The occurrence of isoprenoid substances in latexes is already well established in the plant families Euphorbiaceae, Moraceae, Asclepiadaceae, Compositae, Guttiferae and Apocynaceae (Tanaka, 1989).

Himatanthus sucuuba latex was collected within the city limits of Santarém, State of Pará, by Mr. Raimundo S. Carneiro. A voucher specimen was deposited at the Herbarium of the Institute of Biological Sciences of the University of Amazonas, Manaus, Brazil, registered under the number 5436. The pH measurement of the crude latex was made digitally (Hanna model 8417), calibrated with potassium hydrogen phthalate 0.05 M (pH 4.01) and KH2PO4 0.025 M + Na2HPO4 0.025 M (pH 6.86) buffer solutions. For the determination of metals, an aliquot of the latex in natura was freeze dried, submitted to acid digestion as described by Pereira et al. (1998), and analyzed by inductively coupled plasma mass spectrometry (ICP-MS), using a Perkin-Elmer Sciex model Elan 5000 instrument, with semi-quantitative calibration and internal standard containing the elements Li, Mg, Na, P, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Al, Zr, Th, Si, Sr, Mo, Ag, Cd, Ba, Tl, and Pb. The analysis was made in duplicate. NMR spectra (1H and 13C) were recorded on a Bruker AM-200 spectrometer (200 and 50 MHz, respectively), using CDCl3 as solvent and tetramethylsilane (TMS) as internal reference. Chemical shifts are expressed in ppm. Samples were prepared as KBr pellets and their IR spectra obtained with a Nicolet Fourier-transform spectrophotometer, Model Magna-IR 760. Wavenumbers are expressed in reciprocal centimeters (cm-1). Gel permeation chromatography (GPC) was carried out with a Waters 600E apparatus using tetrahydrofuran (THF) as mobile phase; flow rate 1 ml/min, and a Waters styragel column with porosity of 500 Å. Polystyrene standards used in the calibration curve were: A-500 (500 Da); A-1000 (950 Da); A-8 (2900 Da); A-7 (3600 Da); F-1 (9700 Da); A-5 (33000 Da); A-4 (111000 Da); A-3 (200000 Da); A-2 (465660 Da). Refractive index and ultraviolet (200-400 nm) detectors (Waters Models 410 and 991, respectively) were used. High performance liquid chromatography, coupled to a pulsed amperometric detector (HPLC/PAD), was performed using a DIONEX Model DX-300 equipped with a CarboPac PA-1 anionic exchange column. n-Butanol was added to the latex to coagulate the polyisoprene. This mixture was filtered and the filtrate was subjected to liquid-liquid extraction (n-BuOH/H2O). The aqueous fraction was analyzed by the sulfuric acid-phenol method (Dubois et al., 1956) and by HPLC.

The GPC analysis of the polyisoprene indicates a unimodal distribution, with a number-average molecular weight (Mn) of 192 and a weight-average (Mw) of 571. The polydispersity index (Mw/Mn) was 2.97, a value comparable to that of natural rubber from Hevea brasiliensis (Aik-Hwee et al., 1993). The infrared spectrum showed bands between 3036 and 2726 cm-1, corresponding to axial deformations of the methyl, methylene and methine C-H groups. The absorption at 1664 cm-1 was attributed to the axial deformation of the double bond of the cis-polyisoprene. The absorptions at 1376 cm-1 and 1216 cm-1 corresponded to the symmetrical angular deformations of the methyl group and the asymmetric out-of-plane deformation of the methylene group, in accordance with other data for cis-polyisoprene (Aik-Hwee et al., 1993).

critical review of the manuscript and Mr. Raimundo S. Carneiro for the plant material collected. JRAS thanks CAPES for a fellowship during the development of the study and the Botanical Institute -SP for the carbohydrate analyses.

Imprensa Nacional, vol VI, Ministério da Agricultura, Rio de Janeiro, Brasil, pp. 154-155. Dubois, M.; Gilles, A.; Hamilton, J.K.; Rebers, P.A.; Smith, S. 1956. Colorimetric method for determination of sugars and related substances. Anal. Chem., 28:350-355. Elisabetsky, E.; Castilhos, C. 1990. Plants used as analgesics by Amazonian caboclos as basis for selecting plants for investigation.
Table 1: Microelements from the latex of H. sucuuba. | 2019-03-30T13:13:38.084Z | 2003-03-01T00:00:00.000 | {
"year": 2003,
"sha1": "419b7e0e07be80d88dfdd74cb5ab57bfd9796b26",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/aa/a/4hLBxYW4GTbwvfqCL6CyNyM/?format=pdf&lang=en",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "52f3f1bfe0224a98a8d1e94e89e6d0353e9c9b1b",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
253693495 | pes2o/s2orc | v3-fos-license | Continuous Production of Biogenic Magnetite Nanoparticles by the Marine Bacterium Magnetovibrio blakemorei Strain MV-1T with a Nitrous Oxide Injection Strategy
Magnetotactic bacteria (MTB) produce magnetosomes, which are membrane-embedded magnetic nanoparticles. Despite their technological applicability, the production of magnetite magnetosomes depends on the cultivation of MTB, which results in low yields. Thus, strategies for the large-scale cultivation of MTB need to be improved. Here, we describe a new approach for bioreactor cultivation of Magnetovibrio blakemorei strain MV-1T. Firstly, a fed-batch with a supplementation of iron source and N2O injection in 24-h pulses was established. After 120 h of cultivation, the production of magnetite reached 24.5 mg∙L−1. The maximum productivity (16.8 mg∙L−1∙day−1) was reached between 48 and 72 h. However, the productivity and mean number of magnetosomes per cell decreased after 72 h. Therefore, continuous culture in the chemostat was established. In the continuous process, magnetite production and productivity were 27.1 mg∙L−1 and 22.7 mg∙L−1∙day−1, respectively, at 120 h. This new approach prevented a decrease in magnetite production in comparison to the fed-batch strategy.
Introduction
Magnetic nanomaterials are among the most versatile tools available for manufacturing, medicine, and the environmental sciences. Magnetic nanoparticles are continuously making their way into the market in the form of recyclable nanocatalysts, controllable drug nanocarriers, and ultrasensitive nanosensors [1,2]. The nanomagnets used in these refined approaches must be produced with suitable quality and adequate quantity to assure effectiveness and safety. For this reason, the fabrication process of nanomagnets needs to be developed [3].
Several wet and dry chemical methods are characterized in existing literature for the synthesis of magnetic nanoparticles [4]. However, only a few have been adapted to a large scale [3]. Thermal decomposition and co-precipitation are the most explored chemical processes and often result in productions larger than the gram scale [3]. Simeonidis and colleagues [5] designed a continuous flow stirred tank process (5 L working volume) to synthesize iron oxide and sulfide nanoparticles by alkaline co-precipitation of Fe 2+ and Fe 3+ . This method resulted in productions of 0.33 kg/h for iron oxide and 1 kg/h for iron sulfide nanoparticles, with mean diameters of 18 and 20 nm, respectively [5]. However, the nanoparticles presented some aggregation and relatively low shape uniformity. Additionally, such processes generate a considerable volume of highly alkaline aqueous waste, which demands treatment steps before discharge. Park and colleagues designed the production of magnetite nanoparticles with very narrow (<5%) size dispersion, with a yield of 40 g per reaction [6]. The process was based on the thermal decomposition of iron duce magnetosomes in those cells is due to deletions in different sites of the magnetosome gene cluster [24]. In Mv. blakemorei strain MV-1 T , spontaneous non-magnetic mutant cells were also found in the culture [25]. These mutants did not produce several proteins present in the wild-type strain, and they lack the iron uptake system necessary for biomineralization [25]. When growing in a bioreactor, it may be a good strategy to avoid the stationary phase in order to prevent the loss of magnetosome synthesis ability by cells. To keep cells growing in a steady-state corresponding to exponential growth, it is necessary to have continuous growth, maintaining growth at a given rate indefinitely (e.g., maximum growth rate). For this, a fresh, sterile medium is inserted in the reactor vessel while the spent medium with metabolites and cell debris is being removed [26].
In this work, we execute the first reported, to date, cultivation of the marine magnetotactic vibrio Mv. blakemorei strain MV-1 T in continuous mode, using a chemostat strategy, where log phase conditions were maintained constant, including high-nutrient concentration and high magnetosome-producing cell density. To achieve this, we first carried out a fed-batch fermentation with supplementation of an iron source and N 2 O-the main final electron acceptor in microaerobic/anaerobic growth. Afterwards, we set up a gas injection regime to provide a sufficient input of N 2 O during cell growth. Lastly, we developed a chemostat continuous culture in which the productivity of magnetosomes was kept high and constant, diminishing the occurrence of late-growth-phase non-magnetic mutants. During all cultivation experiments, we performed measurements of succinate, nitrogen, and iron (II) concentrations in medium to monitor nutrient consumption and prevent depletion of carbon, nitrogen, and iron. The present paper illustrates the potential of chemostat strategy for the mass cultivation of MTB. This is especially important given the practical difficulties imposed by MTB cultivation (e.g., complex media, low productivity, etc.), as processes with longer high-productivity periods might be obtained from a single inoculum.
Fed-Batch
In our fed-batch experiments (Figure A1), Mv. blakemorei strain MV-1T growth was observed up to 120 h, with the highest specific growth rate observed between 24 h and 48 h (µmax = 0.05 h−1, Figure 1a). However, the mean number of magnetosomes per cell showed its maximum value at 72 h, decreasing from this time on (Figure 1a and Table 1). Magnetite production, whose maximum value was reached at 72 h (p = 32.5 mg·L−1, Table 1), decreased in subsequent intervals due to the decrease in the production of magnetosomes per cell (Figure 1a). Maximum magnetite productivity was reached at 72 h (p = 1.65 mg·L−1·h−1 or 39.7 mg·L−1·day−1, Table 1).
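The productivity values above are volumetric rates over a sampling interval. As a generic illustration of how such rates are derived (our sketch, not code from the study; the concentrations below are hypothetical, not the study's raw data):

```python
# Volumetric productivity between two sampling points of a bioreactor run.
def productivity(c_start_mg_per_l, c_end_mg_per_l, t_start_h, t_end_h):
    """Return productivity in mg/L/h and mg/L/day."""
    rate_h = (c_end_mg_per_l - c_start_mg_per_l) / (t_end_h - t_start_h)
    return rate_h, rate_h * 24.0

# Hypothetical magnetite concentrations at two sampling times:
per_hour, per_day = productivity(10.0, 32.5, 48.0, 72.0)
print(f"{per_hour:.2f} mg/L/h  ~  {per_day:.1f} mg/L/day")
```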
Consumption of N 2 O and Fe 2+ was more intense between 48 and 72 h. In the latter time, the average magnetosome number per cell was the highest (11 per cell). Three pulses of both N 2 O and Fe 2+ were needed to reach their initial concentration (Figure 1b). N 2 O and Fe 2+ supplementation increased both the production and productivity of magnetite at 72 h (32.5 mg·L −1 ·day −1 and 39.7 mg·L −1 , respectively); production, productivity and average number of magnetosomes per cell decreased from this time point (reaching 16.1 mg·L −1 , 12.6 mg·L −1 ·day −1 and 6 magnetosomes per cell, respectively; Figure 1a). Succinate was consumed throughout growth, and its concentration at 120 h was 62% of that at the time of inoculation (Figure 1b).
Carbon and nitrogen were consumed at more even rates throughout the process (Figure 1b). Consumption of nitrogen seems to be less correlated with magnetite synthesis than those of N2O, Fe2+ and carbon. Hence, there is an indication that nitrogen demand was larger for cell growth and division than for magnetosome synthesis. Furthermore, both carbon and nitrogen were present in relative abundance in medium throughout the batch, with 40 and 70% of their initial concentration remaining in medium at 96 h of cultivation (Figure 1b). Thus, these essential nutrients were not limiting for cell growth and magnetosome production during early cultivation times.
Considering that pH substantially impacts both culture growth and magnetosome formation, our experimental setup (Figure A1) relied on a strict pH control provided by an automated built-in system that injects either sterile NaOH or HCl in response to pH variations. During fed-batch, pH was kept within the range 7.0 ± 0.2, even after the supplementation of acidic FeSO4 solution and N2O.
Nitrous Oxide Mass Transfer
According to our results, at an agitation rate of 100 RPM with a N2O flow of 0.5 L·min−1, it takes 35 min to complete N2O saturation starting from zero N2O and ~20% oxygen (Figure 2). However, at 200 RPM, it takes 20 min. Using 300 RPM slightly decreases the time for N2O saturation to 18 min (Figure 2). From consumption analysis during growth, we can see that almost 80% of N2O was consumed in the exponential growth phase. As continuous culture aims at extending the exponential phase conditions, we calculated a N2O injection regime that kept N2O at a level of at least 75% of saturation; considering the consumption of about 80% between 48 and 72 h, roughly 25% of N2O would be consumed within 8 h. Thus, a regime of N2O injection to saturation level every 8 h would keep the concentration at least at 75% of saturation during continuous cultivation. Considering the N2O concentration before sparging and a kLa value of 0.48 at 200 RPM, N2O can be replenished within 1.9 min. Thus, agitation was set to rise from 100 to 200 RPM during 5 min while N2O was sparged in medium to ensure total replenishment. This helped N2O transfer from the gas to liquid phase while agitation's negative effect on growth was minimized.
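To make the injection-regime arithmetic explicit, the sketch below works through the same logic under a simple first-order gas-transfer model: how long a sparging pulse must last given the quoted kLa of 0.48 at 200 RPM, and how many pulses per day an 8-h interval implies. This is our illustration of the reasoning, not code from the study, and the start/end saturation fractions used in the example are assumptions.

```python
import math

# First-order gas-transfer model: dC/dt = kLa * (C_sat - C).
# Time to raise dissolved N2O from a fraction f0 to a fraction f1 of saturation.
def resaturation_time(kla_per_min, f0, f1):
    return math.log((1.0 - f0) / (1.0 - f1)) / kla_per_min

KLA = 0.48  # min^-1 at 200 RPM (value quoted in the text)

# Illustrative choice: recover from 75% back toward saturation during a pulse.
t_pulse = resaturation_time(KLA, f0=0.75, f1=0.90)
print(f"sparging time per pulse ~ {t_pulse:.1f} min")  # on the order of ~2 min

# If ~25% of the dissolved N2O is consumed every 8 h, pulsing every 8 h
# keeps the level at or above ~75% of saturation between injections.
pulses_per_day = 24 // 8
print(f"pulses per day: {pulses_per_day}")
```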
Oxygen dislocation was also studied in mass transfer experiments. This was done to ensure anaerobic conditions, as we cannot guarantee the feed medium was completely free of oxygen. The time and agitation chosen for N2O injection were enough to remove all detectable oxygen from medium (Figure 2).
Continuous Growth
According to the fed-batch experiments data, medium components were consumed at different rates and only the essential nutrients were examined. These limitations are hard to overcome because of the lack of deeper knowledge on bacterial nutritional demands and physiology. Thus, one strategy to support further growth of strain MV-1T in bioreactors would be the addition of whole fresh culture medium (Figure A2). In fed-batch cultivation, the average number of magnetosomes per cell decreases from 72 h (Figure 1a and Table 1) while the proportion of non-magnetic cells increases (Table 1). Here, the loss of the ability to synthesize magnetosomes occurs at later times of growth, although iron continues to be replenished. This supports the idea that the decline in overall magnetite production was due to deletions in the genome region known as the MGC.
In this sense, the implementation of continuous culture would help to maintain high-magnetosome-producing cells, and thus directly influence the production of magnetite. Few to no non-magnetic mutants were observed in a culture of Ms. gryphiswaldense strain MSR-1 with inoculation before medium saturation, even after multiple uninterrupted passages [24]. After medium saturation, non-magnetotactic mutants reached 0.5% of the total population [24].
Iron, amino nitrogen, and succinate are taken up from media by growing cells at different rates, as observed for batch experiments (Figure 1). Measurements during continuous culture are intended for monitoring whether substrates are being kept at sufficient levels to support constant cell density, or whether any of the nutrients are being depleted at a rate greater than replenishment through the fresh media inlet. The latter scenario would require us to make further adjustments in our media and operational conditions (i.e., dilution rate, separate nutrient injection, etc.).
From the results of fed batch growth, the point of 72 h was chosen to initiate continuous culture and intermittent N 2 O injection based on the maximum production of magnetite. The largest Fe 2+ and N 2 O consumption and highest magnetite concentration took place between 48 h and 72 h (Figure 1a,b). The dilution rate employed was equal to 70% of maximum growth rate (µ max ) during the exponential phase.
After the implementation of continuous culture, there was a slight rise in cell density (72-96 h, Figure 3a). Then, a slight decrease (96-120 h) in this parameter followed by stabilization (120-168 h) with small fluctuations was observed (Figure 3a). Nutrient concentrations also stabilized between 72 and 120 h, remaining approximately constant until the end of cultivation (Figure 3b). These phenomena characterize the onset of the steady state.
A substantial increase in the average number of magnetosomes per cell and a sharp decrease in the proportion of non-magnetic cells (Figures 3a and 4c,d, Table 1) occurred when comparing the same time intervals from the fed batch. Although production of magnetite at 72 h was slightly smaller than that in fed-batch, intense production was maintained until later times (Figure 4 and Table 1), reaching 26.1 mg·L−1 at 168 h. Extending exponential-phase conditions and avoiding late-growth-phase conditions prevented the occurrence of non-magnetic mutants. The total magnetite produced here was 104.2 mg, considering the process time and bioreactor working volume.

We only employed one dilution rate (D = 0.035 h−1), which was equivalent to 70% of the growth rate in the exponential phase (24-72 h), and one residence time. However, an expanded study testing the effects of different Ds and longer cultivations with more residence times on magnetosome production must be performed.
Discussion
Species of Magnetospirillum genus require higher concentrations of oxygen for growth, and lower ones for the formation of magnetosomes [16]. For that purpose, fine process control is required to maintain oxygen concentrations in a narrow range, or culturing strategies in which the two antagonistic conditions are satisfied. In the case of the Mv. blakemorei strain MV-1 T , the greatest production of magnetosomes occurs with a final electron acceptor other than oxygen, in this case, nitrous oxide [22]. The simple supplementation of this gas would already be enough to increase the production of magnetosomes without the need for complex process controls such as those described for species Magnetospirillum [16,18,19]. This relative simplicity increases the potential of the use of Mv. blakemorei strain MV-1 T and its magnetosomes in biotechnological applications.
Silva and colleagues [21] developed a fed-batch cultivation with solely the supplementation of iron in 24-h pulses. In that research, two supplementations were made until 120 h, with the first taking place at 72 h, when the iron level was at 30% of the initial concentration [21]. The reduction of overall magnetosome production is, in part, explained by the deceleration of the growth rate (Figure 1a).
Regarding the role of the carbon source on magnetosome yields, different studies [21,22] show that succinate is known to sustain the heterotrophic cell growth of MV-1 T in microaerobic/anaerobic conditions. However, no previous research has described the kinetics of succinate consumption in MV-1 T . Thus, it was essential to gain knowledge on succinate uptake in relation to cell growth and magnetosome formation. We also investigated succinate consumption to evaluate any eventual requirement of carbon supplementation during fed-batch. Furthermore, in chemostat cultivation, succinate monitoring ensures that carbon source concentrations are kept constant in order to support cell density. Although the carbon source was not fully depleted in our fed-batch experiments, it is known that reduced availability of carbon source causes growth de-acceleration [27]. The accumulation of toxic products and inhibitors in later phases also influences cell growth negatively [28]. In fact, genes encoding secondary metabolites with antimicrobial activity have been identified in strain MV-1 T [29]. Another possibility may have been the limitation of other non-measured components present in the media, such as mineral solution or specific amino acids.
There is little information on the energy consumption involved in magnetosomes' synthesis or the physiology of MTB in general. Mv. blakemorei MV-1 T is capable of microaerophilic and anaerobic growth on oxygen and nitrogen oxides, respectively [22]. Despite metabolic versatility, anaerobic reduction of N 2 O by strain MV-1 T yields the greatest number of magnetosomes per cell among all electron acceptors [22]. In our results, we can see that the consumption profiles of Fe 2+ and N 2 O were strongly related, with both components showing the greatest consumption between 48-72 h (Figure 1b). This coincidence might indicate a synergism between energy and material demands as Fe 2+ was converted into Fe 3 O 4 while N 2 O was reduced for ATP production for biomineralization. In fact, it is known that the biomineralization process is strongly dependent on energy availability [30,31]. In Ms. magneticum strain AMB-1, the reduction of nitrogen oxyanions provides energy necessary for magnetosome vesicle formation [30]. Furthermore, energy metabolism and magnetosome synthesis are controlled in an integrated manner at genetic level [31].
Understanding gas mass transfer kinetics is crucial for designing larger scales of a given bioprocess [32]. For continuous culture of Mv. blakemorei, measurement of N2O concentration during continuous growth is of little significance, as autoclaved fresh medium is added. Because of this, we have developed a strategy for intermittent N2O injection during continuous growth. High-yield cultivation of Magnetospirillum species in bioreactors often demands strategies to keep oxygen concentrations under strict control because oxygen is required for cell growth, but anaerobiosis leads to better magnetite production [16][17][18]. Oxygen-control strategies require the online measurement of oxygen through sensitive probes and rapid monitoring of oxygen depletion due to consumption. In this sense, the strategy for sole N2O injection for Mv. blakemorei strain MV-1T cultivation is simpler, because the maximum growth rate and greatest number of magnetosomes per cell occur under the same conditions.
Another advantage is that the solubility of nitrous oxide in fresh and sea water is greater than that of oxygen, making N2O more favorable. However, this gas is not freely available in the atmosphere as oxygen is, making the gassing process more onerous. Among gases already used for bioreactor cultivation of MTB, N2O is the most expensive, with a cost per m3 twice that of argon (Table 2). In this sense, an improved injection strategy also helps diminish gassing costs. Mass transfer measurements are most commonly made for studying oxygen transfer from gas bubbles to medium in aerobic processes [33]. However, the optimization of anaerobic processes also relies on knowledge and improvement of mass transfer [34]. All strategies for the improvement of mass transfer in submerged cultures are based on the optimization of gas flow and impeller stir rate [35]. The chemostat culture reported here was developed based on the fed-batch results and represents an initial achievement, and will probably provide a valuable tool not only for magnetosome production, but also for the study of cell physiology and metabolism. Prolonged chemostat experiments will provide enough time for mutation occurrence and accumulation of mutations, and will probably generate information on experimental evolution dynamics under different cultivation conditions [40]. Particularly for MTB, the MGC is highly unstable, and this instability may generate distinct subpopulations even within a single-strain culture [25]. Although we have not performed molecular studies to verify that the loss of magnetosome production was due to MGC deletions during fed-batch experiments, some hypotheses can be drafted from our results. First, after sequential generations of cultivated cells in a constantly oxygen-free environment, there may be a reduction in cell reliance on magneto-aerotaxis. The described phenomenon, combined with fluctuations in the availability of essential nutrients and final electron acceptors, seems to induce physiological states in which non-magnetic individuals are favored.
Using chemostat experimental platforms may be interesting for simulating selective pressures on biomineralization and magnetotaxis. Physiological and metabolic adaptation to culture conditions usually takes place in periods longer than those of exponential phases in batch cultures [40]. This gap in knowledge on the adaptation of MTB to rapid changes in environmental conditions, and on the effect of these changes on cell growth and magnetosome formation [41], could be filled by studying MTB grown in chemostats. Several studies, however, have examined the influence of chemical parameters (e.g., pH, oxygen and iron concentrations, etc.) on biomineralization and whole-cell physiology [41,42]. Those studies examined one set of predetermined conditions in cells cultivated in batch cultures. Alternatively, chemostats would provide a more in-depth mechanistic view of metabolic switches in MTB in response to controlled changes in the environmental parameters during a single cultivation experiment.
The presence of the membrane is a major advantage of magnetosomes in economic terms when compared with artificially-coated synthetic nanoparticles [15]. On the other hand, long cultivation times and milligram-scale production are limitations of the biotechnological production of magnetic nanoparticles. Batch cultures of MTB take around 50-120 h to reach cost-beneficial magnetite concentrations, compared with a few hours in chemical syntheses. In this sense, continuous culture enables microbial magnetite to be produced at high concentrations for extended periods and prevents idle time (e.g., for washing, sterilizing, lag phase) between batches.
Bacterial Cells
Mv. blakemorei strain MV-1T cells were anaerobically cultivated in an optimized medium [21] in 50 mL flasks for 48 h at 28 °C before being used in fermentation experiments.
Bioreactor Cultivation
Volumes corresponding to a final cell concentration of 10^8 cells/mL were inoculated in a 5-L (2-L working volume) bench bioreactor (Minifors, Infors HT-Basel, Switzerland) containing fresh growth medium. Cultivation parameters were set as follows: pH 7.0 (pH is automatically and strictly adjusted during cell growth, either in batch or fed-batch modes, by injection of sterile 1.0 N NaOH or HCl), 100 RPM stir rate, 28 °C and non-detectable oxygen. An anaerobic condition was achieved by purging sterile nitrogen into fresh medium until the oxygen sensor reading reached zero. After that, the medium was purged with nitrous oxide (N2O) for 15 min. First, cultivations were carried out in fed-batch mode, generating data for continuous cultivation.
In the fed-batch (Figure A1), supplements of iron (10 mM FeSO4) and N2O (0.25 vvm) were given every 24 h, starting at the end of the exponential phase so that initial concentrations of both were re-established. One mL of the medium sample was collected every 24 h for analysis of optical density, N2O, iron, free nitrogen, and carbon, and for observation by transmission electron microscopy.
Continuous cultivation started in batch mode until the culture reached the exponential phase. Then the feeding of fresh medium began, simultaneously with the withdrawal of grown medium (Figure A2). The influx and efflux of medium had the same flow rate, which was determined by the exponential growth rate. In this experiment, the dilution rate was calculated as D = F/V, where D is the dilution rate, F is the volumetric flow rate (mL/h) and V is the total medium volume (mL); D was set equal to 70% of the growth rate. FeSO4 (10 mM) was added to the feeding medium, whereas N2O was purged at 8-h intervals for 15 min.
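As a concrete illustration of how the feed rate follows from the chosen dilution rate (our sketch, not the study's code; the working volume and µmax are taken from the Methods and Results above), setting D to 70% of µmax ≈ 0.05 h−1 in a 2-L working volume gives a pump rate of about 70 mL·h−1:

```python
# Feed flow rate needed to hold a chosen dilution rate in a chemostat: D = F / V.
mu_max = 0.05            # h^-1, maximum specific growth rate from the fed-batch run
working_volume_ml = 2000.0

D = 0.7 * mu_max                     # dilution rate, h^-1 (0.035 h^-1)
feed_rate_ml_per_h = D * working_volume_ml

print(f"D = {D:.3f} 1/h, feed (and harvest) rate = {feed_rate_ml_per_h:.0f} mL/h")
```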
Growth Analysis
The cell density was measured by optical density at 600 nm in a spectrophotometer (Biospectro SP-22, Curitiba, Brazil). Cell concentration was obtained from the optical density (1.09 × 10^10 cells·mL−1 corresponds to an OD value of 1.0). The specific growth rate (µ) was calculated as µ = ln(X2/X1)/(t2 − t1), where X1 and X2 are the cell densities at instants 1 and 2 and t2 − t1 is the interval between these two instants. µ is expressed in h−1.
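A minimal sketch of this calculation (ours, not the study's code), using the OD-to-cell-density factor given above; note that the conversion factor cancels in the ratio, so µ can be computed directly from the OD readings:

```python
import math

OD_TO_CELLS = 1.09e10  # cells per mL at OD600 = 1.0 (factor given in the text)

def specific_growth_rate(od1, od2, t1_h, t2_h):
    """mu = ln(X2/X1) / (t2 - t1), with X converted from OD600 readings."""
    x1 = od1 * OD_TO_CELLS
    x2 = od2 * OD_TO_CELLS
    return math.log(x2 / x1) / (t2_h - t1_h)

# Hypothetical OD600 readings taken 24 h apart:
mu = specific_growth_rate(0.10, 0.33, 24.0, 48.0)
print(f"mu = {mu:.3f} 1/h")   # ~0.05 1/h for this example
```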
Transmission Electron Microscopy
Cells and magnetosome chains were observed by transmission electron microscopy (FEI Morgagni, Hillsboro, OR, USA). The mean number of magnetosomes per cell was determined by the average number of magnetosomes in 30 cells for each sampling point. The concentration of magnetite at each time was determined by the measurement of the magnetosome diameter, which was then used to calculate its volume using the iTEM software suite (Olympus Corporation, Tokyo, Japan).
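The text does not spell out the conversion from particle dimensions to a magnetite concentration, so the following is only a plausible back-of-the-envelope sketch: it assumes roughly equidimensional (spherical) particles, a magnetite density of about 5.18 g·cm−3, and uses hypothetical values for the mean diameter, magnetosomes per cell, and cell density. None of these inputs are the study's measured data.

```python
import math

# Rough estimate of magnetite concentration from TEM-derived particle size.
MAGNETITE_DENSITY_G_PER_CM3 = 5.18   # literature value for Fe3O4 (assumption here)

def magnetite_mg_per_l(mean_diameter_nm, magnetosomes_per_cell, cells_per_ml):
    radius_cm = (mean_diameter_nm / 2.0) * 1e-7                    # nm -> cm
    particle_volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3   # sphere approximation
    particle_mass_g = particle_volume_cm3 * MAGNETITE_DENSITY_G_PER_CM3
    particles_per_l = magnetosomes_per_cell * cells_per_ml * 1000.0
    return particle_mass_g * particles_per_l * 1000.0              # g -> mg

print(f"{magnetite_mg_per_l(60.0, 11, 1.0e9):.1f} mg/L (illustrative inputs)")
```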
Nutrient Determination
Succinate was measured using high-performance liquid chromatography (HPLC-Agilent 1260, Santa Clara, CA, USA) with a chromatographic column (Aminex HPX-87H, Bio-Rad, Hercules, CA, USA) of 300 mm × 7.8 mm coupled to a refractive index detector (column temperature of 65 °C). The operating conditions were: sample volume of 20 µL, mobile phase of 0.005 M H2SO4, flow rate of 0.6 mL·min−1, and column temperature of 65 °C.
The concentrations of iron and free nitrogen were determined by colorimetric methods. For the iron analysis, a kit was used (Kit Analisa 438-Belo Horizonte, Brazil), following the manufacturer's instructions. Quantification by the kit was based on the reaction of the iron with the ferrozine reagent. For the nitrogen analysis, the colorimetric method of ninhydrin [43] was used with the final absorbance measured in a spectrophotometer (Biospectro SP-22, Curitiba, Brazil) at 575 nm.
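Quantification in both colorimetric assays ultimately rests on a standard curve. As a generic illustration (not the kit's or the study's actual calibration data), a linear least-squares fit of absorbance against known standards can be used to read off unknown concentrations, e.g. from the ninhydrin reading at 575 nm:

```python
# Generic standard-curve calibration for a colorimetric assay (illustrative data only).
import numpy as np

standards_uM = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # known standard concentrations
absorbance   = np.array([0.02, 0.11, 0.20, 0.39, 0.77])    # hypothetical A575 readings

slope, intercept = np.polyfit(standards_uM, absorbance, 1)

def concentration(a575):
    return (a575 - intercept) / slope

print(f"sample at A575 = 0.30 -> {concentration(0.30):.0f} uM")
```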
The amount of dissolved N 2 O was analyzed by potentiometry using a specific electrode (Unisense, Aarhus, Denmark) coupled to a signal amplifier (PA2000-Unisense, Aarhus, Denmark) to read the current intensity.
The determination of mass transfer of N2O and O2 was carried out by measurements of the concentration of these gases [33]. In the first step, a volume of artificial seawater (ASW) saturated with oxygen (previously purged with compressed air) was diluted with gas-free ASW (previously boiled and vacuum-cooled) to provide 2000 mL of a solution at approximately 10% oxygen saturation. This solution was transferred to the bioreactor vessel and purged with N2O at a flow rate of 0.5 L·min−1 under stirring rates of 100, 200, and 300 RPM. The initial concentration of O2 and the elapsed time to reach zero reading by the sensor were recorded. In addition, the N2O saturation concentration (when no increase was detected by the sensor reading) and the time required to reach it were also measured. These data were used to calculate the mass transfer coefficient of N2O and O2 as kLa = ln(C* − CL)/t (2), where (C* − CL) refers to the variation of the concentration of each gas in the experiment and t is the time in minutes.
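In practice, kLa is usually extracted from the whole gassing-in time course rather than a single point. The sketch below (our illustration, with hypothetical sensor readings) shows the standard log-linear fit for such data, which reduces to the single-interval relation in Equation (2):

```python
import numpy as np

# Dynamic gassing-in: C(t) = C_sat - (C_sat - C0) * exp(-kLa * t),
# so ln(C_sat - C) is linear in t with slope -kLa.
def fit_kla(t_min, c, c_sat):
    slope, _ = np.polyfit(t_min, np.log(c_sat - np.asarray(c)), 1)
    return -slope

# Hypothetical dissolved-N2O readings (fraction of saturation) during sparging at 200 RPM:
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
c = np.array([0.00, 0.38, 0.62, 0.76, 0.85])
print(f"kLa ~ {fit_kla(t, c, c_sat=1.0):.2f} 1/min")  # ~0.47 1/min for these data
```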
Gassing Costs
Gas prices have been consulted on BOC-Linde Plc (www.boconline.ie/shop, accessed on 1 May 2020). Prices were listed as of 1 May 2020.
Conclusions
The results presented here showed that in fed-batch culture, the supply of N2O and Fe2+ led to the highest production of magnetite by strain MV-1T in the bioreactor (32.5 mg·L−1). However, a decrease in global magnetosome production per cell and an increased number of non-magnetic cells negatively affected magnetite productivity in later cultivation phases. Due to the high demand of N2O for magnetosome production, and the necessity for a continuous flow of growth media, a regime for nitrous oxide injection was developed. Our pulse strategy led to an adequate supply of final electron acceptors for continuous magnetosome production. Thus, a continuous culture was designed to maintain a high activity of magnetosome formation in bacterial cells for extended periods. Although the maximum production reached was lower than that in fed-batch, productions as high as 26.1 mg·L−1 were maintained until 168 h.
The next steps of our research will aim at improving magnetite yields and, consequently, reducing the production costs of prismatic magnetite magnetosomes. Two possible strategies could be supplying other nutrients not examined here (e.g., mineral solution) and varying dilution rates in continuous culture. A deeper understanding of bacterial physiology through genome examination, focused on metabolism, will certainly benefit from chemostat culture and might help in developing novel media compositions and cultivation strategies.
Patents
The results obtained in this study have been registered under the patent number BR10202001583 held in Brazil. | 2022-11-20T16:12:33.676Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "36c68648ba0db787e89f08319c881227bcb14f8a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/20/11/724/pdf?version=1668766241",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c89902878f37b586b316a721ac4bd1a1aee32a4a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222316988 | pes2o/s2orc | v3-fos-license | Multiple polypoid colonic metastases from rectal adenocarcinoma with signet ring cells features: a case report
Background Multiple polypoid colonic metastases are very rare and mainly originate from gastric carcinoma or melanoma. For rectal cancers, the liver, lung and peritoneum are the most common metastatic sites. Here we present an unusual case of rectal adenocarcinoma with metachronous multiple colonic polypoid metastases. Case presentation A 53-year-old man who had undergone radical resection for rectal cancer 2 years earlier was admitted to our department for an elevated CEA level of 18.4 ng/mL. Colonoscopy revealed ten ivory, rubbery colonic polypoid lesions (about 5 mm in diameter) in the large bowel, which were confirmed as signet ring cell carcinomas (SRCC) by biopsy, but full-body contrast-enhanced CT and PET-CT showed no other suspicious lesion. Seven weeks later, a laparoscopic total colectomy was performed and more than 50 polypoid lesions were observed throughout the mucosal surface of the large intestine, which were confirmed as metastatic SRCC by postoperative pathological examination. All 34 paracolic lymph nodes retrieved were involved. After 4 months, diffuse abdominopelvic and multiple bone metastases were identified by CT and the patient died of the disease 1 month later. Conclusion Here we present an unusual case of multiple colonic polypoid metastases of rectal adenocarcinoma. For SRCC, which is prone to disseminated micrometastases, colonic 'polyps' may be an early noticeable sign of undetectable and extensive tumor spread. Instead of surgical resection of 'the confined disease in the colon', systemic treatment may be a more appropriate choice.
Background
Multiple polypoid colonic metastases from other distant gastrointestinal carcinomas are extremely rare; only a few cases have been reported in the literature [1][2][3][4][5][6][7]. Among these cases, colonic metastases mainly originated from gastric carcinoma or melanoma, with signet ring cell features being the most common histological type.
Signet ring cell carcinoma (SRCC) is a rare subtype of colorectal cancer, accounting for about 1% of all rectal cancers [8,9]. Patients with SRCCs often present with distinct clinical features and poor prognosis [10,11].
The most common sites of metastases from rectal cancer are the liver, lung, and peritoneum. Here, we report an unusual rectal cancer case with metachronous multiple colonic polypoid metastases.
Case presentation
A 53-year-old man with progressive abdominal pain and distention was admitted to our tertiary care center on October 15, 2013. Colonoscopy revealed a rigid circumferential neoplasm in the rectum about 8-9 cm from the anal verge, with an increased CEA level of 10.4 ng/mL. In addition, two sessile polyps (2-3 mm) in the descending colon were also found and removed by forceps during colonoscopy; these were pathologically confirmed as adenomas. MRI showed a circumferential thickening of the bowel wall (maximum thickness, 14 mm), suggesting the diagnosis of a malignant tumor (cT3N0Mx). After three fruitless biopsies, which invariably revealed "inflammation", a transrectal incisional biopsy under general anesthesia was performed and poorly differentiated adenocarcinoma was diagnosed. The patient received neoadjuvant chemoradiotherapy (long-course radiation of 50 Gy/25f with concomitant capecitabine), followed by radical abdominoperineal resection. The postoperative pathology showed poorly differentiated adenocarcinoma with signet ring cell features (staged ypT3N1b). Thereafter, the patient received six cycles of adjuvant single-agent chemotherapy (capecitabine), owing to his refusal of the recommended intensified regimens.
During regular follow-up, the patient was diagnosed with thrombocythemia and began to take hydroxyurea 1 g twice a day. Two years after surgery, the CEA level increased significantly from 2.9 to 18.4 ng/mL. Colonoscopy revealed about ten polypoid lesions (about 5 mm in diameter), and the biopsies proved to be SRCC (Fig. 1). Imaging examinations including full-body contrast-enhanced CT, PET-CT and gastroscopy were then performed, with no other primary or metastatic foci being identified. The patient refused further systemic chemotherapy, but insisted on surgery. Two months later, a laparoscopic total colectomy with permanent ileostomy was performed. Examination of the gross specimen revealed more than 50 polypoid lesions scattered throughout the colonic wall, ranging from 2 to 10 mm in diameter (Fig. 2a). Pathology showed multifocal signet ring cell carcinomas, with some lesions involving the full thickness of the colonic wall (Fig. 2b-c), as well as a large number of metastatic nodules throughout the mesocolon. All 34 paracolic lymph nodes retrieved were involved. Immunohistochemically, the metastatic tumors were CEA (focal+), CK7 (−), CK20 (+), MUC2 (+), MUC5AC (−), MUC6 (−), E-Cadherin (+), MLH-1 (+), MSH-2 (+), MSH-6 (+), PMS-2 (+), Ki-67 index: 70%. After 4 months, diffuse abdominopelvic and multiple bone metastases were detected on CT, and the patient died of the disease 1 month later.
In an additional investigation by targeted next-generation sequencing using a 1021-gene panel, somatic mutations of ALK, EPHB2, ERBB4, GRIN2A, PTPRD, and TP53 were detected in tumor tissue.
Informed consent documents were provided by the patient's surrogates.
Discussion and conclusion
Polypoid colonic metastases are extremely uncommon, with fewer than 10 cases previously reported [1]. In this case, the pathologic examination demonstrated the same histological type between the colonic polypoid lesions and the original rectal cancer. Also, other malignancies including gastric cancer were excluded by gastroscopy and PET-CT. Besides, the immunohistochemical examination of the colonic metastases showed CK7−/CK20+, which indicated a gastrointestinal origin [12,13]. Therefore, we conclude that this is an extremely rare case of multiple polypoid colonic metastases from rectal carcinoma with signet ring cell features.
Patients with SRCC often have distinct clinical characteristics and poor outcomes [11]. Evidence indicates that, in comparison with well to moderately differentiated colorectal adenocarcinoma, SRCC is prone to widespread peritoneal metastases rather than the more common liver or lung metastases [14,15]. SRCC is believed to be associated with distinctive molecular features. Studies have shown that tumor cells of SRCC have impaired expression of adhesion molecules including E-cadherin and β-catenin [16], which may contribute to their loose appearance and aggressive behavior. In this case, we found disseminated macroscopic metastases confined to the mesocolon and bowel wall during surgery. Whether these lesions spread via an uncommon route, such as hematogenous or lymphatic spread through the submucosal or mesenteric vasculature, remains elusive.
Fig. 1 Colonoscopy images showing multiple "polyps" throughout the colon

As the colonic polypoid metastases implied the wide spread of the disease, most of the previous cases were treated with a non-operative approach using systemic chemotherapy [1,5,6] or best supportive care [7], and the outcomes were diverse and poor.
In 2015, Hugen et al. [17] found that adjuvant chemotherapy could improve the survival of stage III colorectal signet ring cell carcinoma. A recent study also revealed that, compared with surgery alone, chemotherapy alone or combined with surgery may further improve the prognosis of SRCC patients with peritoneal metastases [18]. In this case, unfortunately, the unusual metastatic pattern and the false-negative imaging findings misled the surgeons into performing an unbeneficial colectomy. We suppose that the micrometastatic nature of SRCC makes it difficult to identify the wide spread of the disease by imaging examinations, even in patients with a heavy tumor burden.
In summary, the presence of multiple polypoid colonic metastases may represent a very rare condition of extensive systemic spread of SRCC. Even without evidence of involvement of other organs, we think there is no role for colectomy in this condition. Systemic treatment with intensified chemotherapy and new targeted therapies may bring a glimmer of hope for patients with such refractory cancer.
| 2020-10-14T14:08:00.374Z | 2020-10-14T00:00:00.000 | {
"year": 2020,
"sha1": "d8b87a7a5e03e6958a46124e38f17228da160d4a",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/s12876-020-01493-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8b87a7a5e03e6958a46124e38f17228da160d4a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119317861 | pes2o/s2orc | v3-fos-license | The Role of the Jacobi Identity in Solving the Maurer-Cartan Structure Equation
We describe a method for solving the Maurer-Cartan structure equation associated with a Lie algebra that isolates the role of the Jacobi identity as an obstruction to integration. We show that the method naturally adapts to two other interesting situations: local symplectic realizations of Poisson structures, in which case our method sheds light on the role of the Poisson condition as an obstruction to realization; and the Maurer-Cartan structure equation associated with a Lie algebroid, in which case we obtain an explicit formula for a solution to the equation which generalizes the well known formula in the case of Lie algebras.
Introduction
Realization Problem for Lie Algebras. Any Lie group G carries a canonical 1-form with values in the tangent space to the identity g, φ ∈ Ω 1 (G; g), known as the Maurer-Cartan form of G. Actually, the Lie group structure is encoded, in some sense, in the 1-form and its properties; this is in fact Cartan's approach to Lie's infinitesimal theory. The two main properties of the Maurer-Cartan form are: it satisfies the so-called Maurer-Cartan structure equation 1 and it is pointwise an isomorphism (the latter is often phrased as the property that the components of the 1-form with respect to some basis form a coframe). The Maurer-Cartan structure equation reveals a Lie algebra structure on g. Of course, the resulting Lie algebra is the same one obtained in the more common approach of using invariant vector fields.
Conversely, if we begin with an n-dimensional Lie algebra g, we can formulate the following problem, known as the realization problem for Lie algebras: find a g-valued 1-form φ ∈ Ω1(U; g) defined on some open neighborhood U ⊂ g of the origin such that φ is pointwise an isomorphism and satisfies the Maurer-Cartan structure equation (1) dφ + (1/2)[φ, φ] = 0. A solution to the problem induces a local Lie group structure on some open subset of U (see [9], p. 368-369) and, therefore, we can think of this realization problem as the problem of locally integrating Lie algebras. A solution to this problem is obtained by supposing that the Lie algebra integrates to a Lie group, and pulling back the canonical Maurer-Cartan form on the Lie group by the exponential map. This produces the following g-valued 1-form φ ∈ Ω1(g; g), whose defining formula refers only to data coming from the Lie algebra and not from the Lie group: (2) φ_x(y) = ∫_0^1 e^{-t ad_x} y dt, x ∈ g, y ∈ T_x g. This formula defines a solution to (1), as can be verified directly, and since it is equal to the identity at the origin, it is pointwise an isomorphism in a neighborhood of the origin. See [7,11] for more details. We now make the following observation: neither Equation (1) nor Formula (2) rely on the Jacobi identity; they make perfect sense if we replace the Lie algebra with the weaker notion of a pre-Lie algebra, namely a vector space g equipped with an antisymmetric bilinear map [·, ·] : g × g → g. However, (2) is a solution of (1) if and only if g is a Lie algebra, which is not difficult to show. This leads to the natural question: what is the precise role of the Jacobi identity? Put differently, at what point in the integration process does the Jacobi identity appear?
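The step from the group-level Maurer-Cartan form to Formula (2) can be made explicit using the classical formula for the differential of the exponential map. The short derivation below is a standard sketch, written under the assumption (for the purpose of the computation only) that g integrates to a Lie group G with left-invariant Maurer-Cartan form φ_G; it is not a quotation from this paper.

```latex
% Pulling back the Maurer--Cartan form of G by exp: g -> G.
% Classical formula for the differential of the exponential map:
\[
  (d\exp)_x \;=\; (dL_{\exp x})_e \circ \int_0^1 e^{-t\,\mathrm{ad}_x}\,dt ,
\]
% so, for y in T_x g identified with g,
\[
  (\exp^*\varphi_G)_x(y)
  \;=\; (dL_{\exp(-x)})_{\exp x}\,(d\exp)_x(y)
  \;=\; \int_0^1 e^{-t\,\mathrm{ad}_x}\, y \, dt ,
\]
% which recovers Formula (2).
```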
In Section 1 we present a 2-step method for solving the realization problem for Lie algebras which answers this question. The method can be outlined as follows:
• Step 1 (Theorem 1.2): we formulate a weaker version of the realization problem, which admits a unique solution given any pre-Lie algebra.
• Step 2 (Theorem 1.4): we show that the solution of the weak realization problem is a solution of the complete realization problem if and only if the Jacobi identity is satisfied.
Two nice features of the method are:
• Step 1 produces an explicit formula for a solution,
• Step 2 gives an explicit relation between the Maurer-Cartan structure equation and the Jacobi identity. Loosely speaking, one is the derivative of the other.
Similar Phenomenon: Poisson Realizations. There is a striking similarity between the phenomenon we just observed and a phenomenon that occurs in the story of symplectic realizations of Poisson manifolds. Recall that a Poisson manifold (M, π) is a manifold M equipped with a bivector π which satisfies the Poisson equation [π, π] = 0 (of course, the Poisson equation is equivalent to the condition that the induced Poisson bracket satisfy the Jacobi identity). A symplectic realization of a Poisson manifold (M, π) is a symplectic manifold (S, ω) together with a surjective submersion p : S → M that satisfies the equation (3) p_*(ω^{-1}) = π. It was shown in [6] that for any Poisson manifold (M, π), a symplectic realization is explicitly given by the cotangent bundle T*M equipped with the symplectic form (4) ω = ∫_0^1 (ϕ_t)^* ω_can dt, together with the projection p : T*M → M. Here, ω_can is the canonical symplectic form and ϕ_t is the flow associated with a choice of a contravariant spray on T*M. See [6] for more details. As in the realization problem of Lie algebras, we make the following observation: neither Equation (3) nor Formula (4) depend on the Poisson equation; they make perfect sense when replacing π with any bivector. And as before, there is the natural question as to the precise role of the Poisson equation in the existence of symplectic realizations, a question which was raised in [6] (see last paragraph of the paper).
An explicit relation between the symplectic realization equation and the Maurer-Cartan structure equation was observed by Alan Weinstein [12] in his pioneering work on Poisson manifolds. Weinstein showed that, locally, (3) is equivalent to a Maurer-Cartan structure equation associated with an infinite-dimensional Lie algebra, and exploited this to prove the existence of local symplectic realizations, using a heuristic argument to solve this Maurer-Cartan structure equation and producing an explicit local solution of the type (4).
In Section 2, we apply our method to solve the Maurer-Cartan structure equation which Weinstein formulated. As with Lie algebras, we do this by identifying a weaker version of the equation that admits a unique solution given any bivector, not necessarily Poisson, and we proceed to show that the solution is a local symplectic realization if and only if the bivector satisfies the Poisson equation. We obtain an explicit relation between the Poisson equation and the symplectic realization condition, thus pinpointing the role of the Poisson equation in the problem of existence of local symplectic realizations.
The Lie Algebroid Case. In addition to local symplectic realizations, we believe that our method can be adapted to various other situations which generalize or resemble the classical Lie algebra case. One important generalization, which we treat in Section 3, is the realization problem of a Lie algebroid. Although extra difficulties do arise, it is remarkable that the procedure continues to work in this case, despite the fact that the simple to handle bilinear bracket of a Lie algebra is replaced by a more cumbersome bi-differential operator. This is largely facilitated by the presence of certain flows, known as infinitesimal flows, associated with time-dependent sections of the Lie algebroid.
As we noted in "Step 1" above, our method produces an explicit solution. In the Lie algebra case, this is the well known Formula (2), whereas the formula we obtain in the Lie algebroid case does not appear in the literature to the best of our knowledge (see Theorem 3.3). Having this explicit formula at hand can prove to be useful; in particular, one can attempt to use it to explicitly integrate Lie algebroids locally (as an indication of feasibility, in [3] a symplectic realization of a Poisson manifold was used to integrate the associated Lie algebroid to a local symplectic groupoid, see also discussion in subsection 3.1).
Final Remark. We would like to end the introduction with a historical remark and to briefly describe our motivation for reopening this classical problem. The Maurer-Cartan structure equation originates in the work of Élie Cartan [1,2] under the name of "Structure Equations". In his work on Lie pseudogroups, Cartan associates the equation with a Lie pseudogroup, and subsequently extracts out of the equation the Lie pseudogroup's "structure functions", i.e. its infinitesimal data. The reverse direction, the problem of finding and classifying the solutions to the structure equations associated with a given infinitesimal data, is known as the realization problem, two special cases of which we discussed above (the Lie algebra case and the Lie algebroid case).
This work arose as part of a larger project aimed at understanding Cartan's original work on Lie pseudogroups in a global, more geometric and coordinate-free fashion, and in particular, the realization problem. Since Cartan's realization problem involves infinitesimal structures that fail to satisfy the Jacobi identity, we first tried to understand the role of the Jacobi identity in the integration process of structures for which the Jacobi identity is satisfied, namely Lie algebras and Lie algebroids. The method and the results that we came across and that we are presenting here seemed to have relevance beyond the realization problem itself, and we, therefore, decided to present it in an independent fashion.
The Maurer-Cartan structure equation of a Lie algebra
In this section, we present the 2-step method for solving the realization problem for a Lie algebra which was outlined in the introduction. Let us first recall the necessary definitions. Definition 1.1. A pre-Lie algebra is a vector space g equipped with an antisymmetric bilinear map [·, ·] : g × g → g. A Lie algebra is a pre-Lie algebra that satisfies the Jacobi identity: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 for all x, y, z ∈ g. Associated with a pre-Lie algebra is the adjoint map ad : g → End(g), ad_x(y) = [x, y], and the Jacobiator Jac : g × g × g → g, Jac(x, y, z) = [x, [y, z]] + [y, [z, x]] + [z, [x, y]]. The space of g-valued differential forms on g is denoted by Ω*(g; g). This space is equipped with the de Rham differential d : Ω*(g; g) → Ω*+1(g; g) and with a bracket, [·, ·] : Ω^p(g; g) × Ω^q(g; g) → Ω^(p+q)(g; g), that plays the role of the wedge product on g-valued forms and is defined by the analogous formula: (6) [ω, η](X_1, ..., X_(p+q)) = Σ_(σ∈S_(p,q)) sgn(σ) [ω(X_σ(1), ..., X_σ(p)), η(X_σ(p+1), ..., X_σ(p+q))], where S_(p,q) is the set of (p, q)-shuffles.
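As a concrete illustration of Definition 1.1 (ours, not from the paper), the Jacobiator of a finite-dimensional pre-Lie algebra can be evaluated directly from the bracket; the so(3) bracket below satisfies the Jacobi identity, while an antisymmetric bracket with random structure constants generically does not.

import numpy as np

def jacobiator(bracket, x, y, z):
    # Jac(x, y, z) = [x, [y, z]] + [y, [z, x]] + [z, [x, y]]
    return (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
            + bracket(z, bracket(x, y)))

so3 = lambda x, y: np.cross(x, y)           # Lie bracket of so(3)

rng = np.random.default_rng(0)
C = rng.normal(size=(3, 3, 3))
C = C - C.transpose(0, 2, 1)                # antisymmetric, but generically not Lie
bad = lambda x, y: np.einsum('kij,i,j->k', C, x, y)

x, y, z = np.eye(3)
print(jacobiator(so3, x, y, z))             # [0. 0. 0.]
print(jacobiator(bad, x, y, z))             # nonzero: the Jacobi identity fails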
Of course, given any open subset U ⊂ g, we also have the space of g-valued forms Ω*(U; g) on U equipped with a differential and a bracket, defined in the same manner. Given any φ ∈ Ω¹(U; g), the Maurer-Cartan 2-form associated with φ is defined by MC_φ := dφ + (1/2)[φ, φ], and the Maurer-Cartan structure equation is MC_φ = 0, i.e. dφ + (1/2)[φ, φ] = 0. Note that in the last equation, and throughout the paper, we identify the tangent spaces of a vector space with the vector space itself without further mention.
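As a side remark (a small derived computation of ours), unwinding formula (6) for a pair of 1-forms gives, on vectors u and v, [φ, φ](u, v) = [φ(u), φ(v)] − [φ(v), φ(u)] = 2[φ(u), φ(v)], so the Maurer-Cartan structure equation evaluated on constant vector fields u, v reads dφ(u, v) + [φ(u), φ(v)] = 0; this is the form used in the numerical sketch above.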
Recall the realization problem for Lie algebras: find a 1-form φ ∈ Ω 1 (U ; g) on some open neighborhood U ⊂ g of the origin such that φ is pointwise an isomorphism and satisfies the Maurer-Cartan structure equation.
We now present our method for solving this realization problem.
1.1. Step 1: We show that a weaker version of the realization problem admits a solution given any pre-Lie algebra. We accomplish this by imposing a boundary condition which transforms the equation into a simple ODE that can be easily solved.
Theorem 1.2. Given any pre-Lie algebra g, the equation (7) (MC_φ)_x(x, y) = 0 for all x, y ∈ g admits a solution in Ω¹(g; g) which is pointwise an isomorphism at the origin (and thus on some open neighborhood of the origin). Moreover, if we impose the boundary condition (8) φ_x(x) = x for all x ∈ g, then the solution is unique and is given by the following formula: (9) φ_x(y) = ∫₀¹ e^(−t ad_x) y dt. To get a "geometric feel" of the equations, note that (7) is the restriction of the Maurer-Cartan structure equation to all two-dimensional subspaces of g, and (8) is the condition that φ restricts to the identity on all one-dimensional subspaces.
Proof. First note that (8) implies that φ 0 = id, and in particular, φ is pointwise an isomorphism at the origin. Let φ ∈ Ω 1 (g; g) be a solution of (7) and (8). We will show that φ must be of the form given by (9), which implies uniqueness. Conversely, as we will explain at the end of the proof, reading the steps in the reverse direction will imply that (9) is a solution, thus proving existence.
Fix x, y ∈ g. The solution φ satisfies where, in the last equality, we have used that (8) Thus for a φ that satisfies (8), (11) is equivalent to which is equivalent to Integrating from 0 to t′: Setting t′ = 1 proves that φ coincides with (9). Next, we show that φ defined by (9) is a solution. Note that φ_x(x) = ∫₀¹ e^(−t ad_x) x dt = ∫₀¹ x dt = x, and thus (8) is satisfied. Equation (9) is equivalent to (13), which is a solution of (12), and since φ satisfies (8), it is a solution of (11). In particular, setting t = 1 implies that (MC_φ)_x(x, y) = 0, and thus (7) is satisfied.
1.2. Step 2: By obtaining explicit equations relating the Maurer-Cartan 2-form with the Jacobiator, we show that the solution obtained in the previous step is a solution of the Maurer-Cartan structure equation if and only if the Jacobiator vanishes. Theorem 1.4. Let g be a pre-Lie algebra and φ ∈ Ω¹(g; g) the solution of (7) and (8). Then MC_φ = 0 if and only if Jac = 0,
or, more precisely, Proof. Equations (14) and (15) imply that MC_φ = 0 if and only if Jac = 0. Let us derive these equations. Fix x, y, z ∈ g. We will compute the t-derivative of (MC_φ)_(tx)(ty, tz), with t ∈ (0, 1), in two different ways.
2) On the other hand, In the fourth equality, we have used (7) and (8). In particular, (8) or equivalently, Integrating from 0 to 1 produces (15), while multiplying both sides of the equation by 1/t², taking the limit as t → 0 and using the fact that (MC_φ)_0(y, z) = 0 (see (10)) produces (14).
Remark 1.5. The method we present here was inspired by the method used in [11] (see sections 1.3-1.5) to compute the differential of the exponential map of a Lie group and to derive the Baker-Campbell-Hausdorff formula of a Lie algebra.
The Maurer-Cartan structure equation and Local Symplectic Realizations of Poisson Structures
In this section, we apply the method from the previous section to the problem of existence of symplectic realizations of Poisson manifolds. The role of the Poisson equation becomes manifest, in the same way that the role of the Jacobi identity was made manifest in the Lie algebra case.
Equivalently, a pre-Poisson manifold is a manifold M equipped with an R-bilinear antisymmetric bracket {·, ·} : C∞(M) × C∞(M) → C∞(M) satisfying the Leibniz identity, the bracket being related to the bivector by {f, g} = π(df, dg), and vice versa. The Poisson equation is equivalent to the Jacobi identity, i.e. to the condition Jac = 0, where Jac is the Jacobiator associated with {·, ·} (defined as in the previous section).
By the Leibniz identity, a function f ∈ C ∞ (M ) induces a vector field X f ∈ X(M ), the Hamiltonian vector field associated with f , by the condition X f (g) = {f, g} for all g ∈ C ∞ (M ), or equivalently, X f (g) = π(df, dg) for all g ∈ C ∞ (M ).
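Both of these notions are easy to test symbolically; the sketch below (ours; the bivectors are illustrative choices, not examples taken from the paper) computes the Jacobiator and a Hamiltonian vector field for bivectors on R³.

import sympy as sp

x = sp.symbols('x0 x1 x2')

def bracket(f, g, pi):
    # {f, g} = sum_ij pi_ij (df/dx_i)(dg/dx_j)
    return sum(pi[i][j] * sp.diff(f, x[i]) * sp.diff(g, x[j])
               for i in range(3) for j in range(3))

def jacobiator(f, g, h, pi):
    return sp.simplify(bracket(f, bracket(g, h, pi), pi)
                       + bracket(g, bracket(h, f, pi), pi)
                       + bracket(h, bracket(f, g, pi), pi))

def hamiltonian_vf(f, pi):
    # components of X_f, defined by X_f(g) = {f, g}
    return [sp.simplify(bracket(f, xi, pi)) for xi in x]

# linear (Lie-Poisson) bivector of so(3)*: {x_i, x_j} = eps_ijk x_k, which is Poisson
pi_lin = [[0, x[2], -x[1]], [-x[2], 0, x[0]], [x[1], -x[0], 0]]
# an antisymmetric bivector with a quadratic coefficient, generically not Poisson
pi_bad = [[0, x[0]**2, x[1]], [-x[0]**2, 0, x[2]], [-x[1], -x[2], 0]]

print(jacobiator(x[0], x[1], x[2], pi_lin))   # 0
print(jacobiator(x[0], x[1], x[2], pi_bad))   # a nonzero expression
print(hamiltonian_vf(x[0], pi_lin))           # [0, x2, -x1]

Since the bracket is a derivation in each slot, the Jacobiator is a trivector field, so checking it on the coordinate functions already determines it.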
Poisson manifolds can be localized, i.e. if (M, π) is a Poisson manifold and U ⊂ M is an open subset, then (U, π| U ) is a Poisson manifold.
A symplectic realization of a Poisson manifold (M, π) is a symplectic manifold (S, ω) together with a surjective submersion p : S → M such that p is a Poisson map, i.e. the bivector ω −1 induced by the symplectic form ω is p-projectable to the bivector π, that is to say, dp(ω −1 ) = π. A local symplectic realization of (M, π) around a point m ∈ M is a symplectic realization of (U, π| U ), where U is some open neighborhood of m.
In the problem of existence of local symplectic realizations it is enough to consider Poisson manifolds of the type (O, π), where O ⊂ V is an open subset of a vector space V . The following proposition was proven by Alan Weinstein ([12], section 9). To be more precise, Weinstein proved it for the case that (O, π) is a Poisson manifold; however, the arguments do not rely on the Jacobi identity and the proposition also holds for the case that (O, π) is a pre-Poisson manifold.
Here ξ and ζ are interpreted as linear functionals on V , X ξ is the corresponding Hamiltonian vector field and ϕ X ξ its flow.
Then, the 2-form dφ is symplectic on some neighborhood U ⊂ O × V * of the zero-section and, writing p : O × V * → O for the projection, Weinstein's remarkable observation was that the symplectic realization condition can be locally rephrased as a Maurer-Cartan structure equation. This equation lives in the space Ω * (V * ; C ∞ (O)) consisting of differential forms with values in C ∞ (O), where a 1- , is smooth for all ζ ∈ V * , and similarly for higher degree forms. This space is equipped with the de Rham differential d defined as usual, and a bracket {, } defined as in (6) (with the Lie bracket replaced by the Poisson bracket); thus, one can make sense of the Maurer-Cartan 2-form associated with a 1-form φ ∈ Ω 1 (V * ; C ∞ (O)): Weinstein proceeded to show that if (O, π) is a Poisson manifold, then the 1-form given by (18) satisfies the Maurer-Cartan structure equation, thus proving the existence of local symplectic realizations. Of course, the fact that the Poisson bracket satisfies the Jacobi identity is used in the proof, but its precise role is somewhat obscure, appearing as a "mere step" in the calculation (see [12], p. 547).
The following two theorems shed further light on the role of the Jacobi identity as an obstruction in this problem. The first of the two theorems, an analog of "Step 1" of the previous section, demonstrates how close dφ induced by (18) is to being a symplectic realization, regardless of the Jacobi identity.
Moreover, it is the unique solution of (19) together with the boundary condition (20). Proof. The proof is essentially the same as the proof of Theorem 1.2. One need only make the following adjustments: • replace g with V* (and accordingly x, y with ξ, ζ), • replace the Lie bracket [·, ·] with the Poisson bracket {·, ·}, • replace e^(t ad_ξ) with (ϕ^t_(X_ξ))*; and while making the last of the three adjustments, one notes that derivatives of matrix-valued functions of t become derivatives of flows.
The next theorem, an analog of "Step 2" of the previous section, gives an explicit relation between Jac and MC_φ which translates into a precise relation between the failure of the Poisson equation and the failure of dφ to be a symplectic realization. Of course, it follows that if the Poisson equation is satisfied, then dφ is a symplectic realization. Theorem 2.5. Let φ be a solution to (19) and (20). Then MC_φ = 0 if and only if Jac = 0, or more precisely, Proof. The proof is essentially the same as the proof of Theorem 1.4 after making the necessary adjustments as in the proof of the previous theorem, and using the fact that, by the Leibniz identity, the vanishing of the Jacobiator on linear functions implies that it vanishes identically.
Remark 2.6. Theorems 1.2 and 1.4 are in fact special cases of theorems 2.4 and 2.5.
Recall that a linear Poisson structure on the vector space g * is a Poisson bracket on C ∞ (g * ) satisfying the property that it restricts to a Lie bracket on the linear functions g ⊂ C ∞ (g * ).
This defines a one-to-one correspondence between linear Poisson structures on g * and Lie algebra structures on g. In the case of linear Poisson structures, the Hamiltonian vector field on g * associated with an element x ∈ g = (g * ) * is simply the transpose (ad x ) * of the linear map ad x : g → g. The flow of (ad x ) * is the transpose of the linear map e t adx , and the pullback by the flow is precisely e t adx . This implies that the solution (18) takes values in g ⊂ C ∞ (g * ), and it follows that theorems 2.4 and 2.5 for linear Poisson structures coincide with theorems 1.2 and 1.4.
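The last step of this identification can be checked numerically; the following sketch (ours, using so(3) as the example) verifies that pulling back a linear function by the flow of the Hamiltonian vector field (ad_x)* reproduces e^(t ad_x).

import numpy as np
from scipy.linalg import expm

def ad(x):
    # ad_x for so(3), identified with (R^3, cross product)
    return np.column_stack([np.cross(x, e) for e in np.eye(3)])

rng = np.random.default_rng(1)
x, y, mu, t = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3), 0.7

flow_t = expm(t * ad(x).T)                   # flow of mu -> (ad_x)^T mu on g* = R^3
pullback_at_mu = y @ (flow_t @ mu)           # (y pulled back by the flow), evaluated at mu
expected = (expm(t * ad(x)) @ y) @ mu        # (e^(t ad_x) y), evaluated at mu
print(np.isclose(pullback_at_mu, expected))  # True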
The Maurer-Cartan structure equation of a Lie algebroid
In this section, we generalize our method from the Lie algebra case to the Lie algebroid case. We will begin by recalling the basic definitions and discussing the realization problem for Lie algebroids, after which we will state and prove Theorems 3.3 and 3.5, which generalize Theorems 1.2 and 1.4.
A pre-Lie algebroid A → M is a vector bundle A over M equipped with a vector bundle map (the 'anchor') ρ : A → T M and an antisymmetric bilinear map (the 'bracket') [·, ·] : Γ(A) × Γ(A) → Γ(A) satisfying the anchor condition ρ([α, β]) = [ρ(α), ρ(β)] and the Leibniz identity [α, fβ] = f[α, β] + (ρ(α)f)β for all α, β ∈ Γ(A) and f ∈ C∞(M). Associated with a pre-Lie algebroid is the Jacobiator tensor Jac ∈ Hom(Λ³A, A), defined at the level of sections by Jac(α, β, γ) = [α, [β, γ]] + [β, [γ, α]] + [γ, [α, β]] and easily checked to be C∞(M)-linear in all slots. The notions of A-connections, A-paths, geodesics and infinitesimal flows that appear in the context of Lie algebroids remain unchanged when we give up on the Jacobi identity and pass to pre-Lie algebroids. We will assume familiarity with these notions, and otherwise refer the reader to the appendix (and to [4] for more details).
Let A → M be a pre-Lie algebroid equipped with an A-connection ∇. To every point a ∈ A we associate the unique maximal geodesic g_a : I_a → A that satisfies g_a(0) = a. We denote its base curve by γ_a : I_a → M. Let A_0 ⊂ A be a neighborhood of the zero-section such that g_a is defined up to at least time 1 for all a ∈ A_0. On A_0 we have the exponential map exp : A_0 → A, a ↦ g_a(1), and the target map τ = π ∘ exp : A_0 → M. Let Ω*_π(A_0; τ*A) be the space of foliated differential forms (foliated with respect to the foliation by π-fibers) with values in τ*A. Throughout this section we will use the canonical identification between the vertical bundle of A_0 and the pullback bundle π*A. Given a vector bundle connection ∇ : X(M) × Γ(A) → Γ(A), we define the Maurer-Cartan 2-form associated with an anchored 1-form φ ∈ Ω¹_π(A_0; τ*A) to be MC_φ := d_(τ*∇)φ + (1/2)[φ, φ]_∇. The differential-like map d_(τ*∇) and the bracket on Ω*_π(A_0; τ*A) are defined in the usual way (see appendix). The anchored condition implies that MC_φ is independent of the choice of connection (Proposition A.2). The auxiliary connection ∇ should not be confused with the A-connection ∇, which is part of the data we fix.
Of course, given any open subset U ⊂ A_0, we have the space of forms Ω*_π(U; τ*A) equipped with a differential-like operator and a bracket in the same manner, and anchored 1-forms have associated Maurer-Cartan 2-forms. The realization problem for Lie algebroids can now be stated: find an anchored 1-form φ ∈ Ω¹_π(U; τ*A) on some open neighborhood U of the zero-section in A_0 such that φ is pointwise an isomorphism and satisfies the Maurer-Cartan structure equation (21) d_(τ*∇)φ + (1/2)[φ, φ]_∇ = 0.
Remark 3.2.
A solution of the Maurer-Cartan structure equation can also be interpreted as a Lie algebroid map: a 1-form φ ∈ Ω 1 π (A 0 ; τ * A) can be viewed as a vector bundle map from the Lie algebroid T π A 0 → A 0 (the vertical bundle, a Lie subalgebroid of T A 0 → A 0 ) to the Lie algebroid A → M covering τ , the anchored condition on φ is equivalent to the vector bundle map commuting with the anchors, and φ satisfies the Maurer-Cartan structure equation if and only if the vector bundle map is a Lie algebroid map (see [5] or [8] for more details). From this point of view, the Maurer-Cartan structure equation is a special case of the generalized Maurer-Cartan equation for vector bundle maps between Lie algebroids which commute with the anchors studied in [8] (section 3.2).
As in the case of Lie algebras (see introduction), one can find a solution to the realization problem by assuming that the Lie algebroid integrates to a Lie groupoid and pulling back the canonical Maurer-Cartan 1-form on the Lie groupoid by the exponential map. The resulting formula will not depend on the Lie groupoid, and one can verify directly that the formula is indeed a solution; therefore, one does not have to require that the Lie algebroid be integrable.
Let us explain this in more detail. Let G ⇒ M be a Lie groupoid with source/target map s/t. The canonical Maurer-Cartan 1-form φ MC ∈ Ω 1 s (G; t * A) is a foliated differential 1-form on G (foliated with respect to the foliation by s-fibers) with values in t * A. It is defined precisely as in the case of Lie groups: the difference being that the right multiplication map R g −1 is only defined on s −1 (s(g)). For this reason, the resulting form is foliated. The Maurer-Cartan form satisfies the anchored property ρ((φ MC ) g (X)) = (dt) g (X) and the Maurer-Cartan structure equation d t * ∇ φ MC + 1 2 [φ MC , φ MC ] ∇ = 0 (for more details, see [8], section 4). The exponential map Exp := Exp ∇ : A 0 → G on a Lie groupoid requires a choice of an A-connection ∇ on A, where A 0 is as above. Such a choice induces a normal connection on each s-fiber and the exponential map is then defined in the usual way. This choice of an A-connection also gives rise to an exponential on the Lie algebroid, as we saw above, and the two satisfy the following relations: If we pull back the Maurer-Cartan form by the exponential map, the resulting form will be an element of Ω 1 π (A 0 ; τ * A). It will be anchored as a result of (24). It is now not difficult to verify that the fact that φ MC satisfies the Maurer-Cartan structure equation on the Lie groupoid implies that Exp * φ MC satisfies the Maurer-Cartan structure equation on the Lie algebroid, i.e. satisfies (21).
In the following two theorems we will obtain a solution by taking a different path, namely by generalizing our method from section 1. The first theorem is the generalization of "Step 1": a weaker version of the realization problem which admits a unique solution for any pre-Lie algebroid. The theorem gives an explicit formula for a solution to the realization problem of Lie algebroids. In Corollary 3.4 we show that our solution coincides with Exp * φ MC .
admit a solution in Ω 1 π (A 0 ; τ * A) which is pointwise an isomorphism on a small enough neighborhood of the zero section of A 0 . Moreover, if we impose the boundary condition then the solution is unique and can be described as follows: let ξ : [0, 1] × (−δ, δ) × M → A be a smooth map such that ξ t ǫ = ξ(t, ǫ, ·) is a section of A and ξ t ǫ (γ a+ǫb (t)) = g a+ǫb (t) for all (t, ǫ) ∈ [0, 1] × (−δ, δ), and let ψ ξ0 be the infinitesimal flow associated with the time dependent section ξ 0 (see appendix). The solution is given by Proof. Equation (27) implies that a solution φ is equal to the identity on the zero section of A and thus pointwise an isomorphism on a small enough neighborhood of the zero section. Let φ ∈ Ω 1 π (A 0 ; τ * A) be a solution of (25), (26) and (27). In this proof we show that φ must be given by (28). The remaining arguments are precisely as in the proof of Theorem 1.2.
In the second equality we have used Lemma A.1 to commute the pullback with d τ * ∇ and in the last equality we have used (29) which is equivalent to (27). The two terms in the final expression are covariant derivatives of paths, which make sense because γ a is the base curve of the curve t → φ ta (tb) and γ ǫ is the base curve of ǫ → g a+ǫb (t ′ ).
To compute ( 1 2 [φ, φ] ∇ ) t ′ a (a, t ′ b), let ξ be the map as in the statement of the theorem and let η be a time dependent section of A satisfying η t (γ a (t)) = φ ta (tb). ( In the last equality, we have used the defining property (37) of the infinitesimal flow for the first term, ρ(ξ t ′ 0 (γ a (t ′ ))) = ρ(g a (t ′ )) =γ a (t ′ ) for the second term, and for the third term, where we have used the anchored property (26) in the second equality. Thus for φ that satisfies (27), (30) is equivalent to where we have used the characterization (36) of covariant derivatives of curves. Applying ψ 1,t ′ ξ0 to both sides and using the product rule, the latter equation is equivalent to d dt ψ 1,t ξ0 η t (γ a (t)) = ψ 1,t ξ0 d dǫ ǫ=0 ξ t ǫ (γ a (t)).
Integrating t′ from 0 to 1, and using the definition of η and the property ψ^(1,1)_(ξ_0) = id, we obtain (28). Corollary 3.4. The solution (28) of Theorem 3.3 coincides with Exp*φ_MC. Proof. We saw already in the text preceding the last theorem that the 1-form Exp*φ_MC ∈ Ω¹_π(A_0; τ*A) is anchored and satisfies the Maurer-Cartan structure equation, and, in particular, it satisfies (25). Moreover, the initial condition (27) is satisfied since it is precisely the relation (23) when written out explicitly. The corollary now follows from the uniqueness assertion in the theorem.
The second theorem is the generalization of "Step 2" from section 1. It shows that the solution from the previous theorem is indeed a solution of the realization problem.
Theorem 3.5. Let A be a pre-Lie algebroid and φ ∈ Ω 1 π (A 0 ; τ * A) a solution of (25), (26) and (27). Choose A 0 to be small enough so that φ is pointwise an isomorphism. Then M C φ = 0 if and only if Jac = 0, or more precisely, where ξ is a time dependent section of A satisfying ξ t (γ a (t)) = g a (t) for all t ∈ (0, 1).
Proof. The proof goes along the same lines as the proof of Theorem 1.4. As in Theorem 1.4, we will compute the covariant derivative of the curve t ↦ (MC_φ)_(ta)(tb, tc) in two different ways, where t ∈ (0, 1), x ∈ M, a ∈ (A_0)_x, b, c ∈ A_x and ∇ is some vector bundle connection on A. 1) Consider the map f : (0, 1) × (−δ, δ)² → A, f(t, ε, ε′) = t(a + εb + ε′c). Recall that γ_a is the base curve of the geodesic g_a and that it satisfies γ_a(t) = τ(ta).
where the final expression is the covariant derivative of the curve t → (MC φ ) ta (tb, tc) covering γ a . 2) Since φ ∈ Ω 1 π (A 0 ; τ * A) is a pointwise isomorphism, it induces a linear map φ −1 : Γ(A) → X(A 0 ). Let ξ be as in the statement of the theorem, let η b and η c be time dependent sections of A satisfying η t b (γ a (t)) = φ ta (tb) and η t c (γ a (t)) = φ ta (tc) and let σ be a time dependent section of A satisfying σ t (γ a (t)) = (MC φ ) ta (tb, tc). Letã,b,c be time dependent vector The second equality is a slightly messy yet straightforward computation. It involves expanding MC φ with respect to the chosen connection, using the choices we made above of time dependent sections, and using (25), (27) and (26). In particular, it is used that (25) implies that: In the last equality we express the bracket [ξ t , σ t ] using the infinitesimal flow, see (37).
After equating the two expressions obtained, using the characterization (36) of covariant derivatives of curves and applying ψ^(1,t)_ξ, (33) becomes the required relation. The remaining arguments are identical to those of Theorem 1.4.
3.1. The Poisson Case vs. the Lie Algebroid Case. Given the well known relations between Poisson manifolds and Lie algebroids, it is natural to wonder as to the relation between the instances of the Maurer-Cartan structure equation associated with these structures, i.e. as to the relation between Section 2 and Section 3 of this paper. Let us briefly touch upon this.
In one direction, any Lie algebroid A → M induces a Poisson structure on the total space of the dual vector bundle A * → M known as a linear Poisson structure (see [10]). This generalizes the construction of a linear Poisson structure on the dual of a Lie algebra. At the level of the associated Maurer-Cartan structure equations, it is not hard to verify that, locally and under obvious identifications, the Maurer-Cartan structure equations as well as the solutions are one and the same on both sides of this correspondence. In particular, trivializing A and computing the 1-form (28) will produce the same result as obtained by computing the 1-form (18) associated with the induced trivialization of A * . This is, of course, a generalization of the case of a Lie algebra which was discussed in Remark 2.6.
In the opposite direction, any Poisson manifold (M, π) induces a Lie algebroid structure on the cotangent bundle T*M → M, as originally shown in [3]. In that same paper, the authors prove that the local symplectic realization constructed by Weinstein in [12] (and discussed in section 2 above) has a canonically induced local symplectic groupoid structure on its total space whose associated Lie algebroid is (the restriction of) T*M → M. This same phenomenon occurs at the level of the Maurer-Cartan structure equations. Using the notation of Section 2, the local solution of the Maurer-Cartan structure equation associated with the Poisson manifold (O, π), with O ⊂ V, induces a local solution to the Maurer-Cartan structure equation associated with the Lie algebroid T*O = O × V* → O by differentiation of the coefficients, or more precisely, by the map (35). Note that whereas in the Lie algebroid case we are able to obtain a "wide" solution, i.e. on an open neighborhood of the zero section of T*M → M, in the Poisson case we only obtain a local one around a point in M. It would be interesting to further investigate the relation given by (35) to see if a "wide" solution of the Lie algebroid case induces a "wide" solution of the Poisson case, thus producing yet another proof of the existence of global symplectic realizations.
Appendix A. Facts on (pre-)Lie Algebroids
In this appendix, various notions are recalled which are needed in section 3 for the formulation of the Maurer-Cartan structure equation on a Lie algebroid and its solution. For more details, the reader is referred to [4]. Note that all the notions that appear here and that are presented in [4] do not require the Jacobi identity and are therefore valid for pre-Lie algebroids as they are for Lie algebroids.
Let A → M be a pre-Lie algebroid (see section 3 for the definition). An A-connection on a vector bundle E → M is an R-bilinear map ∇ : Γ(A) × Γ(E) → Γ(E) satisfying the connection-like properties ∇_(fα)e = f∇_α e and ∇_α(fe) = f∇_α e + (ρ(α)f)e, for all α ∈ Γ(A), e ∈ Γ(E) and f ∈ C∞(M). For the remainder of the appendix, let A → M be a pre-Lie algebroid equipped with an A-connection ∇. Note that there will be two different connections that will play a role in this appendix (and in section 3): an A-connection ∇ on A that is part of the data, and an auxiliary vector bundle connection ∇ on A that is used to write down the Maurer-Cartan structure equation globally, and which is not part of the data.
A.1. Time Dependent Sections. A time dependent section ξ of A is a map ξ : If ∇ : X(M )×Γ(A) → Γ(A) is a vector bundle connection, then given a base curve γ : I → M and a curve u : I → A covering γ, the covariant derivative (∇γu)(t) = ((γ * ∇) ∂ ∂t u)(t) can be characterized using time dependent sections as follows: choose a time dependent section ξ of A satisfying ξ t (γ(t)) = u(t) for all t ∈ I, then We will also use time dependent sections to express the bracket of a pre-Lie algebroid in a Lie derivative-like fashion, as one does for the bracket of vector fields. This involves the notion of an infinitesimal flow. Let ξ be a time dependent section of A and ρ(ξ) the corresponding time dependent vector field on M . Let ϕ t,s ρ(ξ) denote the flow of ρ(ξ) from time s to t. The infinitesimal flow, is the unique linear map satisfying the properties ψ u,t ξ • ψ t,s ξ = ψ u,s ξ , ψ s,s ξ = id and d dt t=s Defining the pullback of sections by the infinitesimal flow as (ψ t,s ξ ) * (α)(x) = ψ s,t ξ α(ϕ t,s ρ(ξ) (x)) for all α ∈ Γ(A), x ∈ M , the previous equation can be expressed in the more familiar form For more on infinitesimal flows and their global counterparts, flows along invariant time dependent vector fields on Lie groupoids, see [4].
Let g be an A-path with base curve γ, and let u : I → A be another curve covering γ. The covariant derivative of u with respect to g is the curve ∇ g u : I → A, which is defined in analogy to the usual covariant derivative described above: choose a time dependent section ξ of A satisfying ξ t (γ(t)) = u(t) for all t ∈ I, then A geodesic is a curve g : I → A satisfying the geodesic equation ∇ g g = 0. Geodesics are A-paths. Given any point a ∈ A, there is a unique maximal geodesic g a : I a → A satisfying g a (0) = a with domain I a . The base curve of g a will be denoted by γ a . Geodesics satisfy the following basic property: (38) g sa (t) = sg a (st), ∀ a ∈ A, s, t ∈ R, t ∈ I sa , which can be easily verified by checking that the curve t → sg a (st) satisfies the geodesic equation and then by noting that by uniqueness it must be equal to g sa since at time 0 it takes the value sa.
Let A 0 ⊂ A be a neighborhood of the zero section such that g a is defined up to time 1 for all a ∈ A 0 . The exponential map is defined as exp : A 0 → A, a → g a (1). The point π(exp(a)) ∈ M will be called the target of a and τ = π • exp : A 0 → M the target map.
A.3. The Maurer-Cartan 2-Form. Let Ω * π (A 0 ; τ * A) denote the space of foliated differential forms on A 0 (foliated with respect to the foliation by π-fibers) which take values in τ * A.
The map d ∇ squares to zero if and only if the connection is flat. If M has a foliation F and Ω * F (M ; E) are the foliated forms, then the map d ∇ descends to a map of foliated forms d ∇ : Ω * F (M ; E) → Ω * +1 F (M ; E). We will need the following property whose proof is elementary and will be left out: Lemma A.1. Let E → M be a vector bundle equipped with a connection ∇ and let f : N ֒→ M be a submanifold. Then the following property holds: for any φ ∈ Ω * (M ; E). If N and M are foliated and f is a foliated map, then the property holds for φ ∈ Ω * F (M ; E). In our particular case, the induced pull-back connection τ * ∇ on the vector bundle τ * A → A 0 induces a differential-like map d τ * ∇ : Ω * π (A 0 ; τ * A) → Ω * +1 π (A 0 ; τ * A). A 1-form φ ∈ Ω 1 π (A 0 ; τ * A) is said to be anchored if ρ • φ = dτ , or more explicitly, if ρ(φ a (b)) = (dτ ) a (b) for all a ∈ (A 0 ) x , b ∈ A x (where we are using the canonical identification T a A 0 ∼ = A x ).
The sum of these two equations vanishes if φ is anchored.
We call the 2-form given by (39) the Maurer-Cartan 2-form and denote it by MC φ . | 2015-12-14T21:51:35.000Z | 2015-05-29T00:00:00.000 | {
"year": 2015,
"sha1": "643fcb91ee391e1c5de4ee5808562b3727a528e4",
"oa_license": null,
"oa_url": "http://msp.org/pjm/2016/282-2/pjm-v282-n2-p13-s.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "643fcb91ee391e1c5de4ee5808562b3727a528e4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
51955606 | pes2o/s2orc | v3-fos-license | High-Field Hole Transport in Strained Si and SiGe by Monte Carlo Simulation : Full Band Versus Analytic Band Models
Monte Carlo results are presented for the velocity-field characteristics of holes in (i) unstrained Si, (ii) strained Si and (iii) strained SiGe using a full band model as well as an analytic nonparabolic and anisotropic band structure description. The full band Monte Carlo simulations show a strong enhancement of the drift velocity in strained Si up to intermediate fields, but yield the same saturation velocity as in unstrained Si. The drift velocity in strained SiGe is also significantly enhanced for low fields while being substantially reduced in the high-field regime. The results of the analytic band models agree well with the full band results up to medium field strengths and only the saturation velocity is significantly underestimated.
INTRODUCTION
The progress in epitaxial growth techniques for unstrained and strained SiGe layers has led to intensified efforts to explore the potential performance enhancements in SiGe-based devices. In particular, the practical usefulness of p-MOSFETs with a channel consisting of strained Si [1] or strained SiGe [2] has recently been demonstrated.
Since field-effect devices operate in the low-field and in the high-field regime, reliable modeling of hole transport is important for both cases. However, previous publications on hole transport in strained Si and SiGe covered only the low-field regime [3] or were restricted to strained SiGe and electric field strengths below 20 kV/cm [4]. Hence, there is a clear need for investigations of high-field effects like velocity saturation, where the consideration of the full band structure is often necessary for accurate results. On the other hand, for devices with realistic germanium profiles, full band Monte Carlo simulations still involve an unmanageable computational burden (e.g. prohibitive memory requirements), and analytic band structure approximations have to be used instead. The aim of this paper is therefore twofold: on one hand, we perform for the first time full band Monte Carlo simulations for strained Si and SiGe in the high-field regime; on the other hand, we present a simple analytic hole band model and evaluate its range of validity.
2. MODEL DESCRIPTION
The full band model for strained Si or SiGe is obtained by nonlocal empirical pseudopotential calculations including spin-orbit interaction [9]. For the analytic band structure we neglect the warping of the three valence bands v = 1, 2, 3 and use a simple parametrization according to E(1 + α_v E) = (ħ²/2)(k∥²/m∥,v + k⊥²/m⊥,v), with E = ε − ε_0,v, because of the feasibility of this formula for applications. The scattering mechanisms included are optical phonons and acoustic phonons in the isotropic and elastic equipartition approximation. In SiGe both Si-type and Ge-type phonons are considered, and alloy scattering is taken into account with the alloy scattering potential adjusted to drift mobility measurements in unstrained SiGe [10]. Exactly the same coupling constants are used with the full band and the analytic band model. The parameters α_v, m∥,v and m⊥,v are adapted to the full band structure for the purpose of transport applications. The starting point is therefore the expression for the Ohmic drift mobility, which for the scattering processes used involves only the density of states (DOS) and the square of the group velocity averaged over an energy surface, ⟨v²⟩ [11]. Then a parabolic expression is used for each band to determine the masses m_DOS and m_cond by adjusting the DOS and ⟨v²⟩, respectively, up to about 40 meV above the band edge to the respective full band results. For a good transport description, the mobility of the analytic band model in Eq. (1) must equal the mobility obtained with the parabolic fits to the full band model. This condition yields in the unstrained case (m∥ = m⊥ = m) the mass m = m_cond^(2/5) m_DOS^(3/5). Finally, using this mass m, the nonparabolicity factor α is obtained from fitting the DOS up to eV. A similar procedure applies in the strained case.
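As an illustration of the analytic band parametrization just described, the following sketch (not from the paper; the numerical inputs are placeholders rather than the authors' fitted values) solves the nonparabolic dispersion for the hole energy and forms the single transport mass m = m_cond^(2/5) m_DOS^(3/5).

import numpy as np

HBAR = 1.0545718e-34    # J s
M0 = 9.1093837e-31      # kg
EV = 1.602176634e-19    # J per eV

def band_energy_eV(k_par, k_perp, m_par, m_perp, alpha):
    # analytic band: E(1 + alpha*E) = hbar^2/2 (k_par^2/m_par + k_perp^2/m_perp),
    # solved for the physical root E >= 0 (alpha in 1/eV, masses in units of m0, k in 1/m)
    gamma = HBAR**2 / 2.0 * (k_par**2 / (m_par * M0) + k_perp**2 / (m_perp * M0)) / EV
    if alpha == 0.0:
        return gamma
    return (-1.0 + np.sqrt(1.0 + 4.0 * alpha * gamma)) / (2.0 * alpha)

def transport_mass(m_cond, m_dos):
    # single mass reproducing the Ohmic drift mobility of the parabolic fits (unstrained case)
    return m_cond**0.4 * m_dos**0.6

print(band_energy_eV(2e9, 2e9, 0.29, 0.29, alpha=0.5))  # energy in eV (placeholder inputs)
print(transport_mass(0.29, 0.55))                        # in units of m0 (placeholder masses)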
VERIFICATION
In Figures 1, 2, 3 and 4, drift mobilities and drift velocities resulting from the full band and the analytic band model are compared with experimental data in the case of unstrained Si and Ge, because an accurate reproduction of the experiments is essential in view of the increased importance of details of phonon and band models in the strained case. Overall good agreement is achieved. Especially the full band model in Figure 2 reproduces accurately the anisotropy of the velocity-field characteristics as well as the saturation drift velocity of Ref. [7]. Within the isotropic band approximation (unstrained case), also the analytic band model yields surprisingly good results and only significantly underestimates the drift velocity above 50 kV/cm. While the saturation velocity is retained in strained Si, the drift velocity at lower fields is considerably improved due to the enhanced population of the light hole band. In contrast, the saturation velocity is reduced in strained SiGe, but there is still a substantial improvement up to intermediate fields. But please keep in mind that no realistic estimate of the corresponding device performance can be based on Figure 5 alone, because advantages like the possibility of modulation doping have to be considered for this purpose as well. The analytic band model again underestimates the drift velocity above 50 kV/cm and somewhat overestimates the anisotropy.
FIGURE 6 Results of the analytic band model for the velocity-field characteristics at 300 K in strained Si grown on a Si0.7Ge0.3 substrate, in unstrained Si and in strained Si0.6Ge0.4 grown on a Si substrate.
FIGURE Temperature dependence of Ohmic drift mobility for holes in unstrained Si: comparison of full band model, analytic band model and experimental results.
FIGURE 3 Velocity-field characteristics of the full band model for unstrained Ge at 77 and 220 K in comparison with experimental results. | 2018-08-14T20:37:27.083Z | 1998-01-01T00:00:00.000 | {
"year": 1998,
"sha1": "21cf38bcc3e0d0ff3289b5e9f927e5512b815654",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/1998/065181.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "21cf38bcc3e0d0ff3289b5e9f927e5512b815654",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
18446812 | pes2o/s2orc | v3-fos-license | Procedural Complications of Spinal Anaesthesia in the Obese Patient
Background. Complications of spinal anaesthesia (SpA) range between 1 and 17%. Habitus and operator experience may play a pivotal role, but only sparse data is available to substantiate this claim. Methods. 161 patients were prospectively enrolled. Data such as spread of block, duration of puncture, number of trials, any complication, operator experience, haemodynamic parameters, was recorded and anatomical patient habitus assessed. Results. Data from 154 patients were analyzed. Success rate of SpA in the group of young trainees was 72% versus 100% in the group of consultants. Trainees succeeded in patients with a normal habitus in 83.3% of cases versus 41.3% when patients had a difficult anatomy (P = 0.02). SpA in obese patients (BMI ≥ 32) was associated with a significantly longer duration of puncture, an increased failure ratio when performed by trainees (almost 50%), and an increased number of bloody punctures. Discussion. Habitus plays a pivotal role for SpA efficiency. In patients with obscured landmarks, failure ratio in unexperienced operators is high. Hence, patient prescreening as well as adequate choice of operators may be beneficial for the success rate of SpA and contribute to less complications and better patient and trainee satisfaction.
Background
Ever since the introduction of spinal anaesthesia more than a century ago, complications have been part of the technique; failed or insufficient block, headaches, nausea, vomiting, and pain around the injection site are common minor complications [1,2]. The technique of spinal anaesthesia (SpA) is considered a basic skill, however, one that first has to be mastered. According to the literature, the incidence of failed or partially failed SpA ranges between 0.5 and 17% [3][4][5]. The incidence of postdural puncture headaches (PDPHs) ranges between 0.7 and 11% depending on the type of needle used [6,7], and transient neurologic syndromes can still be observed after SpA with an incidence of 0-7% [8].
As with many other procedures in medicine, intuition suggests that procedure-specific experience of the operator should be beneficial and reduce complications. However, there is only sparse data available to demonstrate that this is the case for SpA [9,10].
Furthermore, with an increasing number of severely obese patients in western society, anesthesiologists are, more than ever, faced with patients in whom the individual habitus poses a challenge to performing a seemingly simple basic skill like SpA, because it relies on identifiable anatomical structures termed "landmarks." These can be completely obscured in the obese patient [11,12].
The aim of this study was to evaluate the impact of the individual patient habitus on the success rate of SpA and the incidence of immediate complications related to SpA in the context of operator experience.
Methods
After approval from the Ethics Committee of the Medical Faculty of the University of Muenster (protocol 2009-459-f-S), 161 patients planned for elective orthopedic or vascular surgical procedures of the lower limb under SpA were enrolled in the study. Informed consent was obtained from each patient.
Operators were divided into two groups (n = 5 in each group). Group T consisted of anesthetic trainees with ≤1 year of experience in anaesthesia, and group C of anesthetists with ≥5 years of experience in anaesthesia, >150 previously performed SpA, and ongoing regular exposure to SpA.
Exclusion criteria were as follows: Besides demographic data, we recorded the following characteristics: number of puncture trials, change of spinal segment, bleeding from the introducer or the spinal needle, duration of procedure, paresthesias during puncture, spread of sensory and motor block, failed or partially failed SpA as well as hemodynamic changes in blood pressure and heart rate. A relevant hypotensive episode was defined as a systolic blood pressure <85 mmHg or a decrease in systolic pressure >30% below the initial systolic blood pressure. An anaesthesiologist with >20-year experience as an anesthetic consultant assessed each patients' spinal anatomy based on palpation as well as X-rays, when available. Patients were divided into an "easy" and a "difficult habitus for SpA" group. The habitus was considered "difficult" when no spinous processes were palpable at the L3-L5 level and above, which could be used as landmarks to guide the operator to identify a midline. Furthermore, in patients with lumbar scoliosis and subsequent longitudinal rotation of the spinous processes towards the concave side as identified by X-ray, the habitus was considered "difficult." All patients were attached to standard monitoring (noninvasive blood pressure, electrocardiogram, and peripheral oxygen saturation). An intravenous access was established, and an infusion of 1000 mL of a balanced electrolyte solution (Sterofundin-ISO, B.Braun, Melsungen, Germany) was started. Patients were then turned into a lateral position, and after usual sterile preparations SpA was performed with a 25-gauge pencil point spinal needle (PenPoint, B.Braun, Melsungen, Germany). A standard introducer needle was used to facilitate spinal needle puncture. Once a free flow of cerebrospinal fluid (CSF) was obtained, the color of CSF was compared against a color scale measuring the amount of blood in CSF.
Local anesthetics used were isobaric bupivacaine 0.5% (3 mL) for endoprosthetic surgery or isobaric ropivacaine 0.5% (2.5-4 mL) for all other procedures. If the surgical procedure was expected to be of longer duration, 0.1 mg morphine was additionally injected into the subarachnoid space. Statistical analysis was performed using SPSS Statistics 18.0 (SPSS Inc., Chicago, IL, USA). Categorical variables are expressed as frequency and percentage, whereas continuous variables are represented as means with standard deviation or as median and interquartile range (25th percentile; 75th percentile). Before statistical testing, each continuous variable was analysed exploratively for normal distribution using the Kolmogorov-Smirnov test. The Mann-Whitney test was then applied for comparison of nonparametric variables between the two study groups. Nonparametric patient baseline characteristics were assessed using the Kruskal-Wallis test. Friedman's signed rank test was used to compare nonparametric time-dependent variables, and the chi-square test for comparison of categorical variables.
Differences were considered statistically significant at P < 0.05.
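For readers who wish to reproduce this kind of analysis outside SPSS, the following sketch (ours; the data are synthetic placeholders, not study data) shows analogous tests and the hypotension criterion described above using Python's scipy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# synthetic placeholder data: puncture duration (s) for trainees (T) and consultants (C)
duration_T = rng.gamma(shape=4.0, scale=60.0, size=80)
duration_C = rng.gamma(shape=4.0, scale=35.0, size=74)

# exploratory check of normality (Kolmogorov-Smirnov against a fitted normal)
print(stats.kstest(duration_T, 'norm', args=(duration_T.mean(), duration_T.std())))

# comparison of a nonparametric variable between the two groups (Mann-Whitney U)
print(stats.mannwhitneyu(duration_T, duration_C))

# comparison of categorical outcomes (chi-square), e.g. failed vs successful SpA
table = np.array([[22, 58],    # trainees: failed, successful (placeholder counts)
                  [0, 74]])    # consultants: failed, successful (placeholder counts)
print(stats.chi2_contingency(table))

def relevant_hypotension(systolic_baseline, systolic_now):
    # criterion used in the study: SBP < 85 mmHg or a drop of > 30% from baseline
    return systolic_now < 85 or systolic_now < 0.7 * systolic_baseline

print(relevant_hypotension(140, 92))   # True: a decrease of more than 30% from baseline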
Results
161 patients were enrolled in the study. 7 patients were excluded due to changes in the treatment plan. Complete data sets of 154 patients were subsequently analyzed.
Demographic data of all patients is displayed in Table 1.
Overall success rate of SpA in the group of young trainees was 72% versus 100% in the group of consultants. 51 (35%) patients were rated to have a "difficult" anatomy/habitus. Trainees succeeded to perform SpA in patients with an easy habitus in 83.3% of cases versus 52.4% when patients had a difficult anatomy (P = 0.005). When trainees failed a SpA, an operator from group C took over, and they were successful in 100% of the cases hence all patients enrolled in the study had the planned surgical procedure done under SpA. Table 2 lists specific complications encountered in both operator groups and the two patient groups. Obese patients with a BMI ≥ 32 were significantly higher at risk to experience complications during SpA. Duration of puncture was longer, trainees failed SpA in almost half the cases, and there were significantly more bloody punctures and a higher incidence of paresthesias. Furthermore, even consultants required 3 or more punctures to perform successful SpA in 42.5% of the patients with a BMI ≥ 32. The height of the achieved sensory and motor block was not related to weight or BMI of the patient.
Consultants caused fewer paresthesias when performing SpA compared to trainees; however, the difference was not statistically significant (P = 0.31). Patients who were rated to have a difficult habitus had significantly more paresthesias during puncture than patients with identifiable landmarks (13.2 versus 2%; P = 0.005). Furthermore, patients with a difficult habitus had significantly more pain during the procedure than patients with an easy habitus (11.3 versus 1.9%; P = 0.02).
Bradycardia with a heart rate of 45 beats per minute or below was observed in 9 (6%) patients and was not significantly related to the local anaesthetic used but was significantly correlated with the level of puncture.
Interestingly, patients who required 4 or more punctures to place a successful SpA had a significantly greater drop in blood pressure.
On day one postoperatively, two patients (1.3%) showed typical features of a transient neurologic syndrome, 6 patients (3.9%) reported difficulties passing urine during the first 12 hours, but no patient required bladder catheterization. 15 patients (9.7%) had one or more episodes of PONV (Table 3).
No major complications such as severe hemodynamic disturbances, cardiac arrest, cauda equina syndrome, or permanent neurologic complications were observed.
Discussion
Spinal anaesthesia has an excellent safety record in terms of major complications. However, there is a significant number of minor complications that-each on its own-may cause unpleasant sequelae for the patient [3,4,13]. The majority of complications are associated with the procedure itself. Insufficient or failed SpA ranges from 0 to 17% and bloody punctures as well as significant hypotension are not uncommon [3,9]. The current study shows that the overall failure rate of SpA is comparable to previously published data. We have shown that success and failure rate appears to be directly dependent on the operator's experience and the individual patient habitus. Trainees failed significantly more attempts to perform SpA, had more difficulties placing SpA in patients with obscured landmarks, and had significantly more bloody punctures, and the procedure duration was significantly longer as compared to experienced specialists. It has been shown previously that SpA is a complex procedure that is more difficult to master than, for example, endotracheal intubation [14]. Furthermore, it has been estimated that the experience of around 100 performed SpA is required to achieve a 90% success rate [15]. Our data shows that young trainees had a success rate of 84% in patients with a normal anatomy, indicating that some trainees have probably mastered the technique while others were still on the ascending part of the learning curve. However, this picture changes completely when patients present with obscured landmarks or difficult anatomy. Trainees, who were able to perform SpA successfully in anatomically "easy" patients, suddenly faced a failure rate of 52% in those patients with a difficult habitus, significantly different to "easy" patients. Consultants were able to place a SpA even in the difficult patients but in 42.5% of cases, 3 or more punctures were required to position the spinal needle in the correct location. To our knowledge, this is the first study that specifically investigated the role of the individual patient habitus by rating landmarks and other anatomical features. Part of educating trainees is to accept that they do have a higher failure rate [16,17], and it is the responsibility of the relevant societies to define what is an acceptable failure rate for which procedure [18]. Based on our findings we postulate that an experienced anesthesiologist should anatomically rate all patients who are about to receive SpA and if the habitus is considered to be difficult, young trainees should probably not perform SpA to avoid frustration and build a more solid foundation based on successfully performed punctures rather than failing every second attempt. However, from our data, it appears that young trainees do have a higher failure rate, but they do not cause significantly more complications. Hence exposure to the difficult patient is relatively safe, once a solid foundation of the technique has been established. We recommend that the level of supervision should be adequate to avoid that the operator's success or fail rate in these patients is significantly lower than in experienced operators. Multiple attempts by young trainees as well as experienced operators lead to a more significant reaction of hemodynamic parameters. Blood pressures dropped significantly more in patients where multiple attempts were necessary. We offer two possible explanations. 
Firstly, multiple attempts may lead to the operator changing spinal segments, and the direction is usually upwards thus causing more sympathetic block. Secondly, multiple attempts may cause stress and enhance anxiety in the patient hence causing disturbances of the autonomous sympathetic regulation. Last but not least, avoiding multiple attempts may also affect patient satisfaction, but we have not investigated that matter.
As a training tool for young trainees as well as a tool to use in the anatomically challenging patient, the introduction of ultrasound-guided SpA may be worthwhile to consider. Some studies have shown increased success rates when ultrasound is used in patients with obscured landmarks or difficult anatomy [19][20][21]. However, this might involve teaching both SpA and the use of an ultrasound machine to trainees at the same time, which may be an even bigger challenge. Furthermore, similar to current discussions on the comprehensive use of ultrasound for central venous catheter placement, it needs to be discussed whether trainees should in general learn to perform landmark techniques before they add ultrasound or vice versa.
Our study has limitations. Firstly, patients were not randomized to experienced or unexperienced operators but were consecutively allocated to operators available. Secondly, operators could not be blinded to patient habitus for obvious reasons. However, operators were blinded to the assessment of the anatomical structures and the subsequent grading.
Since trainees were on the ascending part of the learning curve during the study period, repeated exposure to performing SpA itself may have influenced their individual performance and subsequently the results. Furthermore, our study was not powered to comment on incidences of rare major complications such as severe hemodynamic disturbances, cardiac arrest, cauda equina syndrome, or permanent neurologic complications since this was never the aim of this study.
Conclusion
Albeit a relatively safe technique, SpA has its problems and pitfalls, and our study has shown that increased operator experience results in a higher success rate of SpA. Furthermore, the individual patient's habitus plays a pivotal role when trainees are involved in performing SpA. Even for experienced anesthesiologists this group of patients has its challenges, but the failure rate of SpA is still very low. We conclude that careful patient selection and prescreening, as well as adequate choice of operators, is beneficial for the success rate of SpA and may contribute to fewer complications, greater safety, and better patient and trainee satisfaction.
"year": 2012,
"sha1": "3aab1d41a877c49dce86cb92c551321aa04cfd92",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/arp/2012/165267.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56cbb67d8af6da95d6528a70dee8953a8d7ec2e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251282312 | pes2o/s2orc | v3-fos-license | Characterization of protein complexes in extracellular vesicles by intact extracellular vesicle crosslinking mass spectrometry (iEVXL)
Abstract Extracellular vesicles (EVs) are blood‐borne messengers that coordinate signalling between different tissues and organs in the body. The specificity of such crosstalk is determined by preferential EV docking to target sites, as mediated through protein‐protein interactions. As such, the need to structurally characterize the EV surface precedes further understanding of docking selectivity and recipient‐cell uptake mechanisms. Here, we describe an intact extracellular vesicle crosslinking mass spectrometry (iEVXL) method that can be applied for structural characterization of protein complexes in EVs. By using a partially membrane‐permeable disuccinimidyl suberate crosslinker, proteins on the EV outer‐surface and inside EVs can be immobilized together with their interacting partners. This not only provides covalent stabilization of protein complexes before extraction from the membrane‐enclosed environment, but also generates a set of crosslinking distance restraints that can be used for structural modelling and comparative screening of changes in EV protein assemblies. Here we demonstrate iEVXL as a powerful approach to reveal high‐resolution information, about protein determinants that govern EV docking and signalling, and as a crucial aid in modelling docking interactions.
INTRODUCTION
Extracellular vesicles (EVs) are small heterogenous vesicles that are secreted by virtually all cells in the body. As important mediators of intercellular communication, EVs can cross biological barriers and have consequently been detected in many biofluids (De La Torre Gomez et al., 2018;Raimondo et al., 2011). Most notably, EVs in patient blood have been very informative as disease diagnostics (Hornung et al., 2020;Hoshino et al., 2020), prognostics and means of treatment stratification Tian et al., 2021;Zhou et al., 2021). EVs can carry specific biomolecular cargoes ranging from nucleic acids and lipids to proteins and glycoconjugates (Haraszti et al., 2016;Royo et al., 2019;Thind & Wilson, 2016;Williams et al., 2018). Quite recently, even fully loaded human leukocyte antigen molecules with presented antigen peptides (Bauzá-Martinez et al., 2021;Raposo et al., 1996;Synowsky et al., 2017) have been documented from EVs, further alluding to the immunological influence EVs might have in the body. Due to possible tissue-homing properties (De Jong et al., 2019;Elsharkasy et al., 2020;Herrmann et al., 2021;Vader et al., 2016;Zipkin, 2020), EVs have also been viewed and tested as drug delivery vectors, although the mechanisms governing tissue-specific EV docking still remain poorly understood.
Through understanding the complex biogenesis of EVs (Raposo & Stoorvogel, 2013), it appears inevitable that EVs would co-package cellular components from their cells of origin that recapitulate their signalling state (Haraszti et al., 2016). Intriguingly, the uptake of activated proteins packaged in EVs by recipient cells could also transfer and propagate oncogenic signalling in the recipient cells, hence bypassing the need for prior ligand activation. Such phenomena hold strong significance and implications for the treatment of malignant disease and cancer metastasis, but also fundamentally explain another mode of inter-cellular crosstalk.
The protein repertoire packaged in EVs and on the surface of EVs is a major determinant of their transit to target sites and function upon recipient-cell uptake. For instance, EVs harbour a large proportion of membrane proteins which can prime the transfer of information between cells by allowing EVs to dock to a target cell or tissue. Although one-to-one specificity between EV markers and target organs has not been extensively established, EV tissue-homing due to specific membrane proteins has been reported (Joshi & Zuhorn, 2021;O'Dea et al., 2020;Park et al., 2019). In addition, EVs are also promising vehicles for therapeutic use since they can be naturally or artificially loaded (Pham et al., 2021) with functional components such as enzymes or nucleic acids (Jafari et al., 2020;Khan et al., 2021). In view of these exciting applications, there remains a crucial knowledge gap in specific EV targeting that should be addressed. For instance, it remains unclear which proteins and protein assemblies facilitate specific tissue targeting and therapeutic packaging, and how these components connect the distribution of EVs to inherited function in recipient cells.
Advances in mass spectrometry have enabled sensitive characterization of the EV protein repertoire (Jeppesen et al., 2019;Rontogianni et al., 2019;Zhang et al., 2018), thereby expanding our knowledge on EV heterogeneity. Nonetheless, these studies use methods that do not inform on the structural aspects of EV protein cargo, which are critical to mechanistically explain EV docking and uptake (Cvjetkovic et al., 2016). Structural characterization of proteins in EVs requires strategies that can directly retrieve structural information without protein extraction, solubilisation or detachment of proteins and protein complexes from the lipid environment. Ideally, any such procedure should also take into consideration the low sample amount and inherent heterogeneity of EVs. These prerequisites still pose a great challenge even to state-of-the-art technologies such as super-resolution, atomic-force, and electron microscopy, where only a few proteins can be studied simultaneously (Kim et al., 2019;Lennon et al., 2019;Parisse et al., 2017;Zeev-Ben-Mordehai et al., 2014).
Leveraging advances in crosslinking mass spectrometry (Chavez et al., 2018;Chen et al., 2019;Gonzalez-Lozano et al., 2020), and our recent breakthrough in extracellular crosslinking of whole cells (Armony et al., 2021), we present here an intact EV crosslinking mass spectrometry (iEVXL) approach for the systematic characterization of protein complexes in EVs. Using a pair of metastatic breast cancer cell lines, MDA-MB-231 and the metastatic-derived counterpart LM2 (Minn et al., 2005), we demonstrate here that iEVXL can provide high-resolution and comparative structural information that accurately recapitulates the native structure of previously reported complexes. Furthermore, such structural information in the form of distance restraints can aid in mapping unknown protein structures when adequately combined with computational modelling, and iEVXL can also enable hypothesis-free screening of differential protein assemblies in EVs. This presents a significant step towards structural characterization of supramolecular complexes present in EVs, and functional elucidation of EV protein complexes responsible for homing and docking mechanisms.
EV isolation and characterization
EVs were isolated from MDA-MB-231 and LM2 breast cancer cell lines using an ultracentrifugation protocol (Figure 1a) and the quality of the EV preparations was assessed by multiple biochemical and biophysical techniques, according to the MISEV guidelines (Théry et al., 2018) (Figure 1b-e). We confirmed that our EV preparations are significantly enriched for proteins from exosomes, plasma membrane and focal adhesions using a sensitive shotgun proteomics experiment (Figure 1b). Amongst these, EV markers such as CD9, CD63, CD81, TSG101 and PDCD6IP (Alix) were consistently enriched in EVs from both cell lines (Figure 1c). CD81 enrichment in EV preparations was also validated by Western Blot (Figure S1). As previously reported, HLA proteins were also highly enriched in EVs when compared to cells (Bauzá-Martinez et al., 2021) (Figure 1c). In addition, the integrity, size distribution and concentration of the EV populations were determined by biophysical methods. Imaging by negative stain transmission electron microscopy (NS-TEM) showed that EVs derived from both MDA-MB-231 and LM2 are largely intact after isolation (Figure 1d). Nanoparticle tracking analysis (NTA) revealed that the EV concentration and size distribution between MDA-MB-231 and LM2 are similar, with a particle:protein ratio (purity) of 2 × 10⁹ and particle size of 110 nm, which are expected for ultracentrifugation preparations (Figure 1e). Collectively, these data indicate high quality and purity of the EVs isolated from both MDA-MB-231 and LM2 cell lines.
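To illustrate how such an EV-versus-cell enrichment comparison can be assembled from label-free proteomics output, a minimal Python sketch is given below. This is not the authors' pipeline: the MaxQuant proteinGroups.txt columns used are standard, but the sample names inside the LFQ column headers are assumptions made for illustration only.

```python
# Sketch (assumptions: a MaxQuant proteinGroups.txt is available and LFQ columns
# are named "LFQ intensity EV_*" / "LFQ intensity Cell_*"; these names are hypothetical).
import numpy as np
import pandas as pd

pg = pd.read_table("proteinGroups.txt")
# Remove decoy and contaminant entries flagged by MaxQuant.
pg = pg[(pg["Reverse"] != "+") & (pg["Potential contaminant"] != "+")]

ev_cols = [c for c in pg.columns if c.startswith("LFQ intensity EV_")]
cell_cols = [c for c in pg.columns if c.startswith("LFQ intensity Cell_")]

# log2 LFQ ratio (EV over source cells), averaged over biological replicates.
log2 = lambda frame: np.log2(frame.replace(0, np.nan))
pg["log2_EV_over_cell"] = log2(pg[ev_cols]).mean(axis=1) - log2(pg[cell_cols]).mean(axis=1)

ev_enriched = pg.sort_values("log2_EV_over_cell", ascending=False)
print(ev_enriched[["Majority protein IDs", "log2_EV_over_cell"]].head(20))
```

The output of such a comparison would be a ranked list of EV-enriched proteins, in which canonical markers (CD9, CD63, CD81, TSG101, Alix) are expected near the top if the preparation is clean.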
Intact extracellular vesicle crosslinking (iEVXL)
Structural characterization of protein complexes in EVs presents unique challenges, since the phospholipid-bound entities are different in chemical accessibility and biochemical composition compared to a cell lysate. EVs are naturally densely loaded with membrane proteins which can be hard to solubilize while preserving their structure and interactions. In this work, we studied the EV interactome by iEVXL. Compared to conventional strategies of interactome profiling that require prior solubilisation and extraction of interacting proteins into a non-membranous, aqueous or detergent system (Liu & Heck, 2015;Pankow et al., 2016), crosslinking the EVs directly with a chemical crosslinker allows the preservation of labile protein-protein interactions in their native state before extensive biochemical extraction and purification. Here, we chose to crosslink with disuccinimidyl suberate (DSS), a crosslinker that covalently links lysine residues within a 30 Å radius (Cα-Cα distance). The partial membrane permeability of DSS also allows it to penetrate the EV membrane slightly, to access peri-membrane protein complexes on the cytoplasmic side, which may otherwise be missed with membrane-impermeable crosslinkers. Using a carefully titrated concentration of DSS (2 × 0.5 mM DSS; optimized in Figure S2), we crosslinked EVs derived from a pair of metastatic breast cancer cell lines (Figure 2a). Only after chemical crosslinking were proteins extracted from the EVs. This ensures chemical immobilization of interacting proteins in the native orientation before harsh isolation of protein complexes from their native lipid environment. After DSS crosslinking, denaturing lysis, and trypsin digestion, crosslinked peptides were enriched by strong cation exchange (SCX) chromatography before mass spectrometry analysis of each SCX fraction. The spectral information obtained by LC-MS/MS was then compared with a protein database to retrieve linked peptide pairs (Figure 2a). In the analysis of protein-protein interactions, and structural mapping within protein complexes, we focused largely on the restraints imposed by these crosslinked peptide pairs.
[Figure 2 caption, panels (c) and (d): Crosslinked proteins were significantly enriched in annotations of "extracellular exosome", "plasma membrane" and "focal adhesion"; dot size represents the number of proteins mapped to each term, and the top five most significantly enriched terms based on false discovery rate (FDR) were plotted. (d) Nanoparticle tracking analysis (NTA): EVs before and after chemical crosslinking were indistinguishable in both size and concentration.]
Starting from ∼100 μg of MDA-MB-231 or LM2 EV proteins, we identified 1959 crosslinked peptide spectra matches (CSMs) and 1756 CSMs, respectively. These CSMs originated from about 140 unique proteins in each EV sample, where up to 52% of these proteins were found crosslinked in EVs derived from both cell lines ( Figure 2b). Furthermore, crosslinked proteins found in EVs from both cell lines mapped to similar ontology terms (Figure 2c), in strong agreement with the EV proteome ( Figure 1b). This provided confidence that crosslinks adequately represent the EV interactome, even with extensive SCX enrichment. Membrane proteins and focal adhesion terms were abundantly mapped in the set of crosslinked proteins (Figure 2c). In addition, the crosslinked positions found within the well-known integral membrane proteins integrin β1, Ep-CAM and CD9 were also consistent with the documented membrane topology, where most of the intralinks were found between residues in the extracellular domain ( Figure S3). After crosslinking, size-distribution and concentration of EVs did not change noticeably, as shown by NTA characterization (Figure 2d), indicating that EVs were also not aggregating upon crosslinking. Collectively, these readouts comprehensively ascertain that EVs remain intact and largely free from aggregation at the conditions we used for DSS crosslinking.
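The sample-to-sample comparison described above can be tallied with a few lines of code. The sketch below is purely illustrative (not the authors' analysis), and the input file names and column labels (`Protein1`, `Protein2`, per-CSM rows) are assumptions about a hypothetical export format.

```python
# Hypothetical sketch: count crosslink spectrum matches (CSMs) per EV sample and
# the protein-level overlap between the two cell lines. Column names are assumed.
import pandas as pd

def crosslinked_proteins(csm_table: pd.DataFrame) -> set:
    """All proteins that occur in at least one crosslinked peptide pair."""
    return set(csm_table["Protein1"]) | set(csm_table["Protein2"])

mda = pd.read_csv("MDA-MB-231_csms.csv")  # one row per CSM (assumed export)
lm2 = pd.read_csv("LM2_csms.csv")

prot_mda, prot_lm2 = crosslinked_proteins(mda), crosslinked_proteins(lm2)
shared = prot_mda & prot_lm2
print(f"CSMs: MDA-MB-231={len(mda)}, LM2={len(lm2)}")
print(f"Crosslinked proteins: {len(prot_mda)} vs {len(prot_lm2)}, shared={len(shared)}")
print(f"Shared fraction of the LM2 set: {len(shared) / len(prot_lm2):.0%}")
```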
Structural characterization and modelling of native protein complexes by iEVXL
Experimentally, not only were EVs amenable to our chemical crosslinking workflow ( Figure 2), but many crosslinked peptide pairs could also be identified by mass spectrometry subsequently. This provided spectral evidence of EV protein-protein interactions but may also allow structural characterization of native protein complexes in EVs. Coupled to the use of partially membrane permeable DSS as chemical crosslinking agent, we hypothesized it could be possible to retrieve structural information for protein complexes either on the EV external surface, or encapsulated in the EV, but close to the EV membrane. Such data could potentially be very informative to study EV docking, as well as the structural basis of EV interactions with recipient cells. The untargeted nature of iEVXL allows the discovery of important EV protein complexes not known a priori and is also suitable for retrospective interrogation of protein-protein interactions. Most importantly, the unique membrane curvature of EVs could mean that the membrane proteins might have unique intra-membrane conformations which could only be studied with minimally disruptive techniques like iEVXL. We demonstrate this possibility here by mapping the crosslinks found in our dataset on available protein structures, focusing on high abundant EV proteins which have been previously linked to the metastatic potency of EVs.
In this direction, we first examined abundant crosslinks involving α-enolase in this dataset. Α-enolase is a glycolytic enzyme commonly found in EVs derived from tumour cells (Almaguel et al., 2020;Didiasova et al., 2019;Jiang et al., 2020), and has been shown to function as a dimer, with its structure previously resolved by X-ray crystallography (Kang et al., 2008). We mapped the α-enolase crosslinks found in our dataset to the high-resolution crystal structure of this protein (PDB: 3b97; 2.2 Å), and all the crosslinks we detected were within the distance of 30 Å, a restraint imposed by the physical length of DSS ( Figure 3a). This suggests that XL-MS with DSS on EVs can recapitulate well the natural dimeric structure of α-enolase. Similarly, by mapping the crosslinks involving 14-3-3 proteins detected in the same dataset, we were also able to confirm the validity of crosslinks involving 14-3-3 against the crystal structure of 14-3-3 α/β-heterodimer (PDB: 4dnk, 2.2 Å). In particular, 14-3-3 proteins have been shown to be abundant in oncogenic EVs (Rontogianni et al., 2019;Wang et al., 2018), which is consistent with the metastatic breast cancer origin of our EVs. Since the amino acid resolution of XL-MS could distinguish between the different 14-3-3 isoforms, 'mix-n-match' assembly of 14-3-3 dimers can be distinguished. We mapped all the 14-3-3 crosslinks detected on a representative αβ-heterodimer ( Figure 3b). Collectively, these demonstrate proof-of-principle that structural mapping by chemical crosslinking on intact EVs is feasible using a partially membrane-permeable crosslinkers such as DSS, and can detect endogenous protein complexes in documented conformations.
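The distance check described above can be reproduced with a short script. The sketch below is not the authors' code: it assumes the crystal structure has been downloaded locally, and the residue pairs listed are placeholders rather than the actual crosslinks reported in the paper. It measures Cα-Cα Euclidean distances for crosslinked lysine pairs and flags violations of the ~30 Å DSS restraint.

```python
# Sketch (assumptions: "3b97.pdb" downloaded locally; residue numbers are placeholders).
import numpy as np
from Bio.PDB import PDBParser

def ca_distance(structure, chain1, res1, chain2, res2, model=0):
    """Euclidean distance (Angstrom) between the C-alpha atoms of two residues."""
    ca1 = structure[model][chain1][res1]["CA"].coord
    ca2 = structure[model][chain2][res2]["CA"].coord
    return float(np.linalg.norm(ca1 - ca2))

parser = PDBParser(QUIET=True)
enolase = parser.get_structure("ENO1", "3b97.pdb")

crosslinks = [("A", 60, "B", 89), ("A", 193, "A", 228)]  # placeholder lysine pairs
for c1, r1, c2, r2 in crosslinks:
    d = ca_distance(enolase, c1, r1, c2, r2)
    verdict = "OK" if d <= 30.0 else "violates DSS restraint"
    print(f"{c1}{r1} - {c2}{r2}: {d:.1f} A ({verdict})")
```

Running the same check against every detected crosslink gives the distance histograms referred to in Figure 3.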
Taking the capabilities of structural mapping one step further, we used structural information from iEVXL to extend partially resolved protein structures. The human moesin protein is highly abundant in MDA-MB-231 and LM2 EVs and contains a poorly characterized long α-helix. To date, a full-length structure of insect (Spodoptera frugiperda) moesin has been determined in a closed conformation with its α-helix folded as an antiparallel coiled coil (Li et al., 2007). Small-angle X-ray scattering data combined with crystal structures of the moesin homologue ezrin confirmed this arrangement for human moesin and suggest these proteins can form both inactive monomers and active domain-swapped antiparallel dimers (Phang et al., 2016). From our data, four crosslinks were within the α-helical domain of human moesin, including one between the two supposed α-helices of the coiled coil. Initially, we mapped our data onto the AlphaFold (Jumper et al., 2021) predicted human moesin structure, which indeed shows a coiled-coil architecture between residues 350 to 450. Although our crosslinks supported the coiled-coil section of the AlphaFold model, the coiled coil was followed by a long disordered stretch, decreasing our confidence in the AlphaFold model. Hence, we instead generated a homology model of human moesin based on the Spodoptera frugiperda structure (Li et al., 2007) (PDB: 2I1K). All the crosslinks found in our data were within 30 Å when mapped on the homology-modelled human moesin structure (Figure 3c, histogram). In addition, our crosslinks also supported the coiled-coil type of architecture (Figure 3c, insert) and thus a closed (inactive) conformation of the protein. Therefore, the EV crosslinking data we have generated may also be used to supplement and weigh in on contradictory structural reports, to propose the more plausible structure in combination with other structural techniques.
[FIGURE 3 caption: Visualization of detected crosslinks on protein structures. (a) Crosslinks of EV α-enolase, mapped on the high-resolution crystal structure of homodimeric α-enolase (PDB: 3b97); a histogram summarizing the measured Euclidean distances (in Å) between pairs of uniquely crosslinked lysine residues is shown on the right. (b) Crosslinks of EV 14-3-3 mapped on the high-resolution crystal structure of the 14-3-3 αβ complex (PDB: 4dnk). (c) Structure of homology-modelled human moesin (coloured white to blue from N- to C-terminus), with crosslinks mapped on the model; the insert zooms into residues 375-450 of moesin, a region proposed to display a coiled-coil type of architecture. Consistent with the length of the DSS crosslinker, all crosslinks detected (a-c) were within Euclidean distances of 30 Å. (d) Mode of interaction for CD151 and the integrin α3-β1 dimer in MDA-MB-231 and LM2-derived EVs. Left: proposed mode of interaction via the cytoplasmic tails; right: protein interaction map showing unique crosslinks detected in EVs that support this interaction model. Blue crosslinks were detected in MDA-MB-231 EVs and yellow crosslinks in LM2 EVs.]
Finally, our iEVXL approach may also allow observations about higher-order structural assemblies, for instance in the interaction between integrin α 3 β 1 dimer and tetraspanin CD151 (Figure 3d). The crosslinks revealed that CD151 interacts with the integrin αβ dimer via the cytoplasmic tails (Figure 3d, left). Moreover, our data suggested that this interaction can occur in two ways, since the crosslink from CD151 to the α subunit was found exclusively in the LM2 EVs and the crosslink from CD151 to the β subunit exclusively in the MDA-MB-231 EVs (Figure 3d, yellow and blue dotted lines). This interaction has been documented previously by AP-MS (Huttlin et al., 2021) and strongly implicated in metastasis (Brzozowski et al., 2018;Li et al., 2021;Yang et al., 2010;Zhu et al., 2021), although crucial structural information regarding interacting domains could not be inferred from classical interaction studies. This highlights another key advantage of iEVXL as a complementary technique. Therefore, we demonstrate with four examples of structural modelling, that distance restraints from our iEVXL dataset are coherent with complete or partial structures, and that such data can potentially aid in the detailed re-construction of the EV docking interface with recipient cells.
Distinct back-to-back annexin A2 conformation in LM2-derived EVs
In the most ideal way, structural profiling should also be sensitive to changes in structural features between closely related systems.
To test this, we compared the structural features in EVs derived from MDA-MB-231 and the closely related LM2 cells and found a distinct back-to-back conformation for annexin A2 dimers that was unique to LM2-derived EVs (Figure 4). Annexin A2 is a phospholipid-binding protein involved in the endocytic and exocytic pathways. Annexin A2 is a well-established marker of EVs (Jeppesen et al., 2019), and was abundantly crosslinked in this current dataset. Structurally, annexin A2 has been shown to exist as a monomer, dimer or hetero-tetramer (Roesengarth & Luecke, 2004;Waisman, 1995). Monomeric annexin A2 consists of a concave surface on the bottom and a convex surface at the top, from which it is thought to attach to membranes via protruding lysine and leucine residues (López-Rodríguez et al., 2018) ( Figure 4a). Although all the crosslinks found in MDA-MB-231 derived EVs could be explained by monomeric annexin A2 (Figure 4b, blue), five crosslinks found only in LM2-derived EVs exceeded the distance restraints of 30 Å when mapped to the same monomeric annexin A2 structure (Figure 4b, yellow; over-length crosslinks represented by red lines). These distance violations seem to imply that there is substantial non-monomeric annexin A2 in LM2 EVs, but not in MDA-MB-231 derived EVs. Apart from structural differences, protein abundance can sometimes explain fewer crosslinks mapping to a complex. However, Annexin A2 ranked similarly in abundance within each cell line (13th and 16th for LM2 and MDA-MB-231). Hence the non-detection of this back-to-back annexin A2 conformation in MDA-MB-231 is unlikely to be attributed to differential protein abundance alone.
To understand further how the over-length crosslinks that were detected only in LM2-derived EVs might be biologically relevant, we used our crosslinking data for guided molecular modelling. By assuming annexin A2 forms dimers related by C2 rotational symmetry (Plaxco & Gross, 2009), we generated symmetry pairs of intra-protomer crosslinks and analysed the accessible interaction space (Honorato et al., 2021;Van Zundert & Bonvin, 2015;Van Zundert et al., 2017) on annexin A2. Complexes could be found using four out of five pairs of these crosslinks, indicating that these over-length crosslinks were in fact interprotomer crosslinks. Therefore, we selected these as restraints to guide the subsequent docking process. Out of 200 docked complexes generated by HADDOCK, 186 modelled complexes clustered together, indicating that a single type of interaction is likely to explain the crosslink restraints. From these, we selected the best annexin A2 model based not only on the Haddock scores, but also on complex Matched and Non-accessible Crosslink (cMNXL) scores ( Figure S5A, Complex_181w represented by a red dot). While the HADDOCK score is based on Euclidean distances, the cMNXL score is based on the solvent accessible surface distance (SASD). This model revealed a 'back-to-back' conformation, where the convex surfaces from both annexin A2 monomers face the same side (Figure 4c and Figure S5B-C). In further support of this 'back-to-back' annexin A2 conformation, none of the other existing Annexin A2 homodimeric (Roesengarth & Luecke, 2004) (PDB: 1xjl) and heterotetrameric (Ecsédi et al., 2017) (PDB: 5lpu) structures could explain the over-length crosslinks observed specifically in the LM2 dataset ( Figure S4).
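To make the symmetry-pairing step concrete, a small illustrative script is shown below. It is not the authors' code: it simply expands intra-protomer crosslinks into their C2 symmetry mates and writes them as CNS-style unambiguous distance restraints of the kind HADDOCK can accept. The residue numbers and the 30 Å upper bound are placeholders/assumptions.

```python
# Illustrative sketch: generate C2 symmetry mates of intra-protomer crosslinks for a
# homodimer (chains A and B) and emit CNS-style "assign" restraint lines.
# Residue numbers below are placeholders, not the crosslinks reported in the paper.
overlength_crosslinks = [(28, 286), (47, 307)]  # assumed lysine pairs (Uniprot numbering)

def c2_restraints(pairs, max_dist=30.0):
    lines = []
    for r1, r2 in pairs:
        # In a C2 dimer an observed pair (r1, r2) may bridge the two protomers in
        # either orientation, so both chain assignments are written out.
        for seg1, seg2 in (("A", "B"), ("B", "A")):
            lines.append(
                f"assign (segid {seg1} and resid {r1} and name CA) "
                f"(segid {seg2} and resid {r2} and name CA) "
                f"{max_dist:.1f} {max_dist:.1f} 0.0"  # target 30, bounds [0, 30] Angstrom
            )
    return lines

with open("annexinA2_c2_restraints.tbl", "w") as fh:
    fh.write("\n".join(c2_restraints(overlength_crosslinks)) + "\n")
```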
The inter-protomer interaction surface of annexin A2 was quite large (1172.3 Å²) and displayed a neutral electrostatic potential as well as several hydrophobic patches, indicating a hydrophobicity-based interaction (Figure 4d). Given that residues involved in protein interfaces tend to be more conserved than other residues, we also determined the conservation at the protein surface using all the annexin A2 orthologue sequences present in Uniprot. This analysis revealed that, as expected, the plasma-membrane interacting surface (Figure S6, top view) was highly conserved. The interaction surface found in our 'back-to-back' model was also relatively conserved (Figure S6, back view, interface highlighted in black) when compared to other surfaces, including the lateral surfaces (Figure S6, lateral views 1 and 2) which were previously thought to contribute to annexin A2 oligomerization (Matos et al., 2020). Overall, these results strongly supported a novel 'back-to-back' mode of interaction for annexin A2 in LM2-derived EVs.
DISCUSSION
In recent years, EVs have emerged as key mediators of intercellular communication, and have been the focus of many biopharmaceutical and therapeutic developments (Grossen et al., 2021;Herrmann et al., 2021;Zhang et al., 2020;Zipkin, 2020). The protein repertoire and structural features on EVs could critically influence the pharmaceutical utility, as these will ultimately determine tissue-targeting and uptake by recipient cells. In this respect, the advent of sensitive shotgun proteomics has made largescale and detailed EV protein characterization rather feasible, and propelled EVs into the scene of clinical diagnostics (Hoshino et al., 2020;Rontogianni et al., 2019). Nevertheless, the challenge to provide resolution on structural features or topology of EVs largely remains. Key questions of functional EV targeting require higher resolution mapping of protein complexes and protein conformations in and on the surface of EVs to be further addressed.
Recently, the interest in understanding EV structural features (Cvjetkovic et al., 2016) and the EV interactome (Rai et al., 2021) has grown significantly, but techniques amenable to structurally characterizing EV protein complexes are remarkably scarce. EVs are unique and heterogeneous entities that significantly challenge existing structural methods. Cryo-electron microscopy and tomography are state-of-the-art techniques that have provided a wealth of structural information on many biologically important complexes. Nonetheless, the stringent requirement for sample homogeneity and low complexity complicates the application of such techniques to study EVs. On the other hand, fluorescence-based techniques such as super-resolution microscopy still hold a resolution limit of around 20 nm (Galbraith & Galbraith, 2011), which is much larger than the interaction space of a protein complex. In this respect, iEVXL, as we describe here, can bridge this gap by providing high-resolution (<30 Å) structural information on interacting proteins in their native environment. Such data may also complement the information obtained from tomography and fluorescence-based techniques, to collectively sketch the EV docking interface. Therefore, we envision the application of iEVXL as an important step towards understanding EV biology and selective docking mechanisms.
In this technical report, we demonstrate that iEVXL can detect differential protein interactions in EVs, when coupled to chemical crosslinking with partially membrane-permeable crosslinkers such as DSS. We show that detailed and hypothesis-free analyses of EV membrane interactomes are feasible at the scale of ∼100 μg of EV proteins, and the structural information obtained is highly reliable and consistent with known experimental structures. With the demonstrated cases of α-enolase, 14-3-3 α/β, moesin, CD151-integrin α3β1, and annexin A2 dimers, we showcase a range of utility for iEVXL, in protein structure interface mapping, oligomeric isoform docking, flexible-region structural modelling, and determination of higher-order multimeric structural assemblies. As we overcome the generic sensitivity limitations of protein-interaction mass spectrometry, we expect the moderate sensitivity of iEVXL to improve further. Notwithstanding, we envision that the broader application of iEVXL will allow significantly better engineered EV tissue targeting.
Cell culture and EV isolation
MDA-MB-231 (obtained from ATCC) and LM2 cells (provided by The Netherlands Cancer Institute, NKI) were cultured in DMEM supplemented with 10% fetal bovine serum (FBS; HyClone, USA), 10 mM L-glutamine, 50 U/ml penicillin and 50 μg/ml streptomycin (Lonza) in a humidified incubator at 37 °C with 5% CO₂. Cells were detached using 10 mM EDTA/PBS for 5 min at 37 °C. Secretion media was prepared by depleting bovine-derived EVs from the culture media. To do so, DMEM containing 20% FBS was centrifuged overnight at 100,000 × g at 4 °C in a Sorvall T-865 rotor (Thermo Fisher Scientific), filtered on a 0.22 μm Stericup device (Millipore, USA), diluted to 10% FBS, supplemented as previously described and kept at 4 °C. For EV secretion, cells were seeded on 10 plates (15 cm in diameter) at 50% confluence and left to attach overnight. After attachment, cells were gently washed 3× with warm PBS, before addition of secretion media. Conditioned media containing secreted EVs was collected after 24 h and 48 h, and fresh secretion media was added to replace the collected conditioned media. Cell viability was measured at the start and end of secretion using the Trypan Blue method, and it remained ∼95%. The conditioned media were spun down at 300 × g for 10 min to deplete cells, transferred to clean 50 ml Falcon tubes, spun down at 10,000 × g for 40 min to deplete cell debris and larger vesicles, transferred to clean 50 ml Falcon tubes and kept at 4 °C. The two collections were pooled and EVs were immediately pelleted by ultracentrifugation at 120,000 × g at 4 °C for 2 h in a Sorvall T-865 rotor. The pellet was resuspended by gentle pipetting in 10 ml cold PBS supplemented with 50 μg/ml DNase I (Sigma-Aldrich), to decrease nucleosome contamination of the EV preparations. Purified EVs were finally pelleted again by ultracentrifugation at 120,000 × g at 4 °C for 2 h. The EV pellet was thoroughly resuspended in 400 μl of PBS, spun down at 10,000 × g for 5 min and the EV-containing supernatant was kept. Aliquots were separated for further characterization of the EV populations.
EV crosslinking and protein digestion
MDA-MB-231 and LM2-derived EVs were crosslinked using disuccinimidyl suberate (DSS; Thermo Fisher Scientific) with the optimal 0.5 mM crosslinker concentration, which was determined in an independent experiment (Figure S1). EV samples (at an average concentration of 7.5 × 10⁹ particles/ml, 0.5 mg/ml protein in EV lysate) were crosslinked in 0.5 mM DSS for 20 min at RT. To promote crosslinking of lower-abundant species, a second round of 0.5 mM DSS was added for another 20 min at RT. The crosslinking reaction was then quenched with 100 mM TRIS pH 8.5 for 5 min. Crosslinked EVs were aliquoted for further characterization. Crosslinked EVs were then lysed by thorough vortexing in 0.5% SDC, 8 M urea in 50 mM ammonium bicarbonate, followed by 30 min end-to-end rotation at 4 °C and 15 cycles of sonication at 4 °C (30 s on, 30 s off) in a Bioruptor (Diagenode, Belgium). Proteins were reduced with 4 mM dithiothreitol (DTT) at RT for 60 min, alkylated with 16 mM iodoacetamide (IAA) at RT for 30 min in the dark, which was then quenched by addition of 4 mM DTT. Proteins were first digested by addition of Lys-C (at a 1:50 ratio (w/w); Wako, Japan) at 37 °C for 2 h, followed by dilution to 2 M urea and further digestion with trypsin (at a 1:50 ratio (w/w); Sigma Aldrich) at 37 °C overnight. Protein digestions were stopped by acidification to 5% FA, and precipitated SDC was pelleted by centrifugation at 20,000 × g at 4 °C for 30 min. Supernatants were carefully collected, desalted using Sep-Pak C18 cartridges (1cc; Waters, MA, USA), vacuum dried and stored at −20 °C until further use.
LC-MS/MS of crosslinked SCX fractions
The data was acquired with an Ultimate 3000 system (Thermo Fisher Scientific) coupled to an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher Scientific). Peptides were trapped (Dr. Maisch Reprosil C18, 3 μm, 2 cm × 100 μm) for 2 min in 5% solvent B at a flow rate of 300 nl/min, before being separated on an analytical column (Agilent Poroshell, EC-C18, 2.7 μm, 50 cm × 75 μm). Solvent B consisted of 0.1% formic acid in 80% acetonitrile while solvent A consisted of 0.1% formic acid in water. Crosslinked peptides were then separated in the analytical column at a fixed flow rate of 300 nl/min as follows: each SCX fraction was separated using an optimal 95 min linear gradient (ranging from 9-40% to 6-35% B) followed by a 3 min steep increase to 99% B, a 5 min wash at 99% B and a 10 min re-equilibration step at 5% A. The mass spectrometer was operated in data-dependent mode (DDA). Peptides were ionized in a nESI source at 1.9 kV and focused at 40% amplitude of the RF lens. Full scan MS1 spectra from 350-2200 m/z were acquired in the Orbitrap at a resolution of 60,000 with the AGC target set to 3 × 10⁶ and under automated calculation of maximum injection time. Cycle time for MS2 fragmentation scans was set to 2 s. Only peptides with charge states 3-8 were fragmented, and dynamic exclusion was set to a duration of 16 ms. Fragmentation was done using a stepped HCD collision energy strategy (NCEs: 28, 31, 34%). Fragment ions were accumulated until a target value of 1 × 10⁵ ions was reached under an automated calculation of maximum injection time, with an isolation window of 1.4 m/z before injection in the Orbitrap for MS2 analysis at a resolution of 30,000. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (Vizcaíno et al., 2016) partner repository with the dataset identifier PXD029591.
Crosslinking database search and data analysis
For crosslinked peptide analyses, spectra were extracted from "raw" files from precursors ranging between 350 and 20,000 Da, filtered by a signal-to-noise ratio of 2 and converted to MGF format using Proteome Discoverer software (v2.4, Thermo Scientific). MGF files were searched in pLink 2 (Chen et al., 2019) against a database containing the 1000 most abundant proteins of both MDA-MB-231 and LM2 EV proteomes, as determined in section 4.10, and appended with the sequences of common FBS contaminants to avoid misidentification of crosslinked peptide sequences from Bos taurus. The database was further curated to remove signal peptides from the protein sequences (note that positions within proteins still follow the Uniprot numbering). DSS was set as a non-cleavable crosslinker, trypsin was set as the digestion enzyme, and up to three missed cleavages were allowed. Peptide length was set to between six and 60 amino acids, precursor and fragment ion tolerance were set to 10 and 20 ppm, respectively, and oxidation of methionine and acetylation of protein N-terminus were set as variable modifications while carbamidomethylation of cysteines was set as a fixed modification. FDR was set at 1% at all levels, which was calculated by using a reverse decoy database strategy. E-scores were not computed to minimize processing times. Crosslinking and proteomics data were analysed using Excel and in-house built R scripts (R Development Core Team, R 2011), and plots refined with Illustrator 2020 (Adobe, USA). Spectra and site files from pLink were processed to generate the tables in the supplementary data, and the scripts used to curate pLink outputs have been deposited in GitHub (https://github.com/hecklab/pLink-results-analysis). Only crosslinks identified with sufficient spectral evidence (≥ 2 CSMs per cell line) were kept, while crosslinks involving histones, a debatable contaminant in EV preparations, were not analysed further. In cases of protein ID ambiguity due to shared crosslinked peptides between different proteins, intra-links were preferred over inter-links.
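The post-search filtering rules listed above can be expressed compactly in code. The sketch below is not the authors' R scripts (those are on GitHub); it is a Python approximation that assumes hypothetical column names for a flattened pLink export, and uses a simple prefix check as a stand-in for proper histone annotation.

```python
# Sketch of the post-search filtering (assumed column names; the real pLink output
# format and the authors' curation scripts may differ).
import pandas as pd

csms = pd.read_csv("plink_crosslinks.csv")  # one row per crosslink spectrum match

# 1) Keep crosslinks supported by at least 2 CSMs within a cell line.
keys = ["CellLine", "Protein1", "Site1", "Protein2", "Site2"]
counts = csms.groupby(keys).size().reset_index(name="n_csms")
links = counts[counts["n_csms"] >= 2].copy()

# 2) Drop crosslinks involving histones (a debatable contaminant in EV preparations).
histone = links["Protein1"].str.startswith("HIST") | links["Protein2"].str.startswith("HIST")
links = links[~histone]

# 3) When a crosslinked peptide pair maps ambiguously, prefer the intra-protein
#    interpretation over the inter-protein one.
links["is_intra"] = links["Protein1"] == links["Protein2"]
links = (links.sort_values("is_intra", ascending=False)
              .drop_duplicates(subset=["CellLine", "Site1", "Site2"], keep="first"))

links.to_csv("crosslinks_filtered.csv", index=False)
```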
Molecular modelling
A homology model of human moesin (2-577) was generated using SWISS-MODEL (Waterhouse et al., 2018), with the (almost) full-length Spodoptera frugiperda moesin structure 2I1K (Li et al., 2007) as a template (homology: 57.89% identity). The N-terminal residue missing from this model was manually added using Coot (Emsley & Cowtan, 2004). The AlphaFold-predicted structure of annexin A2 (Uniprot ID: P07355) was used for the molecular docking procedures. The DisVis webserver (Honorato et al., 2021;Van Zundert & Bonvin, 2015;Van Zundert et al., 2017) was used to check compatibility between the crosslinks (including their symmetry mates). Molecular docking was performed with the HADDOCK 2.4 web service (Honorato et al., 2021;Van Zundert et al., 2016). The default parameters of HADDOCK were used with unambiguous restraints based on the crosslinks. The HADDOCK structures were scored based on the crosslink solvent accessible surface distances (SASDs) using XLM tools (Sinnott et al., 2020). The best docking models were picked based on both the HADDOCK and the cMNXL scores (Bullock et al., 2018). Cocomaps (Vangone et al., 2011) was used to determine which residues were in close contact. The interface was defined as the residues predicted to be in close contact for both annexin A2 copies in the modelled structure. For conservation analysis, all the annexin A2 orthologs present in Uniprot (excluding low-quality proteins) were aligned using the Clustal Omega (Sievers et al., 2011) algorithm and the alignment and phylogenetic tree were fed to ConSurf (Ashkenazy et al., 2016) to calculate the conservation score and visualize it on the protein structure.
Negative stain electron microscopy (NS-TEM)
Thin layer continuous formvar/carbon-coated copper mesh grids (Ted Pella 400 mesh Cu, 01754-F) were glow discharged for 10 s at 10 mA, and immediately incubated with 3 μl of undiluted EVs in PBS for 45s. Excess solution was blotted away, and the samples were stained first by a quick immersion in 2% (w/v) of uranyl acetate followed by re-staining for 1 min with the same reagent. After each staining step, the excess solution was blotted away. Grids were then dried at RT before electron microscopy imaging. NS-TEM data was collected on a Talos L120C transmission electron microscope (Thermo Fisher Scientific) operated at 120 kV. Images were acquired with a 4k × 4k Ceta CMOS camera (Thermo Fisher Scientific) at a magnification of 11000× corresponding to a pixel size of 13.6 Å.
Nanoparticle tracking analysis (NTA)
EVs were analysed in a NanoSight NS500 (Malvern Panalytical, UK), equipped with an sCMOS camera and a Blue405 laser. The camera level was set to 16. The samples were diluted 1:500 in PBS to a final volume of 1 ml to be in the optimal range of operation (between 30-100 particles/frame). Four videos of 1 min were taken at 25 FPS and averaged with the built-in NanoSight software NTA v.3.4 using a detection threshold of 5.
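The per-video averaging and dilution correction implied by this protocol can be reproduced outside the vendor software. The sketch below is illustrative only: the CSV export format (columns `size_nm`, `conc_per_ml`, one file per video) is an assumption, not the actual NanoSight output.

```python
# Sketch (assumed export format): average the four NTA videos and back-calculate
# the concentration of the undiluted EV stock from the 1:500 dilution.
import pandas as pd

DILUTION = 500
videos = [pd.read_csv(f"nta_video_{i}.csv") for i in range(1, 5)]  # size_nm, conc_per_ml

summary = pd.concat(videos).groupby("size_nm", as_index=False)["conc_per_ml"].mean()
summary["conc_per_ml"] *= DILUTION  # correct for the dilution in PBS

total = summary["conc_per_ml"].sum()               # rough total over size bins
mode_size = summary.loc[summary["conc_per_ml"].idxmax(), "size_nm"]
print(f"Total concentration: {total:.2e} particles/ml; mode size: {mode_size:.0f} nm")
```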
SDS-PAGE and western blot
Lysed EVs or source-cell material were resolved on 12% Bis-Tris Criterion XT precast gels (Biorad, USA) with 1× XT-MOPS buffer at a fixed voltage of 150 V for about 2 h. Proteins were stained in-gel using Imperial Protein Stain (Thermo Fisher Scientific). For Western detection, proteins were transferred to a PVDF membrane in Towbin buffer (0.025 M Tris, 0.192 M glycine, 20% methanol) at 100 V and 4 °C for 1 h. Membranes were washed 3× with TBS buffer containing 1% Tween-20 (TBST) and then blocked for 1 h in TBST supplemented with 5% Blotting-Grade Blocker (Biorad). Primary antibody (α-CD81 at 1:200) was incubated at 4 °C overnight in TBST supplemented with 1% milk. Secondary antibody incubation was done using an HRP-conjugated α-mouse IgG antibody (1:2000 dilution) for 2 h at RT in the same buffer. Between and after antibody incubations, membranes were washed 3× 10 min in TBST. HRP signal was visualized using SuperSignal West Dura (Thermo Fisher Scientific) substrate, on an Amersham Imager 600 (GE Healthcare, USA).
MDA-MB-231 and LM2 EV and source-cell proteomic characterization
The proteomic characterization of the EVs and source cell proteomes has been done re-using high resolution data previously published (Rontogianni et al., 2019). Cell culture and EV-isolation conditions were the same and the reproducibility of the source cell and EV proteomes was assessed in independent biological replicates for both cells lines under such conditions. Briefly, the "raw" files from four biological replicates of EVs isolated from either MDA-MB-231 or LM2 cells, as well as three biological replicates of their source cells, were downloaded from Proteome Xchange (PXD012162) and re-searched using MaxQuant (v_1.6.5.0) (Tyanova et al., 2016) against SwissProt human database (downloaded on 09/2019, containing 20,431 protein sequences) appended with common contaminants. Trypsin was set as the digestion enzyme and up to two missed cleavages were allowed. Oxidation of methionine and acetylation of protein N-terminus were set as variable modifications and carbamidomethylation of cysteine was set as a fixed modification. Label-free quantification (LFQ) was enabled using a minimum ratio count of two and both razor and unique peptides for quantification. Match between runs approach was enabled using default parameters. Precursor ion tolerance was set to 20 ppm for the first search and 4.5 ppm after recalibration, and fragment ions tolerance was set to 20 ppm. FDR was set at 1% for both PSM and protein level by using a reverse decoy database strategy. | 2022-08-04T06:17:06.872Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "be8200e409fbde695e983541a384ff5588936706",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "06de86bb2076626b9fe4a49c3a536ccb8c43eb42",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221878799 | pes2o/s2orc | v3-fos-license | Existence of invariant volumes in nonholonomic systems
We derive sufficient conditions for a nonholonomic system to preserve a smooth volume form; these conditions become necessary when the density is assumed to only depend on the configuration variables. Moreover, this result can be extended to geodesic flows for arbitrary metric connections and the sufficient condition manifests as integrability of the torsion. As a consequence, volume-preservation of a nonholonomic system is closely related to the torsion of the nonholonomic connection. This result is applied to the Suslov problem for left-invariant systems on Lie groups (where the underlying space is Poisson rather than symplectic).
1. Introduction. This work is motivated by Liouville's theorem, which asserts that all (unconstrained) Hamiltonian systems preserve the symplectic form (and, consequently, the induced volume form). However, nonholonomic systems are not symplectic (which follows from the fact that nonholonomic systems are not variational). As such, the question of volume-preservation becomes nontrivial. A famous example of this is the Chaplygin sleigh; this system, although energy-preserving, experiences "dissipation" (cf. [31] for a general discussion on stability of nonholonomic systems or [25] for an interpretation via impact systems).
The purpose of this work is to construct a systematic way to determine whether or not a nonholonomic system preserves volume. In particular, we present necessary and sufficient conditions for when there exists an invariant volume form with density depending only on the configuration variables, i.e. f = π*_Q g : T*Q → R, where g : Q → R and π_Q : T*Q → Q is the standard cotangent projection.
Theorem 1.1 (Main Result). Let L : TQ → R be a natural Lagrangian (i.e. the kinetic energy is induced by a Riemannian metric) and D ⊂ TQ be a regular distribution (each fiber has constant dimension). Then, there exists an invariant volume with density depending only on the configuration variables if and only if there exists ρ ∈ Γ(D⁰) such that ϑ_C + ρ is exact, where ϑ_C is the density 1-form constructed in §5.2. Here, D⁰ ⊂ T*Q is the annihilator of D ⊂ TQ, {η^β} is a frame for D⁰, W_α = FL⁻¹(η^α) are dual vector fields, and m_αβ = η^α(W_β).
In particular, suppose that ϑ_C + ρ = dg. Then the following volume form is preserved: exp(π*_Q g) · μ_C, where μ_C is the nonholonomic volume form (cf. Definition 4.1).
Preliminaries on nonholonomic systems are presented in Section 2. Section 3 presents the construction of the "global nonholonomic vector fields" which allow us to work on the whole manifold T*Q and restrict to the constraint distribution after the calculations are performed. The divergence calculation for a nonholonomic system is performed in Section 4. The main result, Theorem 1.1, is proved in Section 5 (cf. Theorem 5.3). Section 6 shows that this 1-form, ϑ_C, is intimately connected to the torsion of the nonholonomic connection, which seems to be a new observation. Section 7 shows how this result is applicable to the Suslov problem and to the problem of invariant volumes on Poisson manifolds. This paper concludes with examples in Section 8. This paper is a continuation of the work done in [6] and, as such, many of the results below can be found there.
Related results can be found in [8], cf. Theorem 4.2 therein. However, there exist a few key differences. Firstly, [8] constructs an almost-Poisson structure on D* = FL(D) ⊂ T*Q and studies its modular class. This differs from our treatment as we define a global nonholonomic vector field on the whole of T*Q and restrict to D* at the end. This has the advantage of avoiding local coordinates and allowing greater freedom in choosing how to express the constraints. Additionally, [8] requires Q to be orientable while we make no such assumption; cf. §8.8 where we consider the case where Q is the Möbius strip.
2. Preliminaries.
2.1. Unconstrained Mechanics. We will first briefly cover the case of unconstrained mechanical systems before discussing nonholonomic systems. A smooth (finite-dimensional) manifold Q is called the configuration space, the tangent bundle T Q is called the state space, and the cotangent bundle T * Q is called the phase space.
A Lagrangian L : TQ → R is called natural if it has the form L(v_q) = (1/2) g_q(v_q, v_q) − V(q), where g is a Riemannian metric on Q and V : Q → R is a smooth function called the potential.
For the most part, we will be dealing with natural Lagrangians (which are hyperregular), and the fiber derivative in this case takes the form FL(v_q) = g_q(v_q, ·). For a given (hyperregular) Lagrangian, we define the Hamiltonian H : T*Q → R via the Legendre transform: H(FL(v_q)) = ⟨FL(v_q), v_q⟩ − L(v_q). While L generates dynamics on TQ variationally, H generates dynamics on T*Q symplectically.
A pair (M, ω), where ω is a symplectic form on the manifold M, is called a symplectic manifold. A vector field, X_H, on M is called Hamiltonian if i_{X_H} ω = dH for some energy H : M → R. Here, i_X ω = ω(X, ·) is the contraction.
We can construct Hamiltonian vector fields on T*Q via the natural symplectic form ω = dq^i ∧ dp_i ∈ Ω²(T*Q). With this symplectic form, the Lagrangian and Hamiltonian formulations are equivalent.
Proposition 2 (cf. Theorem 3.6.2 in [1]). Let L be a natural Lagrangian on Q and H its Legendre transform. Then the integral curves of (1) are mapped to the integral curves of (2) under FL. Furthermore, both systems have the same base integral curves.
Given a symplectic form ω, the n-fold wedge product ω^n is a volume form. An important feature of Hamiltonian systems is that they are always volume-preserving.
Theorem 2.3 (Liouville). Hamiltonian vector fields preserve the symplectic form, L_{X_H} ω = 0, and hence the volume form ω^n.
Proof. This follows immediately from Cartan's magic formula: L_{X_H} ω = d i_{X_H} ω + i_{X_H} dω. The first term vanishes as d i_{X_H} ω = ddH and the second vanishes as ω is closed.
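As an added elaboration (not part of the original text), the step from preservation of ω to preservation of the volume ω^n can be written out explicitly:

```latex
% Added elaboration: from L_{X_H}\omega = 0 to preservation of the volume \omega^n.
\begin{align*}
\mathcal{L}_{X_H}\,\omega^{n}
  &= \sum_{k=1}^{n} \omega^{\,k-1}\wedge\bigl(\mathcal{L}_{X_H}\omega\bigr)\wedge\omega^{\,n-k}
   = n\,\bigl(\mathcal{L}_{X_H}\omega\bigr)\wedge\omega^{\,n-1} \\
  &= n\,\bigl(d\,i_{X_H}\omega + i_{X_H}\,d\omega\bigr)\wedge\omega^{\,n-1}
   = n\,\bigl(d\,dH + 0\bigr)\wedge\omega^{\,n-1}
   = 0 .
\end{align*}
```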
The main goal of this work is to extend Liouville's theorem to nonholonomic systems.
2.2. Constraint Distributions.
Suppose that a Lagrangian system L : T Q → R is subject to certain constraints, i.e. a figure skater who cannot slide perpendicular to the direction of her skate. Constraints involving the velocities of the system are known as nonholonomic constraints (holonomic constraints involve only the positions; this distinction will be made precise below). Everything below that holds true for nonholonomic constraints also works for holonomic constraints, so we will treat everything as nonholonomic and not worry about the distinction. For the most part, we will assume that the constraints are linear in the velocities.
Nonholonomic constraints are normally described as specifying a submanifold D ⊂ T Q that describes the restricted motion. When the constraints are linear in the velocities, the submanifold D is a distribution.
Theorem 2.5 (Frobenius' Theorem). D is involutive if and only if there is a foliation on Q whose tangent bundle equals D.
If D is involutive, it is said to be integrable and the constraints are called holonomic. When D is not involutive, it is nonintegrable and the constraints are nonholonomic.
Constraint distributions are usually described by a family of 1-forms η α .
In this situation, the distribution is integrable if the 1-forms η α can be chosen such that they are all closed: dη α = 0.
2.3. Hamiltonian Nonholonomic Systems.
It is important to note that nonholonomic systems are not described by variational principles (on the Lagrangian side) nor are they symplectic (on the Hamiltonian side). Rather than obeying Hamilton's principle, nonholonomic systems follow the Lagrange-d'Alembert principle. In the Hamiltonian setting, this manifests as (see [16,22] and §5.8 in [3]):
(3)  i_{X^D_H} ω = dH + λ_α π*_Q η^α,
where π_Q : T*Q → Q is the cotangent projection and the λ_α are multipliers to enforce the constraints. Let g be the Riemannian metric underlying the natural Hamiltonian, H (a Hamiltonian is natural if it comes from a natural Lagrangian). For each constraining 1-form η^α, let W_α ∈ X(Q) be the vector field such that g(W_α, ·) = η^α (equivalently, W_α = FL⁻¹ η^α). The constraint distribution D ⊂ TQ on the cotangent side becomes
D* = FL(D) = {α_q ∈ T*Q : ⟨α_q, W_β(q)⟩ = 0 for all β}.
The function P(W) : T*Q → R is the momentum of the vector field W, given by P(W)(α_q) = ⟨α_q, W(q)⟩. The multipliers λ_α in (3) are chosen such that X^D_H is tangent to D* ⊂ T*Q.
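For concreteness, and as an added illustration rather than a formula from the original, the constrained equations (3) can be written in canonical coordinates (q^i, p_i); the sign attached to the multipliers depends on the chosen conventions.

```latex
% Added illustration: coordinate form of the constrained Hamilton equations (3),
% with \eta^\alpha = \eta^\alpha_i\,dq^i; the sign of \lambda_\alpha is convention-dependent.
\begin{equation*}
\dot{q}^{\,i} = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q^i} - \lambda_\alpha\,\eta^{\alpha}_i(q), \qquad
\eta^{\alpha}_i(q)\,\dot{q}^{\,i} = 0, \qquad \alpha = 1,\dots,m .
\end{equation*}
```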
3. Global Nonholonomic Vector Fields. Given a constraint distribution, D * ⊂ T * Q, we can determine the nonholonomic vector field X D H ∈ X(D * ) via (3). Commonly local, noncanonical, coordinates are chosen for D * (cf. §5.8 in [3] and [26]). However, we will instead work with the entire manifold T * Q and define a global vector field X global H . This section outlines an intrinsic (albeit non-unique) way to determine such a vector field. Definition 3.1. For a given constraint submanifold D * ⊂ T * Q (D * need not be a distribution), a realization of D * is an ordered collection of functions C := {g i : T * Q → R} such that zero is a regular value of G = g 1 × . . . × g m and If the functions g i are given by momenta, i.e. g i = P (X i ), then the realization is called natural.
Remark 1. Under the case where the Lagrangian is natural (which provides a Riemannian metric on Q) and the constraint submanifold is a distribution, we can choose the realization to be natural: C = {P(W_1), . . . , P(W_m)}, where W_α = FL⁻¹(η^α).
By replacing D* with a realization C, we can extend the nonholonomic vector field to a vector field on T*Q that preserves the constraining functions g_i. Recall that the form of the nonholonomic vector field is i_{X^D_H} ω = dH + λ_α π*_Q η^α. We construct the global nonholonomic vector field, Ξ^C_H, by requiring that:
(NH.1) i_{Ξ^C_H} ω = dH + λ_α π*_Q η^α for smooth functions λ_α : T*Q → R, and
(NH.2) L_{Ξ^C_H} g_i = 0 for all g_i ∈ C.
Under reasonable compatibility assumptions on C (cf. §3.4.1 in [23]), such a vector field exists and is unique. However, given two different realizations, C and C′, of the same constraint distribution D*, it is not generally true that Ξ^C_H = Ξ^{C′}_H. When both the Hamiltonian and realization are natural, the global field can be explicitly computed via the constraint mass matrix defined below.
Remark 2. The constraint manifold is given by the joint zero level-sets of the g_i while the realization provides additional irrelevant information off of the constraint manifold. This is why, in general, Ξ^C_H ≠ Ξ^{C′}_H, but they agree once restricted.
Definition 3.2. For a natural realization C = {P(W_1), . . . , P(W_m)} and a natural Hamiltonian (so (Q, g) is Riemannian), the constraint mass matrix, m_αβ, is given by orthogonally pairing the constraints, i.e. m_αβ = η^α(W_β) = g(W_α, W_β).
Additionally, its inverse will be denoted by (m^αβ) = (m_αβ)⁻¹.
Proposition 3. The constraint mass matrix, (m_αβ), is invertible.
Proof. This follows from the fact that (m_αβ) is a Gram matrix for a nondegenerate inner product.
We can now write down a formula for Ξ^C_H. Using (NH.1) and (NH.2), we obtain a linear system for the multipliers λ_α involving the standard Poisson bracket {·, ·}. Due to the constraint mass matrix being nondegenerate, the multipliers have a unique solution and the global nonholonomic vector field is given by (4).
Remark 3. The global nonholonomic vector field given by (4) can be extended to the case of nonlinear constraints via Chetaev's rule (which is not necessarily the correct procedure, cf. [19] for a discussion), which will give equivalent results to those in [17] where the "almost-tangent" structure of the tangent bundle is utilized. For Lagrangian systems, Chetaev's rule states that if we have a nonlinear constraint f(q, q̇) = 0, then the constraint force is taken along S*(df), where S : T(TQ) → T(TQ) is the almost-tangent structure of TQ. However, as we are instead on the cotangent bundle, the object we will use will be related to the almost-tangent structure through the fiber derivative. For a constraint realization C = {g_1, . . . , g_m}, the nonholonomic vector field is given by the analogue of (4), where the multipliers are now computed using m_αβ = C* dg_β(X_{g_α}).
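Since the displayed formulas did not survive extraction, here is a sketch of how the multipliers entering (4) can be recovered. This is an added derivation carried out under the conventions i_{X_f}ω = df and {f, g} = ω(X_f, X_g), so the signs may differ from those of the authors' original equation (4).

```latex
% Added derivation sketch (conventions: i_{X_f}\omega = df and \{f,g\} = \omega(X_f,X_g));
% signs may differ from the conventions used in the original equation (4).
\begin{align*}
&\text{By (NH.1), write } \Xi^{\mathcal{C}}_H = X_H + \lambda_\alpha\,Y^\alpha
  \text{ with } i_{Y^\alpha}\omega = \pi_Q^{*}\eta^{\alpha}. \\
&\text{Applying (NH.2) to } g_\beta = P(W_\beta) \text{ and using }
 dP(W_\beta)(Y^\alpha) = -\eta^{\alpha}(W_\beta) = -m_{\alpha\beta} \text{ gives} \\
&\qquad 0 = dP(W_\beta)\bigl(\Xi^{\mathcal{C}}_H\bigr)
        = \{P(W_\beta),\,H\} - \lambda_\alpha\,m_{\alpha\beta}
 \;\Longrightarrow\;
 \lambda_\alpha = m^{\alpha\beta}\,\{P(W_\beta),\,H\}, \\
&\text{so that }
 \Xi^{\mathcal{C}}_H = X_H + m^{\alpha\beta}\{P(W_\beta),\,H\}\,Y^\alpha .
\end{align*}
```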
Definition 3.4. The 1-form given by Therefore, we have which vanishes on D * . A similar argument works for multiple constraints.
Throughout the rest of this work, we will assume that C is a natural realization. This, in turn, requires that the constraints are linear in the velocities / momenta.
4. Nonholonomic Volume. An invariant measure is a powerful tool for understanding the asymptotic nature of a dynamical system. In the case of nonholonomic systems, a smooth invariant measure offers two key insights. The first is the usual case in dynamical systems where an invariant measure allows for the use of the Birkhoff Ergodic Theorem (cf. e.g. 4.1.2 in [13]) as well as for recurrence. The other is unique to nonholonomic systems; even though nonholonomic systems are not Hamiltonian, "nonholonomic systems which do preserve volume are in a quantifiable sense closer to Hamiltonian systems than their volume changing counterparts," [9] (see also [2] and [5]). Therefore, being able to find an invariant measure for a nonholonomic system allows for an ergodic-like understanding of its asymptotic behavior as well as providing a way to "Hamiltonize" a nonholonomic system.
There has already been work done in finding invariant measures in systems where symmetries are present: Chaplygin systems are studied in, e.g. [11,14,23,24], Euler-Poincaré-Suslov systems are studied in, e.g. [3,12], systems with internal degrees of freedom are studied in, e.g. [3,4,30], and [8] studies the case of symmetric kinetic systems where the dimension assumption does not hold. Related work on asymptotic dynamics may be found in [29]. This work, rather, uses an altogether different approach where no symmetries will be used. Additionally, in §5.2, we provide necessary and sufficient conditions for when an invariant measure exists whose density depends only on the base variables, i.e. f = π*_Q g for some g ∈ C∞(Q).
4.1. Nonholonomic Volume form. The symplectic manifold T*Q has a canonical volume form ω^n. However, the nonholonomic flow takes place on a submanifold D* ⊂ T*Q which is 2n − m dimensional. Therefore, ω^n is not a volume form on D*. Here, we construct a volume form on D* which is unique up to the choice of realization. The derivation of this will be similar to the construction of the volume form on an energy surface in §3.4 of [1]. For the realization C = {g_1, . . . , g_m}, set σ_C = dg_1 ∧ . . . ∧ dg_m.
Definition 4.1. If we denote the inclusion map by ι : D* → T*Q, then a nonholonomic volume, μ_C, is given by μ_C = ι*ε, where ε is any (2n − m)-form satisfying σ_C ∧ ε = ω^n.
Proposition 4. Given an ordered collection of constraints, C, the induced volume form μ_C is unique.
Proof. Suppose that ε and ε′ are two forms satisfying σ_C ∧ ε = σ_C ∧ ε′ = ω^n. Then σ_C ∧ (ε − ε′) = 0. Now let ι : D* → T*Q be the inclusion and set α = ε − ε′. Then from the above, we see that the two candidate volumes differ by ι*α. The result will follow so long as ι*α = 0. Suppose that ι*α ≠ 0 and choose vectors v_1, . . . , v_{2n−m} tangent to D* with α(v_1, . . . , v_{2n−m}) ≠ 0, together with u_1, . . . , u_m satisfying dg_i(u_j) = δ_ij; since each dg_i vanishes on vectors tangent to D*, evaluating σ_C ∧ α on (u_1, . . . , u_m, v_1, . . . , v_{2n−m}) gives a nonzero number, which is a contradiction.
Remark 4. Notice that for an ordered collection of constraints the volume form is unique. However, changing the order of the constraints changes the sign of the induced volume form and rescaling constraints rescales the volume form. In this sense, C uniquely determines µ C , but D * only determines µ C up to a multiple.
While examining the failure of Liouville's theorem (Theorem 2.3) for nonholonomic systems, we will see when µ C is preserved under the flow of X D H . More generally, we will consider the existence of a smooth density f ∈ C ∞ (D * ) when f µ C is preserved.
4.2. Divergence. Let ω = dq^i ∧ dp_i be the standard symplectic form on T*Q. This in turn induces a volume form ω^n. It is a known result that Hamiltonian flows preserve this measure; however, nonholonomic flows generally do not. A measure of how much a flow fails to preserve a volume form is described by its divergence. Below, we first discuss some basics of the divergence before applying it to nonholonomic systems.
4.2.1. Divergence Preliminaries.
To understand volume preservation, we will use the notion of the divergence of a vector field (cf. §2.5 of [1] or §5.1 in [13]). Definition 4.2. Let M be an orientable manifold with volume form Ω and X a vector field on M . Then the unique function div Ω (X) ∈ C ∞ (M ) such that L X Ω = div Ω (X)Ω is called the divergence of X. The vector field X is called incompressible iff div Ω (X) = 0.
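A small added remark (not in the original text) that is used implicitly later: rescaling the volume form by a positive density f shifts the divergence by a logarithmic derivative.

```latex
% Added remark: behaviour of the divergence under rescaling of the volume by f > 0.
\begin{equation*}
\mathcal{L}_X(f\,\Omega) = \bigl(X[f] + f\,\operatorname{div}_\Omega(X)\bigr)\,\Omega
  = \bigl(\operatorname{div}_\Omega(X) + X[\ln f]\bigr)\, f\,\Omega,
\qquad\text{so}\qquad
\operatorname{div}_{f\Omega}(X) = \operatorname{div}_{\Omega}(X) + d(\ln f)(X).
\end{equation*}
```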
This definition of divergence generalizes the familiar one from multivariate calculus in which M = R^n and Ω = dx^1 ∧ . . . ∧ dx^n. Indeed, in that case div(X) = Σ_i ∂X^i/∂x^i for X = X^i ∂/∂x^i. Studying the divergence is a useful test to check volume preservation via the following proposition.
Proposition 5 (2.5.25 in [1]). Let M be a manifold with volume Ω and vector field X. Then X is incompressible iff every flow box of X is volume preserving.
Liouville's theorem in this language states that for an unconstrained Hamiltonian system, div ω n (X H ) = 0. That is, Hamiltonian systems preserve the volume induced by the symplectic form. This is, in general, not the case for nonholonomic systems.
4.2.2. Divergence of a nonholonomic system. We now proceed with computing the divergence of a nonholonomic vector field, div_{μ_C}(X^D_H). When this is nonzero, we will be interested in finding a density, f, such that div_{f μ_C}(X^D_H) = 0. This problem will be addressed in §5.
Before we begin with the divergence calculation, we first present a helpful lemma which allows us to relate the divergence of the global nonholonomic vector field with the corresponding restricted vector field.
Leibniz's rule for the Lie derivative provides because the constraints are preserved under the flow. Applying this, we see that Due to the fact that the Lie derivative commutes with restriction, the result follows.
This lemma allows for us to calculate the divergence of the global nonholonomic vector field and to restrict to the constraint distribution afterwards.
Before we compute the divergence of arbitrary nonholonomic systems, we first consider the simplified case where there is only a single constraint present, i.e. C = {P (W )}. Here, we make the normalization η(W ) = 1 to simplify equation (4). The divergence of X D H is given by In order to compute this, we will invoke Cartan's magic formula as well as Lemma 4.3 (restricting to D * will occur at the end): The problem of computing the divergence collapses to calculating di Ξ C H ω (which captures how "non-symplectic" the flow is). Let N be difference between the nonholonomic and Hamiltonian vector fields: Then, from Hamilton's equations, we obtain Returning to the divergence calculation, Applying the exterior derivative yields: Notice that when we wedge di N ω with ω n−1 , the entire first line vanishes and only the diagonal on the second survives. Combining everything, we see that The exact same procedure can be carried out when there are an arbitrary number of constraints. The divergence is then simply 4.2.
4.2.3. An intrinsic form of the divergence. This section concludes with an intrinsic way to interpret (5) and (6). Recall the cotangent projection π_Q : T^*Q → Q; using it, the single-constraint formula can be written intrinsically, and this carries over to the multiple-constraint case.
The formulas (7) and (8) have a structure similar to the curvature of an Ehresmann connection. This is because these formulas have the structure of a projection composed with a vector field bracket. The main difference is that while the curvature of an Ehresmann connection is vertical-valued, these formulas are real-valued. It turns out that the divergence is closely related to the torsion of the nonholonomic connection as will be discussed in §6.
Remark 5. In the same way that the nonholonomic 1-form can be extended to the case of nonlinear constraints via Chetaev's rule (see Remark 3), the divergence described above by (8) can also be extended to nonlinear constraints. The divergence is given by div_{µ_C}(X_H^D) = −n · m^{αβ} · C^* dg_α(X_H, X_{g_β}).
5. Invariant Volumes and the Cohomology Equation.
In general, the divergence of a nonholonomic system does not vanish, as (8) shows. When does there exist a different volume form on D^* that is invariant under the flow? That is, does there exist a density f > 0 such that div_{fµ_C}(X_H^D) = 0? Finding such an f requires solving a certain type of partial differential equation known as a (smooth dynamical) cohomology equation. Solving this PDE is generally quite difficult, but if we add the assumption that f = π_Q^* g for some g : Q → R, then the problem becomes much more tractable and reduces to studying a 1-form called the density form.
5.1. The Cohomology Equation.
What conditions must f satisfy in order for fµ_C to be an invariant volume form? Using the formula for the divergence, together with the fact that the Lie derivative is a derivation, the density f yields an invariant measure if and only if condition (9) holds. Notice that the left-hand side of (9) can be integrated; calling g = ln f, we have the following proposition.
Proposition 6. For a nonholonomic vector field X_H^D, there exists a smooth invariant volume fµ_C if there exists an exact 1-form α = dg satisfying (10). The density is then (up to a multiplicative constant) f = e^g.
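For completeness, here is the standard one-line computation behind Proposition 6, written under the assumption that condition (10) reads dg(X_H^D) = −div_{µ_C}(X_H^D) (this normalization is our assumption). For f = e^g > 0,
\[
\mathcal{L}_{X_H^{D}}\big(f\mu_{C}\big)
= \big(X_H^{D} f\big)\,\mu_{C} + f\,\mathcal{L}_{X_H^{D}}\mu_{C}
= f\Big(dg\big(X_H^{D}\big) + \mathrm{div}_{\mu_{C}}\big(X_H^{D}\big)\Big)\mu_{C},
\]
using X_H^D f = f · dg(X_H^D). Hence fµ_C is invariant precisely when dg(X_H^D) = −div_{µ_C}(X_H^D).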
Therefore the existence of invariant volumes boils down to finding global solutions of the PDE (10). The remainder of this section deals with uniqueness of solutions and with a necessary condition for solutions to exist.

Remark 6. PDEs of the form dg(X) = f, for a given smooth function f and vector field X, are called cohomology equations [10,18]. Thus equation (10) is a cohomology equation.

5.1.1. Uniqueness. The problem of existence is quite difficult in general, and we postpone that discussion until the next subsection, where we assume that the solution has the form f = π_Q^* g. In the meantime, assuming that there exists a function g ∈ C^∞(D^*) solving (10), do there exist other solutions? Suppose that g_1 and g_2 both solve (10). Then their difference must be a first integral of the system: L_{X_H^D}(g_1 − g_2) = 0. Solutions of (10) are therefore unique up to constants of motion; i.e., if g solves (10), then every invariant density has the form e^{g + c} with c a constant of motion (again, up to a multiplicative constant). Invariant measures can thus be thought of as an affine space whose dimension equals the number of first integrals of the nonholonomic system.

5.2. Special Case: Densities depending only on configuration. In general, solving the cohomology equation (10) is quite difficult. It turns out, however, that it is relatively easy to determine necessary and sufficient conditions for solvability when the density is assumed to depend only on the configuration variables. Under this assumption, (8) can be presented in a surprisingly nice way: the divergence can be described by an equivalence class of 1-forms, and the density form, defined below, is a representative element of this class.
Definition 5.2. Let C be a (natural) realization of D^* ⊂ T^*Q. The density form ϑ_C is the 1-form on Q built from the constraint data of C. Studying the 1-form ϑ_C provides necessary and sufficient conditions for the existence of densities depending only on configuration. Recall that D^0 = Ann(D) ⊂ T^*Q is the annihilator of D ⊂ TQ and that Γ(D^0) denotes its sections.

Theorem 5.3. A natural nonholonomic system has an invariant volume of the form (π_Q^* f)·µ_C if and only if there exists a ρ ∈ Γ(D^0) such that ϑ_C + ρ is exact.

Proof. We will show that −n · π_Q^* ϑ_C(X_H^D) = div_{µ_C}(X_H^D), using the coordinate formula for the differential of a 1-form. The computation shows that div_{µ_C}(X_H^D) = −n · ϑ_C(q̇); but q̇ cannot be arbitrary, as it must lie within D. Therefore we can add an element of D^0 to ϑ_C without changing its value on q̇: div_{µ_C}(X_H^D) = −n · (ϑ_C + ρ)(q̇) for any ρ ∈ D^0. Hence a solution depending only on configuration exists if ϑ_C + ρ can be integrated, i.e. if it is exact.
This theorem yields a straightforward algorithm for finding invariant volumes in nonholonomic systems: one only needs to compute the 1-form ϑ_C and determine whether or not it can be made exact by adding a combination of the constraint 1-forms (an element of Γ(D^0)). This procedure will be carried out on multiple examples in §8.
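A small computational aid for the exactness step (an illustrative sketch, not code from the paper; the helper name is_closed and the example form are ours): a 1-form Σ a_i dx^i is closed iff its coefficient Jacobian is symmetric, and on a simply connected chart closedness is equivalent to exactness.

```python
# Closedness test for a 1-form  sum_i a_i(x) dx^i  via symmetry of mixed partials.
import sympy as sp

def is_closed(coeffs, coords):
    """coeffs[i] is the coefficient of dx^i; returns True iff d(alpha) = 0."""
    n = len(coords)
    return all(
        sp.simplify(sp.diff(coeffs[i], coords[j]) - sp.diff(coeffs[j], coords[i])) == 0
        for i in range(n) for j in range(i + 1, n)
    )

# Example: the density form of the thickened-strip example in §8,
# sin(u)cos(u)/(1 + sin(u)^2) du, is closed (hence exact on a chart).
u, v, w = sp.symbols('u v w')
theta_C = [sp.sin(u) * sp.cos(u) / (1 + sp.sin(u)**2), 0, 0]
print(is_closed(theta_C, (u, v, w)))   # True
```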
Remark 7. In the pure kinetic energy case discussed in [8], it is proved that if the system admits an (arbitrary) invariant volume, then one can always find another invariant volume whose density function depends only on the (reduced) configuration variables.
The above shows that exactness of ϑ C determines the existence of a density depending on configuration. How does this depend on the choice of C to realize the constraints? It turns out the answer is independent of the choice of realization.
Theorem 5.4. Let C and C̃ both be natural realizations of the constraint D^*. If ϑ_C + ρ is exact, then there exists ρ̃ such that ϑ_{C̃} + ρ̃ is exact as well. Moreover, if ϑ_C + ρ = df and ϑ_{C̃} + ρ̃ = df̃, then e^f · µ_C = e^{f̃} · µ_{C̃}, modulo a constant of motion.
Proof. Suppose, as in the proof of Proposition 3, that there is a single constraint, with C = {P(W)} and C̃ = {h · P(W)}. Computing ϑ_{C̃} shows that ϑ_{C̃} and ϑ_C differ by an exact term plus a term lying in D^0. The component in D^0 can be disregarded, as it is absorbed into ρ̃. Integrating gives f̃ = f + ln h, and it remains to prove that h µ_{C̃} = µ_C. Recalling Definition 4.1, we have σ_C = dP(W) and σ_{C̃} = P(W) dh + h dP(W), so dP(W) ∧ ε = (P(W) dh + h dP(W)) ∧ ε̃ = ω^n, with µ_C = ι^*ε and µ_{C̃} = ι^*ε̃.
Using the fact that P(W) = 0 under the pullback by ι, the P(W) dh term can be ignored, and we conclude that µ_C = h µ_{C̃}.
Remark 8. It is only possible for e^f · µ_C and e^{f̃} · µ_{C̃} to differ by a constant of motion if there exists an exact form in Γ(D^0). This only happens if the constraints are not completely nonintegrable.
A reason why studying ϑ_C is insightful is that it immediately demonstrates why holonomic systems are measure-preserving. This can be shown with the help of a useful lemma (Lemma 5.5).

Proof. It suffices to check along a curve in the manifold. Let γ : I → Q be a curve and let A(t) = m_{αβ} ∘ γ(t) be the constraint mass matrix along the curve. Note that A(t) is positive-definite and varies smoothly with t. Differentiating det A(t) row by row, we obtain a sum of determinants det A_i(t), where A_i(t) is obtained from A(t) by differentiating the i-th row and leaving all other rows intact.
Expanding det A_i(t) along the i-th row gives a sum of cofactors, where A_{ij}(t) is the (m−1) × (m−1) matrix obtained from A_i(t), and hence from A(t), by deleting the i-th row and j-th column. Next, observe that (−1)^{i+j−1} det A_{ij}(t)/det A(t) is the (j,i)-th entry of the inverse matrix A^{−1}(t) = (b^{ij})(t) and, since A(t) is symmetric, also its (i,j)-th entry. Summarizing, the logarithmic derivative of det A(t) is b^{ij}(t) Ȧ_{ij}(t).
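Stated coordinate-freely, the computation above is the classical Jacobi formula (a standard identity, recorded here because it is what makes the holonomic density form exact in the next proof):
\[
d\,\ln\det\big(m_{\alpha\beta}\big) \;=\; m^{\alpha\beta}\, d m_{\alpha\beta},
\]
so any 1-form of the type m^{αβ} dm_{αβ} is automatically exact. When all η_α are closed, Cartan's formula gives L_{W_β} η_α = d(η_α(W_β)) = dm_{αβ}; so, if the density form is built from the contractions m^{αβ} L_{W_β} η_α as the examples in §8 suggest, it is (up to overall normalization) of exactly this exact type.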
Proposition 7. If the constraints are holonomic, then there exists a ρ ∈ Γ(D^0) such that ϑ_C + ρ is exact. In particular, if C is chosen such that all η_α are closed, ϑ_C is exact.

Proof. When the constraints are holonomic, the 1-forms η_α can be chosen to be closed. The density form then reduces to a 1-form of the type considered in Lemma 5.5, which is exact.
5.3. Example: The Chaplygin Sleigh. As an example of Theorem 5.3, we will prove that no invariant volumes (with density depending only on the configuration variables) exist for the Chaplygin sleigh. The Chaplygin sleigh is a nonholonomic system on the configuration space Q = SE(2), the special Euclidean group, with Lagrangian
\[
L = \tfrac{1}{2}\Big(m\dot{x}^2 + m\dot{y}^2 + (I + ma^2)\dot{\theta}^2 - 2ma\,\dot{x}\dot{\theta}\sin\theta + 2ma\,\dot{y}\dot{\theta}\cos\theta\Big),
\]
where (x, y) ∈ R^2 is the position of the contact point, θ ∈ SO(2) is its orientation, m is the sleigh's mass, I is the moment of inertia about the center of mass, and a is the distance from the center of mass to the contact point (cf. §1.7 in [3]). The nonholonomic constraint is that the sleigh can only slide in the direction it is pointing, ẏ cos θ − ẋ sin θ = 0, which corresponds to the 1-form η = (cos θ) dy − (sin θ) dx.
We wish to compute ϑ_C for the Chaplygin sleigh and show that no volume depending only on configuration exists. Carrying out the computation yields the divergence of the Chaplygin sleigh, equation (11). We want to show that, for any η̃ ∈ Γ(D^0), ϑ_C + η̃ is not exact. Because there is only one constraint, it suffices to show that there does not exist a smooth k such that ϑ_C + k·η is exact, i.e. that the three component conditions in (12) cannot vanish simultaneously. The last two lines of (12) are overdetermined for k in the θ-direction and are inconsistent (unless a = 0, in which case we obtain the trivial solution k ≡ 0). Therefore there does not exist a smooth k such that ϑ_C + k·η is closed. We note that this is compatible with the known result that when a = 0 no asymptotically stable dynamics occur.
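The lack of an invariant volume can also be seen numerically. The sketch below (ours, not from the paper) uses the standard reduced Chaplygin sleigh equations v̇ = aω², ω̇ = −(ma/(I + ma²)) vω, with v the longitudinal velocity and ω the angular velocity (cf. [3], §1.7); the divergence of this reduced field is −(ma/(I + ma²)) v, which is not identically zero when a ≠ 0, and trajectories are attracted to ω = 0.

```python
# Reduced Chaplygin sleigh: volume contraction along trajectories (illustrative values).
import numpy as np

m, I, a = 1.0, 1.0, 0.5                  # mass, inertia about the center of mass, offset
k = m * a / (I + m * a**2)

def rhs(state):
    v, omega = state
    return np.array([a * omega**2, -k * v * omega])

def divergence(state):
    return -k * state[0]                 # trace of the Jacobian of rhs

def rk4_step(state, dt):                 # simple fixed-step RK4 integrator
    k1 = rhs(state); k2 = rhs(state + 0.5*dt*k1)
    k3 = rhs(state + 0.5*dt*k2); k4 = rhs(state + dt*k3)
    return state + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

state, dt = np.array([0.0, 1.0]), 0.01
for _ in range(2000):
    state = rk4_step(state, dt)

print(state, divergence(state))          # omega -> 0 and the divergence stays negative
```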
6. Connections with the Nonholonomic Connection. It turns out that the divergence of a nonholonomic system, and in particular the density form, is encoded in the nonholonomic connection. This interpretation seems to be new. Let (L, Q, C) be a natural nonholonomic Lagrangian system. The nonholonomic connection ∇^C for this system is defined as in §5.3 of [3] and [27], and the equations of motion can then be described via ∇^C_{q̇} q̇ = F, where F contains the forces (including the potential forces).

6.1. Torsion. The nonintegrability of the constraints appears in the torsion of the connection. Indeed, if the constraining 1-forms η^j are all closed (the holonomic case), then the torsion vanishes. It is worth pointing out that the torsion is vertical-valued for X, Y ∈ D. Since the torsion is a (1,2)-tensor, its trace is a (0,1)-tensor; therefore the trace of the nonholonomic torsion is a 1-form, tr T_C. Returning to the density form, we see that the trace of the torsion differs from the density form by something exact. This leads to the following theorem.

Theorem 6.1. A natural nonholonomic system (Q, L, C) has an invariant volume of the form (π_Q^* f)·µ_C if and only if there exists a ρ ∈ Γ(D^0) such that tr T_C + ρ is exact.
Remark 9. The vanishing of the torsion shows that the constraints are integrable, while the integrability of the (trace of the) torsion shows that a volume is preserved.
In the case of nonholonomic systems, the nonholonomic connection is compatible with the metric but has nonzero torsion. This idea extends to arbitrary metric-compatible connections, as the following theorem states.

Proof. Consider the natural volume form Ω on TQ and let X be the geodesic spray of the connection. We compute L_X Ω, and hence the divergence, in coordinates. Using the fact that the connection is compatible with the metric, ∂g_{jk}/∂x^i = g_{lk} Γ^l_{ij} + g_{jl} Γ^l_{ik}, and substituting (14) into (13), we get div_Ω(X) = (Γ^i_{ki} − Γ^i_{ik}) v^k. It remains to show that this is the trace of the torsion, which follows since the torsion has components T^i_{jk} = Γ^i_{jk} − Γ^i_{kj}. This shows that one way to interpret the torsion of a connection is as a measure of how much the geodesic spray fails to preserve volume.

The pair (P, {·,·}) is called a Poisson manifold if the bracket is R-bilinear, anticommutative, satisfies Jacobi's identity, and satisfies the Leibniz rule.
Notice that, unlike symplectic manifolds, the bracket is allowed to be degenerate. The degeneracy of the bracket is what inhibits the creation of a distinguished volume form.
In order to study volume preservation in Poisson manifolds, we look at the "modular vector field" [28].
Definition 7.2. Let µ ∈ Ω^{dim P}(P) be a volume form. Define the derivation M_µ : C^∞(P) → C^∞(P) via M_µ(h) = div_µ(X_h). As M_µ is a derivation, it is a vector field, called the modular vector field. If the modular field is Hamiltonian, the Poisson manifold is said to be unimodular.

Proof. Suppose that there exists a density f ∈ C^∞(P) such that all Hamiltonian flows preserve fµ. This tells us that div_µ(X_h) = −d(ln f)(X_h) = {h, ln f}, and therefore the modular field is Hamiltonian. The opposite direction follows similarly.

Proposition 9. The dual space g^* is a Poisson manifold with the Lie-Poisson bracket, where g is identified with g^{**}.
For a Hamiltonian function h : g^* → R, the equations of motion (the Lie-Poisson equations) are given by dp/dt = ad^*_{dh} p.
It turns out that volume preservation for the Lie-Poisson equations can be understood through the algebraic properties of g.

Proof. We first show that the modular vector field is the constant vector field tr ad ∈ g^*. Choose a basis {e^k} of g^*, write p = p_k e^k, and let µ = dp_1 ∧ … ∧ dp_n be our volume form. For a function f : g^* → R, one computes the divergence of its Hamiltonian vector field X_f; the second term vanishes because mixed partials are equal and c^k_{ij} = −c^k_{ji}. We are left with div_µ(X_f) = tr ad(df).
It remains to show that if tr ad(df) = {h, f}_λ for all f, then tr ad ≡ 0. Assume tr ad = ν ∈ g^* with ν ≠ 0. The Lie-Poisson bracket vanishes at the origin, so {h, f}_λ(0) = 0 for every f; choosing f such that ⟨ν, df(0)⟩ ≠ 0 then gives a contradiction.
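In examples, tr ad can be read off directly from the structure constants, (tr ad)_i = c^k_{ik} (sum over k). The sketch below (illustrative code, basis conventions ours) confirms that se(2) is unimodular, as used for the Chaplygin sleigh in §7.4, while the affine algebra of the line is not:

```python
# tr(ad) from structure constants c[i][j][k], where [e_i, e_j] = sum_k c[i][j][k] e_k.
import numpy as np

def tr_ad(c):
    """Return the covector (tr ad)_i = sum_k c^k_{ik}."""
    n = c.shape[0]
    return np.array([sum(c[i, k, k] for k in range(n)) for i in range(n)])

# se(2): e1, e2 translations, e3 the rotation; [e3, e1] = e2, [e3, e2] = -e1.
se2 = np.zeros((3, 3, 3))
se2[2, 0, 1], se2[0, 2, 1] = 1.0, -1.0
se2[2, 1, 0], se2[1, 2, 0] = -1.0, 1.0

# aff(1): [e1, e2] = e1 (the standard non-unimodular example).
aff1 = np.zeros((2, 2, 2))
aff1[0, 1, 0], aff1[1, 0, 0] = 1.0, -1.0

print(tr_ad(se2))    # [0. 0. 0.]  -> unimodular
print(tr_ad(aff1))   # [ 0. -1.]   -> not unimodular
```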
Consider now a left-invariant natural Hamiltonian on g^* with inertia tensor I_{ij}, assumed positive-definite and symmetric, together with a constraint distribution (a subspace) D ⊂ g. The equations of motion have the form dp/dt = ad^*_{dh} p + λ_α η^α. The multipliers can be solved for explicitly (in a manner reminiscent of (4)), yielding equation (16), where W_α ∈ g are related to η^α ∈ g^* by the fiber derivative (as h is assumed to be natural) and m_{αβ} = η^α(W_β). For the divergence calculation we already know the component corresponding to ad^*_{dh} p, so we focus on the reaction forces. In coordinates, calling Y_{αβ} := [W_α, W_β] and noting that I : g → g^*, we have the following result. Notice that m^{αβ} Y_{αβ} = 0, as Y_{αβ} is skew.
Theorem 7.5. The divergence of (16) is given by (17). Moreover, volume is preserved if and only if ϑ := tr ad + m^{αβ} · ad^*_{W_β} η^α ∈ D^0.

A corollary of this is the well-known result of Kozlov [15] (see also [12] and [30]).

Corollary 1 ([15]). Let G be compact, κ : g × g → R the Killing form, and κ^♯ : g^* → g the induced isomorphism. If there is a single constraint η ∈ g^*, then there exists an invariant volume if and only if κ^♯η is an eigenvector of ad_{I^{−1}η}.
Proof. As G is compact, g is unimodular. Additionally, as there is only a single constraint, the divergence (17) reduces to a single term, and there exists an invariant volume if and only if ad^*_{I^{−1}η} η ∈ D^0 = R·η. Using the fact that the Killing form is associative, ad^*_{I^{−1}η} η = a·η if and only if [I^{−1}η, κ^♯η] = a·κ^♯η, i.e. κ^♯η is an eigenvector of ad_{I^{−1}η}.
In the theory of spinning tops, a totally symmetric top corresponds to a bi-invariant metric on g. When this happens, the Euler-Poincaré-Suslov equations are volume preserving independently of the choice of D.
Corollary 3. Suppose that I is bi-invariant. Then for any subspace, D ⊂ g, there exists an invariant volume.
Proof. A theorem of Milnor (cf. Lemma 7.5 in [21]) states that Lie algebras admitting a bi-invariant metric are unimodular. Additionally, as the metric is bi-invariant, it is associative. The divergence then reduces to a term proportional to m^{αβ}·[W_β, W_α], which vanishes by skew-symmetry.

7.4. Example: The Chaplygin Sleigh. Let us revisit the Chaplygin sleigh as an Euler-Poincaré-Suslov system, rather than as a general nonholonomic system as earlier, to compare results. Recall that the configuration space for this system is G = SE(2). The constraint ẏ cos θ − ẋ sin θ = 0 is left-invariant, i.e. D = ker e^*_3. Notice that we cannot utilize Corollary 1, as SE(2) is not compact (but it is unimodular). To compute the equations of motion, we translate the Lagrangian to se(2) and obtain
\[
\ell = \tfrac{1}{2}\big(mu^2 + mv^2 + (I + ma^2)\,\omega^2\big) - ma\,\omega v,
\]
which determines the moment of inertia tensor. The constraint (and its dual vector) follow, and computing the divergence shows that it does not vanish. Therefore no invariant volumes exist (which is compatible with the observation in §5.3). Note also that the divergence here agrees with the divergence in (11).

8. Examples. We end this work by applying Theorem 1.1 to various nonholonomic systems. The idea is to compute ϑ_C for the examples below to determine whether or not an invariant volume exists. Each example comes equipped with a Lagrangian L and a collection of constraining 1-forms η^α. Recall that ϑ_C need not itself be exact to guarantee the existence of an invariant volume, merely that there exists a collection of smooth f_α such that ϑ_C + f_α η^α is exact. These examples are taken from [3] and [23].
8.1. The Vertical Rolling Disk. The first example we will consider is that of the vertical rolling disk. The Lagrangian consists only of the kinetic energy, with constraints η^1 = dx − R(cos ϕ) dθ and η^2 = dy − R(sin ϕ) dθ. Here m is the mass of the disk, I is the moment of inertia of the disk about the axis perpendicular to the plane of the disk, J is the moment of inertia about an axis in the plane of the disk, and R is the radius of the disk. Computing the corresponding vector fields and the constraint mass matrix, and evaluating the four Lie derivatives L_{W_α} η^β, leads to ϑ_C = 0. Therefore volume is preserved for the vertical rolling disk.
8.2. The Falling Rolling Disk. The next example is a physical extension of the previous one, in which the disk is now allowed to tilt. The Lagrangian is expressed in terms of the quasi-velocities ξ = ẋ cos ϕ + ẏ sin ϕ + Rψ̇ and η = −ẋ sin ϕ + ẏ cos ϕ.
The constraints are η^1 = cos ϕ · dx + sin ϕ · dy + R · dψ and η^2 = −sin ϕ · dx + cos ϕ · dy. Here m, R, I, and J are all the same as in the vertical rolling disk. As the disk is now allowed to fall, a potential energy term is added, where g is the acceleration due to gravity. The constraint mass matrix for these constraints is diagonal, so we only need two of the Lie derivatives, L_{W_1} η^1 and L_{W_2} η^2. The corresponding density form, although nonzero, is exact. Therefore volume is preserved for the falling disk, with density determined up to a constant K by exponentiating the integral of the density form.
8.3. The Rolling Ball. Here r is the radius of the ball and k contains the inertial terms (cf. §3.2.2 in [23]). The constraint mass matrix for this system is diagonal, i.e. the constraints are "orthogonal." In this case we only need to calculate two Lie derivatives, and these are given by L_{W_1} η^1 = L_{W_2} η^2 = 0.
Therefore ϑ C = 0 and volume is preserved for the rolling ball.
8.4. The Heisenberg System. The next example is the Heisenberg system, which is associated with the homonymous Lie algebra. The Lagrangian is the standard kinetic energy on R^3, with the constraint η = y dx − x dy − dz.
The corresponding vector field and density form can be computed directly; the result shows that volume is preserved for the Heisenberg system. For the Chaplygin sphere, the form ϑ_C is closed because ∂A/∂ψ = ∂B/∂θ.
Therefore, volume is preserved for the Chaplygin Sphere.
8.7. Chaplygin Sleigh with an Oscillator. The Lagrangian is given by
\[
L = \frac{m}{2}\Big(\dot{x}^2 + \dot{y}^2 + \dot{r}^2 + r^2\dot{\theta}^2 + 2\dot{r}\,(\dot{x}\cos\theta + \dot{y}\sin\theta) + 2r\dot{\theta}\,(\dot{y}\cos\theta - \dot{x}\sin\theta)\Big),
\]
with constraint η = cos θ · dy − sin θ · dx.

Remark 10. Notice that potential terms do not enter into the computation at all; the only data we need are the constraints and the metric.

The relevant coefficient of the density form is not constant. Integrating the shape component gives
\[
\delta(r) = 2\arctan\!\left(\frac{m^2 r^2}{2I(m + M) + (2Mm + m^2)\,r^2}\right);
\]
however, by the same reasoning as for the Chaplygin sleigh above, ϑ_C cannot be made exact. Therefore there does not exist an invariant volume with density depending only on the configuration variables for the Chaplygin sleigh with an oscillator.
When dim Q = 2, any single constraint is automatically holonomic, and as a consequence volume will always be preserved. To make this problem more interesting, let us "thicken" the strip by a coordinate w, so the metric becomes
\[
g_{\mathrm{thick}} = \Big(4v\cos\tfrac{u}{2} + 2v^2\cos^2\tfrac{u}{2} + \tfrac{v^2}{2} + 2\Big)\,du^2 + 2\,dv^2 + dw^2.
\]
For the sake of this example, let us impose the nonholonomic constraint η = dv + sin(u) · dw, with corresponding vector field W. The density form is then
\[
\vartheta_{C} = \frac{\sin(u)\cos(u)}{1 + \sin^2(u)}\,du,
\]
which is exact. Therefore the exponential of its integral is an invariant density, given by ρ = (1 + sin^2(u))^{1/2}.

| 2020-09-25T06:13:59.366Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "07f3bd33a9eb3984fe529825898d70adb83e728d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d60c3d740664762a1a2f367c59eace42a264ddaa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
209538967 | pes2o/s2orc | v3-fos-license | Diagnostic value of plasma HSP90α levels for detection of hepatocellular carcinoma
Background Hepatocellular carcinoma (HCC) is a major health problem worldwide. However, the popular tumor marker, AFP, lacks sensitivity although its specificity is high. Tissue biopsy is an invasive operation and may increase the risk of needle-track metastases. Heat shock protein 90 (HSP90) is a potential biomarker for tumor diagnosis and prognosis. This study aims to determine whether levels of plasma HSP90α in HCC patients can be used as a cost-effective and simple test for the initial diagnosis of the disease. Methods Plasma samples were collected from 659 HCC patients, 114 secondary hepatic carcinoma (SHC) patients, 28 hepatic hemangioma patients and 230 healthy donors. The levels of HSP90α were measured by ELISA. Results The levels of plasma HSP90α in HCC patients were significantly higher than in healthy donors and in patients with hepatic hemangioma or SHC (144.08 ± 4.98, 46.81 ± 1.11, 61.56 ± 8.20 and 111.96 ± 10.08 ng/mL, respectively; p < 0.05 in all cases). The levels were associated with age (p = 0.001), BCLC stage (p < 0.001), levels of AFP (p < 0.001), tumor size (p < 0.001), tumor number (p < 0.001), PVTT (p < 0.001), EHM (p < 0.001) and Child-Pugh stage in the HCC cohort. In addition, the levels of plasma HSP90α showed an upward trend along with the progression of the BCLC stage. ROC curve analysis showed that compared to AFP (AUC 0.922, 95%CI 0.902–0.938) or HSP90α (AUC 0.836, 95%CI 0.810–0.860), the combination of HSP90α and AFP (AUC0.943, 95%CI 0.925–0.957) significantly improved the diagnostic efficiency for HCC patients. Conclusion The results suggest that plasma Hsp90 α levels can be used as an initial diagnosis for patients with HCC in both rural and cosmopolitan settings.
Background
Liver cancer comprises mostly primary and secondary liver cancers. Hepatocellular carcinoma (HCC), the major type of primary liver cancer, is the fifth most common tumor worldwide and the third leading cause of cancer mortality, responsible for 745,500 cancer deaths annually [1]. Moreover, more than 50% of HCC-related deaths occur in China [2]. In spite of great progress in HCC therapy in recent years, the prognosis remains poor due to the high incidence of recurrence and metastasis, and the 5-year survival rate has remained below 12% [3]. Most HCC patients are diagnosed at an advanced stage because of the lack of typical clinical manifestations and of awareness of disease screening. Currently, HCC screening is based on measurement of serum alpha-fetoprotein (AFP) together with imaging and histology [4,5]. However, conventional liver imaging for HCC does not perform well on tumors smaller than 1 cm, and AFP lacks adequate sensitivity and specificity in patients with atypical AFP levels. In addition, although tissue biopsy can provide an accurate diagnosis, it is an invasive procedure and may increase the risk of needle-track metastases [6,7]. Therefore, non-invasive and more effective biomarkers for HCC are urgently needed.
Heat shock protein 90 (HSP90) is an evolutionarily highly conserved intracellular molecular chaperone that is usually induced in response to cellular stress and assists the maturation of an array of client proteins. The HSP90 family is composed of four major members: HSP90α, HSP90β, Grp94 and TRAP1. HSP90α and HSP90β are located mainly in the cytoplasm, whereas the other two proteins are located mainly in the endoplasmic reticulum and the mitochondrial matrix, respectively. Due to its key roles in modulating signal transduction, especially in tumor cells, HSP90α has become a research hotspot. A large-sample study has shown that plasma levels of HSP90α in lung cancer patients are significantly higher than in healthy controls [8]. In addition, a recent study showed that plasma HSP90α can discriminate patients with liver cancer from non-liver cancer controls [9]; however, plasma HSP90α levels were not evaluated in benign liver tumor or secondary hepatic carcinoma (SHC) patients in that study. Establishing a stable and reliable biomarker requires verification in large and diverse cohorts. Therefore, to establish whether plasma HSP90α levels can be used as a clinical biomarker for HCC, in the current study we measured plasma HSP90α levels in HCC and SHC patients as well as in a benign liver tumor cohort.
Patients
From January 1, 2018 to February 28, 2019, a total of 801 liver disease patients in the Hepatobiliary Surgery Department of the Affiliated Tumor Hospital of Guangxi Medical University were enrolled in this study. The subjects included were 659 HCC patients, 114 SHC patients and 28 patients with hepatic hemangioma (HH). HCC was diagnosed according to the American Association for the Study of Liver Diseases (AASLD) guidelines. All patients were scanned by means of magnetic resonance imaging, abdominal B ultrasound and computed tomography, and were examined for clinical symptoms and signs of disease, together with measurement of serum AFP levels. In addition, none of the patients were subjected to any antitumor treatment or surgical resection at the time of diagnosis. The clinical features were obtained from the electronic records. Tumor stage was determined according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The control group included 230 healthy donors (HD). All controls and patients provided written informed consent. This study was approved by the Local Ethics Committee of the Affiliated Tumor Hospital of Guangxi Medical University, and it was conducted in accordance with the Declaration of Helsinki and current hospital ethical guidelines.
Assessment of HSP90α and AFP levels
The levels of plasma HSP90α were measured using an ELISA kit for HSP90α protein (Yantai Protgen Biotechnology Development Co., Ltd., Yantai, China). Fresh blood samples (2 mL) with EDTA-K2 anticoagulant were collected from patients and controls; all samples were collected prior to anti-cancer treatment or surgery, and all procedures followed the manufacturer's instructions. The kits were first pre-incubated at 37°C for 30 min. Samples were prepared for ELISA analysis as follows: (a) the fresh blood samples were centrifuged at 3000 rpm for 10 min; (b) the plasma was removed and diluted 20-fold with the diluent solution provided. The standards, quality controls and prepared samples (50 µL each) were then added to 96-well plates, followed by 50 µL of anti-HSP90α HRP-conjugated antibody, and incubated at 37°C for 1 h after gentle shaking. The plates were washed six times with the wash buffer provided, followed by the chromogenic reaction: 50 µL peroxide and 50 µL 3,3′,5,5′-tetramethylbenzidine, incubation at 37°C for 20 min, and termination with an acid stop buffer. Finally, the optical density was measured with a spectrophotometer at 450 nm (detection wavelength) with 620 nm as the reference wavelength. The concentration of HSP90α protein in each sample was calculated from a standard curve of optical density values. The levels of serum AFP were measured using electro-chemiluminescence immunoassay kits (Cobas, Roche Diagnostics, Germany) according to the manufacturer's instructions. Serum samples were obtained in a similar way to the plasma samples, except that blood was initially placed in tubes without anticoagulant and then treated as described above.
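For illustration, reading a concentration off a standard curve is often done with a four-parameter logistic (4PL) fit like the sketch below; this is a generic approach with made-up standards and optical densities, not the kit's specified calibration routine.

```python
# Generic 4PL standard-curve fit and back-calculation of a sample concentration.
# All numbers are invented for illustration; they are not study data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """a: lower asymptote, b: slope, c: inflection point, d: upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

standards = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0])   # ng/mL
od = np.array([0.08, 0.15, 0.28, 0.50, 0.85, 1.30, 1.75])             # OD450 - OD620

params, _ = curve_fit(four_pl, standards, od, p0=[0.05, 1.0, 50.0, 2.5], maxfev=10000)

def od_to_conc(y, a, b, c, d):
    """Invert the fitted 4PL curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

sample_od = 0.60
concentration = od_to_conc(sample_od, *params) * 20   # account for the 20-fold dilution
print(round(float(concentration), 1), "ng/mL")
```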
Statistical analysis
All quantitative data are presented as the mean ± SE. The HH patients were analyzed as the benign liver tumor group. One-way ANOVA was performed using SPSS 17.0 software (SPSS, Chicago, IL, USA). Scatter plots were generated using GraphPad Prism 7 software (GraphPad Software, Inc., San Diego, CA, USA). Pairwise comparison of ROC curves was performed using MedCalc version 18.11.3. The optimal cut-off value was defined as the point maximizing Youden's index (Youden's index = sensitivity + specificity − 1) [10].
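The cut-off and marker-combination analysis can be sketched in code as follows; this is an illustrative workflow on synthetic data (the simulated distributions and the use of logistic regression to combine markers are our assumptions, not the study's actual pipeline, and the AUC is evaluated in-sample for brevity).

```python
# Illustrative ROC / Youden-index / marker-combination workflow on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n_case, n_ctrl = 300, 300
y = np.r_[np.ones(n_case), np.zeros(n_ctrl)]           # 1 = HCC, 0 = healthy donor

# Two log-normal "markers" shifted upward in cases (made-up distributions).
hsp90a = np.r_[rng.lognormal(4.8, 0.7, n_case), rng.lognormal(3.8, 0.4, n_ctrl)]
afp    = np.r_[rng.lognormal(4.0, 2.0, n_case), rng.lognormal(0.5, 1.0, n_ctrl)]

# Single-marker AUC and Youden-optimal cut-off.
fpr, tpr, thresholds = roc_curve(y, hsp90a)
best = np.argmax(tpr - fpr)                             # maximizes sensitivity + specificity - 1
print("HSP90a AUC %.3f, cut-off %.1f ng/mL" % (roc_auc_score(y, hsp90a), thresholds[best]))

# Combine the two markers with a logistic model and score the combination.
X = np.log(np.c_[hsp90a, afp])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("Combined AUC %.3f" % roc_auc_score(y, combined))
```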
Results
This study included a total of 1031 subjects: 659 HCC patients, 114 SHC patients, 28 HH patients and 230 healthy controls (raw data for each cohort are attached in Additional files 1, 2, 3 and 4). The median ages in these groups were 51, 61, 47 and 37 years, respectively.
Comparison of HSP90α levels between groups
The plasma levels of HSP90α in the different groups of patients and controls are shown in Fig. 1. The levels of plasma HSP90α in HD, HH, SHC and HCC cohorts were 46.81 ± 1.11, 61.56 ± 8.20, 111.96 ± 10.08 and 144.08 ± 4.98 ng/mL, respectively. Statistical analysis showed that HSP90α was at significantly higher levels in HH, SHC and HCC patient cohorts when compared to the HD cohort (p < 0.001, p < 0.001, p < 0.001, respectively). In pairwise comparisons, the plasma HSP90α showed significantly higher levels in HCC patients when compared to the HH and SHC patients groups (p < 0.001 and p = 0.011, respectively).
Associations between HSP90α levels and clinical characteristics of HCC patients
The relationships between HSP90α levels and clinical characteristics in HCC patients are shown in Table 1. The levels of HSP90α showed no statistically significant association with gender, liver cirrhosis or HBV infection status (p = 0.419, p = 0.099 and p = 0.605, respectively). Notably, plasma HSP90α showed remarkably higher levels in the younger group of HCC patients (those younger than 50 years).
The diagnostic efficiency of HSP90α and AFP for determination of hepatic malignancy
The ROC curve analysis was conducted to assess the diagnostic efficiency of HSP90α and AFP in determining hepatic malignancy and the results are shown in Fig. 3. The analysis of hepatic malignancy was performed after dividing the patients into two groups: an HCC and a SHC cohort. The diagnostic efficiency of HSP90α and AFP showed a better performance in the HCC cohort (AUC 0.836, sensitivity 67.07%, specificity 90.43%; AUC 0.922, sensitivity 81.18%, specificity 93.91%; respectively, Fig. 3a, Table 2) than in the SHC cohort (AUC 0.735, sensitivity 56.14%, specificity 86.96%; AUC 0.597, sensitivity 56.14%, specificity 62.61%, respectively; Fig. 3b, Table 2) when compared to healthy donors. In addition, the combination of HSP90α and AFP significantly improved the diagnostic ability of HCC from healthy donors (AUC 0.943, sensitivity 85.89%, specificity 98.26%, Fig.3a, Table 2). However, when we focus on the diagnostic ability of HCC from SHC, the serum AFP (AUC = 0.889, sensitivity 76.9%, specificity 92.1%) was better than plasma HSP90α (AUC = 0.601, sensitivity 63%, specificity 54.4%) for distinguishing the HCC patients from those with SHC (Fig. 3c). Subsequently a subgroup analysis was conducted to evaluate the plasma HSP90α initial diagnosis value for early HCC patients and the results demonstrated that plasma HSP90α had a poor performance for the initial diagnosis of early HCC when patients had tumors of less than 2 cm (AUC = 0.635, Fig. 4a) or the early stage of HCC as characterized by patients at BCLC-A stage (AUC = 0.714, Fig. 5a).
Discussion
HCC is a major health problem worldwide, with more than 700,000 cases diagnosed annually, a 1-year survival rate of 47% and a 5-year survival rate of 10% [1,11]; the decrease in survival after the first year is striking. Although risk factors (such as liver cirrhosis) are well recognized, HCC remains the third leading cause of tumor-related mortality. Since there are no obvious symptoms at the early stages, early diagnosis in high-risk groups remains a major challenge. The biomarker AFP has been used widely over the last 40 years; however, its sensitivity and specificity for the diagnosis of HCC are limited [12]. The identification of new tumor biomarkers could therefore be pivotal for improving patient diagnosis and survival. HSP90α is an abundant intracellular chaperone that has also been shown to be located in the extracellular space [13,14]. Moreover, increasing evidence has demonstrated that HSP90α modulates the conformation, stability and function of oncogenic proteins and is involved in cell proliferation, apoptosis, cell cycle progression, migration and invasion [15-20]. In addition, previous studies have shown that high levels of HSP90α protein are associated with the occurrence of solid malignant tumors [9,19,21].
In the present study, the levels of plasma HSP90α were significantly higher in hepatic malignancy than in the healthy donor cohort or in patients with benign liver tumors, in accordance with previous studies [9,19]. Beyond that, in this study plasma HSP90α levels were also assessed in a secondary hepatic carcinoma cohort, and the results showed that plasma HSP90α levels were significantly higher in both the HCC and the SHC cohorts compared with patients with hemangioma. In addition, plasma HSP90α levels were significantly higher in the HCC cohort than in the SHC cohort. We therefore speculate that plasma HSP90α might be a potential cancer-specific biomarker for the diagnosis of hepatic malignancy, with the ability to distinguish between primary and secondary hepatic cancer. The purpose of this study was to investigate the diagnostic value of plasma HSP90α in HCC patients.

Fig. 4 ROC curve analysis of the diagnostic efficiency of HSP90α and AFP by tumor size in HCC patients. a Diagnostic ability to distinguish HCC patients with tumor size less than 2 cm from healthy donors. b Diagnostic ability to distinguish HCC patients with tumor size 2-4.99 cm from healthy donors. c Diagnostic ability to distinguish HCC patients with tumor size greater than or equal to 5 cm from healthy donors.

Compared to previous studies, an interesting finding here is that plasma HSP90α levels were associated with age, with significantly higher levels in HCC patients younger than 50 years. The reason for the differential levels of plasma HSP90α in different age groups is still unclear, and more studies are needed to confirm this result. Moreover, the relationships between plasma HSP90α levels and tumor size, tumor number, EHM, PVTT and Child-Pugh class were analyzed in the current study, and the results demonstrated that EHM, PVTT, greater tumor size and multiple tumors were associated with high levels of plasma HSP90α. We therefore speculate that plasma HSP90α levels might be related to prognosis. In addition, plasma HSP90α levels were significantly higher in HCC patients at advanced stages of the disease than in patients at an early stage, consistent with recently published studies [8,9,21]. Moreover, HSP90α levels showed an increasing trend with progression of the BCLC stage, so plasma HSP90α levels may be informative for determining disease stage. Another interesting finding was that plasma HSP90α levels were significantly higher in patients with AFP levels greater than or equal to 400 ng/mL than in patients with AFP levels below 400 ng/mL. However, HSP90α protein levels assessed by immunohistochemistry in HCC tissue samples showed no association with serum AFP levels in a previous study [21]. Hence, quantitative measurement of HSP90α levels in peripheral blood plasma appears to be a better means of assessing expression than qualitative immunohistochemical assessment of HSP90α in tissue samples.
The results of the ROC curve analysis showed that, in comparison with AFP (AUC 0.922, sensitivity 81.18%, specificity 93.91%, cut-off 5.38 ng/mL), plasma HSP90α levels (AUC 0.836, sensitivity 67.07%, specificity 90.43%, cut-off 69.10 ng/mL) performed less well in discriminating HCC from healthy donors. In addition, a further subgroup analysis showed that plasma HSP90α had limited diagnostic efficiency for early HCC patients with tumors of less than 2 cm or at an early BCLC stage (i.e., BCLC-A). This result is inconsistent with a recent study in which plasma HSP90α (AUC 0.965, sensitivity 93.3%, specificity 90.3%) significantly outperformed AFP (AUC 0.887, sensitivity 61.1%, specificity 96.3%) in distinguishing HCC or early HCC from non-liver cancer control patients [9].

Fig. 5 ROC curve analysis of the diagnostic efficiency of HSP90α and AFP by BCLC stage in HCC patients. a Diagnostic ability to distinguish HCC patients with BCLC-A stage from healthy donors. b Diagnostic ability to distinguish HCC patients with BCLC-B stage from healthy donors. c Diagnostic ability to distinguish HCC patients with BCLC-C stage from healthy donors. d Diagnostic ability to distinguish HCC patients with BCLC-D stage from healthy donors.
However, another study reported a diagnostic efficiency of AFP for HCC of AUC 0.67, with a sensitivity of 47.8% and a specificity of 93.2% [22]. A possible reason for this discrepancy in ROC results could be the difference in control subjects between the studies. Even so, the combination of HSP90α and AFP significantly improved the diagnostic efficiency for HCC compared to either single marker, both in the current study (AUC 0.943, sensitivity 85.89%, specificity 98.26%) and in the previous study (AUC 0.977, sensitivity 93.7%, specificity 94.4%) [9]. AFP has a high specificity for the diagnosis of HCC, but this is coupled with poor sensitivity. Considering that a single protein biomarker is limited in both sensitivity and specificity, combining HSP90α, AFP and potentially other clinical indices or biomarkers might improve diagnostic efficiency and staging determination for HCC in the future.

| 2020-01-03T14:35:34.256Z | 2020-01-02T00:00:00.000 | {
"year": 2020,
"sha1": "6ec5ca3c6914c89dd2f59c1e1aeeb1f3e891da3f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12885-019-6489-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4325c743e854ce446a072088a29faf4869782d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |